I’m trying to analyze the time complexities of the universal and existential quantifiers. I need help figuring out whether I’m on the right path or making mistakes; here’s what I’ve produced so far:

Let $D$ be a finite set of natural numbers drawn at random (so each element is even with probability $1/2$). Let’s define two *Turing machines*, $A$ and $E$, such that $A$ implements the universal quantifier and $E$ the existential one:

$A$ accepts the language $L_A = \{D \mid \forall d \in D,\ d \equiv 0 \pmod{2}\}$ by implementing the function $f: D \rightarrow \{0,1\}$ given by $f(D) = \lnot(d_1 \bmod 2) \land \lnot(d_2 \bmod 2) \land \lnot(d_3 \bmod 2) \land \dots \land \lnot(d_n \bmod 2)$

$E$ accepts the language $L_E = \{D \mid \exists d \in D,\ d \equiv 0 \pmod{2}\}$ by implementing the function $f: D \rightarrow \{0,1\}$ given by $f(D) = \lnot(d_1 \bmod 2) \lor \lnot(d_2 \bmod 2) \lor \lnot(d_3 \bmod 2) \lor \dots \lor \lnot(d_n \bmod 2)$
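As a sanity check on my definitions, the two acceptance functions can be sketched in Python (this is of course a sketch of the Boolean formulas, not of actual Turing machines; I take the input as a finite list of naturals):

```python
def a_accepts(D):
    """Universal machine A: accept iff every element of D is even."""
    return all(d % 2 == 0 for d in D)

def e_accepts(D):
    """Existential machine E: accept iff some element of D is even."""
    return any(d % 2 == 0 for d in D)
```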

We want to show that the $\forall$ quantifier has a greater lower bound than $\exists$.

It’s easy to show that running $E$ on input $D$ has an upper bound of $O(|D|)$: in the worst case I must scan the entire search space (e.g., when only the last element of $D$ is even). The lower bound for the same TM is instead constant, $\Omega(1)$ (when the first element I examine is even).

As for machine $A$, things are a bit different: here the upper bound for accepting $L_A$ matches the lower bound, $\Theta(|D|)$, because I must check *ALL* the clauses before being able to accept with absolute certainty (rejection, by contrast, can happen as soon as one odd element is found). Of course there is always the possibility of running a probabilistic algorithm but, also in this case, the “existential” algorithm is much more reliable than the “universal” one.
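To illustrate the asymmetry, here is a small instrumented sketch (my own illustration, not part of the argument) that counts how many elements each machine inspects before halting:

```python
def e_steps(D):
    """Existential scan: stop at the first even element."""
    steps = 0
    for d in D:
        steps += 1
        if d % 2 == 0:
            return True, steps   # early accept
    return False, steps

def a_steps(D):
    """Universal scan: stop at the first odd element."""
    steps = 0
    for d in D:
        steps += 1
        if d % 2 != 0:
            return False, steps  # early reject
    return True, steps           # accepting required reading all of D
```

On an all-even input, $E$ halts after a single step while $A$ must read the whole input: `e_steps([2]*5)` gives `(True, 1)` whereas `a_steps([2]*5)` gives `(True, 5)`.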

A possible probabilistic algorithm for recognizing $L_A$ could be the following: I start verifying the clauses and, once I see that $(n-1)$ clauses are satisfied, I accept. This is obviously a trivial version (having reached $(n-1)$ clauses I might as well check all $n$, since it costs essentially the same time and I would then be sure whether to accept or reject), and its acceptance is correct in half the cases. If instead I stop at $(n-2)$ clauses and accept, the acceptance is correct in a quarter of the cases. In general, accepting while $k$ clauses are still unverified is correct *with probability* $1/2^k$. This algorithm is extremely unreliable.
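The $1/2^k$ figure can be checked exhaustively under my assumption that each element is even or odd independently with probability $1/2$ (enumerating all parity vectors instead of sampling):

```python
from itertools import product
from fractions import Fraction

def accept_early_correctness(n, k):
    """Among inputs the early-accept algorithm accepts (first n-k clauses
    verified even), return the fraction on which acceptance is correct,
    i.e. the remaining k elements are also even."""
    accepted = correct = 0
    for parities in product([0, 1], repeat=n):  # 0 = even, 1 = odd
        if all(p == 0 for p in parities[:n - k]):   # algorithm accepts
            accepted += 1
            if all(p == 0 for p in parities):       # input truly in L_A
                correct += 1
    return Fraction(correct, accepted)
```

For example `accept_early_correctness(5, 2)` evaluates to `Fraction(1, 4)`, matching the "quarter of the cases" claim above.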

For the language $L_E$, instead, the situation is more favorable: I can accept directly (without even reading the input) and be correct with high probability, $1 - 1/2^n$.
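The corresponding check for $L_E$, again by exhaustive enumeration of parity vectors (under the same independent fair-parity assumption):

```python
from itertools import product
from fractions import Fraction

def blind_accept_correctness(n):
    """Fraction of parity vectors containing at least one even element,
    i.e. the probability that blindly accepting L_E is correct."""
    hits = sum(1 for parities in product([0, 1], repeat=n)  # 0 = even
               if any(p == 0 for p in parities))
    return Fraction(hits, 2 ** n)
```

For $n = 4$ this gives $15/16 = 1 - 1/2^4$, as claimed.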

What I wrote above, although I believe it is true, does not have the value of a proof. I think, however, that it is of fundamental importance to discriminate the complexity of the two quantifiers; think for example of *3SAT* and *3TAUT*: the first language is *NP-complete* while the second is *coNP-complete*, and the only difference between the two is the change of quantifier, from existential to universal.

Finally, my question: have there been theoretical efforts to prove the different complexity of the two quantifiers? Is it even possible to do so with our current knowledge?