Diagonalization in oracle separation between QMA and PP

Reposting from cstheorystackexchange for more visibility:

Diagonalization is a very common technique for finding oracle separations. For example, it can be used to separate $\mathcal{P}$ and $\mathcal{NP}$: the essential idea is to construct an oracle in stages, diagonalizing every $\mathcal{P}$ machine against that oracle. Similarly, diagonalization arguments can be used to diagonalize a $\mathcal{BQP}$ machine against an oracle like the Grover oracle, and achieve a separation between $\mathcal{BQP}$ and $\mathcal{QMA}$. I was wondering whether I can use diagonalization (against $\mathcal{QMA}$ machines) to separate classes like $\mathcal{QMA}$ and $\mathcal{PP}$, or $\mathcal{QMA}$ and $\mathcal{AWPP}$. Is there any literature on these types of separations? A subtlety that I note is that for the diagonalization argument against $\mathcal{BQP}$ machines, the essential idea is that the quantum machine cannot “search for a needle in a haystack”: if there are exponentially many oracle strings to keep track of, a quantum machine is almost blind to a change in any single one of them. However, if you have a prover as well as a quantum machine, the prover can just “give you the needle”; meaning, the prover can just send you the right state to query. Can diagonalization still work in these settings?

As one of the answers suggested, a way to approach an oracle separation between $\mathcal{QMA}$ and $\mathcal{PP}$ is to consider, for any language $A$, the language $$L = \{1^{n} : |A \cap \{0, 1\}^{n}| > \tfrac{1}{2} \cdot 2^{n} \}.$$

Clearly, $L \in \mathcal{PP}^{A}$, and it is reasonable to think that $L \notin \mathcal{QMA}^{A}$ and that we can use diagonalization to show it. However, the analysis of the latter is getting complicated for me, as I do not know how to mathematically model the prover’s action: the prover can send any quantum state, and can potentially send a state unrelated to the previous one when the oracle is changed during diagonalization. I am looking for any reference or any explicit proof of this non-containment.
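(For concreteness, the containment $L \in \mathcal{PP}^{A}$ is witnessed by the standard one-query randomized machine that, on input $1^{n}$, samples $x \in \{0,1\}^{n}$ uniformly and accepts iff $x \in A$:

$$\Pr[\text{accept}] = \frac{|A \cap \{0,1\}^{n}|}{2^{n}} > \frac{1}{2} \iff 1^{n} \in L.)$$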

Doubt regarding Cantor’s diagonalization argument

I understand the overall argument but have a problem regarding one caveat mentioned in my book.

My book says that some real numbers, like 2.000… and 1.999…, have different decimal representations but are actually the same real number. Suppose we have a bijection $f: N \rightarrow R$, where $N$ and $R$ are the sets of natural and real numbers respectively. Now, suppose $f(1) = 1.999…$ and no $x \in N$ exists such that $f(x) = 2.000…$. In such a scenario it is possible that the diagonalization argument ends up constructing the real number 2.000…, which is already in our list, since 1.999… and 2.000… are the same real number. To get around this problem, we never select 0 or 9 when constructing our number.

I don’t understand two things. First, why do $1.999…$ and $2.000…$ represent the same real number? Second, how does never selecting 0 or 9 solve the problem this poses?
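(For what it’s worth, the first point comes down to a standard geometric-series computation:

$$1.999\ldots \;=\; 1 + \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; 1 + 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1 + 1 \;=\; 2.000\ldots)$$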

Does the fixed point lemma / diagonalization require capturing or not?

Peter Smith’s formulation of the diagonalization lemma is essentially as follows, from Theorem 47 of his (fantastic) online book:

If theory T extends Robinson Arithmetic, and P is a one-place open sentence of T’s language (i.e., a well-formed formula in T), then there is a sentence 𝝍 s.t.:

T ⊢ 𝝍 ↔ P(⌜𝝍⌝)

Where the quine corner brackets represent the function that converts a well-formed sentence into a Gödel numeral. He then defines Prf(x,y) as the T-sentence which is true when x is the Gödel number of a well-formed T-proof of the well-formed T-formula of which y is the Gödel number. And as he says on page 83, a well-formed formula in T corresponding to Prf(x,y) does in fact exist.

Then he defines Prov(y) as the provability predicate for T:

Prov(y) ≡ ∃x Prf(x,y).

But then here’s where I’m confused: If Prf(x,y) is a well-formed formula, then ∃x Prf(x,y) should be one as well. And so should its negation: ¬∃x Prf(x,y). And if ¬∃x Prf(x,y) is a well-formed formula, then why wouldn’t the diagonalization lemma apply to it?

Let’s use NProv(y) as shorthand for ¬∃x Prf(x,y). If the diagonalization lemma applies to NProv, then we get:

T ⊢ 𝝍 ↔ NProv(⌜𝝍⌝)

And this trivially leads to inconsistency: We can show that if T ⊢ 𝝍 then T ⊢ ⊥, and likewise if T ⊢ ¬𝝍 then T ⊢ ⊥.
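(Unpacking the first claim, the derivation I have in mind runs: if T ⊢ 𝝍, then some number m codes a T-proof of 𝝍, so T ⊢ Prf(m, ⌜𝝍⌝), since Prf captures proofhood; hence T ⊢ ∃x Prf(x, ⌜𝝍⌝), i.e. T ⊢ ¬NProv(⌜𝝍⌝), and by the fixed point T ⊢ ¬𝝍, giving T ⊢ ⊥.)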

It seems to me that the fix is to edit Smith’s Theorem 47 so that it says:

If theory T extends Robinson Arithmetic, and the numerical property P is captured by a one-place open sentence of T’s language (i.e., a well-formed formula in T), then there is a sentence 𝝍 s.t.:

T ⊢ 𝝍 ↔ P(⌜𝝍⌝)

Because then, although T can express NProv by the wff ¬∃x Prf(x,y), it cannot capture it (meaning that, for each sentence 𝝍: (1) if NProv(⌜𝝍⌝) is true then T ⊢ NProv(⌜𝝍⌝), and (2) if it is false then T ⊢ ¬NProv(⌜𝝍⌝)), and thus diagonalization wouldn’t apply.

But I see this mistake elsewhere as well. So what’s the deal? Can the fixed point lemma / diagonalization lemma be applied to any well-formed formula? And if so, why isn’t NProv a well-formed formula? Or does the numerical property that diagonalization is applied to need to be capturable in T? And if so, why doesn’t Peter Smith’s version state this?

How is diagonalization a valid argument for the undecidability of the halting problem?

All proofs of the undecidability of the halting problem seem to be based, directly or indirectly, on self-reference. For example:

def g():
    # 'halts' is the hypothetical halting decider: halts(f) returns True
    # iff calling f() eventually halts; loop_forever() never returns.
    if halts(g):
        loop_forever()

My question is: how can the input of a TM contain the TM itself and its input?

If the description of the TM includes its states and transitions, then it amounts to more than just the input; so the input of the TM would seemingly have infinite size, because it contains both the TM and the input.
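For intuition, a machine’s description is just a finite string, so nothing stops us from passing it as ordinary input. A minimal Python sketch (with inspect standing in for “the machine’s own description”; the function name g here is illustrative):

import inspect

def g(source):
    # 'source' is a finite string describing a program; passing it as
    # input requires nothing infinite.
    print(len(source), "characters of program text received as input")

# g receives its own (finite) description as its input:
g(inspect.getsource(g))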

Simultaneous diagonalization of the tensor products of Dirac gamma matrices

Let $\gamma_i\ (i=1,2,\ldots,N)$ be the Dirac gamma matrices satisfying the Clifford algebra $$\gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij} I \quad (i,j=1,2,\ldots,N).$$ Then the tensor products $\gamma_i\otimes\gamma_i$ commute with each other: $$[\gamma_i\otimes\gamma_i, \gamma_j\otimes\gamma_j]=0.$$ This means all of the $\gamma_i\otimes\gamma_i$ can be simultaneously diagonalized by some similarity transformation $S$ (probably not unique). I wonder what the characterization of such a diagonalization is: the arrangement of the eigenvalues of $\gamma_i\otimes\gamma_i$ after diagonalization, a computational algorithm for $S$, etc. I find surprisingly little about this elementary construction online; any reference will also be appreciated.
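As a small illustration of one computational route (not necessarily the intended characterization): each $\gamma_i\otimes\gamma_i$ is Hermitian and the family commutes, so diagonalizing a generic real linear combination yields a common eigenbasis. A NumPy sketch for N = 3, using the Pauli matrices as gamma matrices (my choice of representation):

import numpy as np

# Pauli matrices satisfy s_i s_j + s_j s_i = 2 delta_ij I, so they serve
# as gamma matrices for N = 3.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

G = [np.kron(g, g) for g in s]  # the commuting family gamma_i (x) gamma_i

# Diagonalize a random real combination; here the four joint eigenvalue
# tuples are distinct, so a generic combination has simple spectrum and
# its eigenvectors simultaneously diagonalize the whole family.
rng = np.random.default_rng(0)
combo = sum(c * g for c, g in zip(rng.normal(size=3), G))
_, S = np.linalg.eigh(combo)  # columns of S: the common eigenbasis

for i, g in enumerate(G):
    D = S.conj().T @ g @ S
    print(f"eigenvalues of gamma_{i+1} (x) gamma_{i+1}:",
          np.round(np.diag(D).real, 6))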

Proving language K is undecidable using the diagonalization method

I have a problem proving the following properties of the given language K:

$K = \{\langle M \rangle \mid M \text{ accepts } \langle M \rangle\}$

I am trying to prove that the language K is Turing-recognizable but not decidable using the diagonalization method.

What I tried regarding proving K is undecidable using diagonalization:

Suppose there exists a Turing machine H that decides K. Then, for any input $\langle M \rangle$, H accepts if M accepts $\langle M \rangle$ and rejects in any other case. So if D is a Turing machine that negates the diagonal, then we construct it as follows:

  1. Run H on $\langle M \rangle$.
  2. If H accepts, then reject.
  3. If H rejects, then accept.

We deduce that D is a decider because H is a decider. Now consider running D on $\langle D \rangle$: if D accepts $\langle D \rangle$, then H accepts $\langle D \rangle$, which by construction makes D reject $\langle D \rangle$, and that’s a contradiction; the symmetric argument applies if D rejects. So the language K is not decidable.
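As a sanity check on the diagonal idea, here is a finite toy version in Python (functions stand in for machines; the listing and inputs are made up for illustration):

# Given any purported listing of deciders, build D that differs from the
# i-th one on the i-th input -- the essence of negating the diagonal.
listing = [
    lambda s: True,             # "machine" 0 accepts everything
    lambda s: False,            # "machine" 1 rejects everything
    lambda s: len(s) % 2 == 0,  # "machine" 2 accepts even-length inputs
]
inputs = ["m0", "m1", "m2"]     # stand-ins for the machines' descriptions

def D(i):
    # D disagrees with machine i on input i
    return not listing[i](inputs[i])

for i in range(len(listing)):
    assert D(i) != listing[i](inputs[i])
print("D differs from every listed machine on its own description")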

Proving K is Turing-recognizable:

Constructing a recognizer R for K: on input $\langle M \rangle$:

  1. Simulate M on $\langle M \rangle$.
  2. If M accepts, accept; if M rejects, reject.

So if $\langle M \rangle \in K$, then M run on $\langle M \rangle$ halts in an accepting state and R accepts; and if $\langle M \rangle \notin K$, then R either rejects or loops forever, which is allowed for a recognizer.
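A minimal sketch of this recognizer in Python (an ordinary function call stands in for the simulation; the names are illustrative):

def recognize(machine, description):
    # Plays the role of simulating M on <M>. If the simulated machine
    # never halts, neither does the recognizer -- recognizers may loop
    # on non-members.
    return "accept" if machine(description) else "reject"

print(recognize(lambda d: len(d) > 0, "m"))  # accepts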

I would very much appreciate your input and explanations on how to show that it is not decidable using the diagonalization method. I tried to do my best and prove it as well as I could.

Eigenvalues, diagonalization and convergence of matrices

I am trying to wrap my head around some basic results in Linear Algebra.

I am trying to avoid more abstract concepts like Rank–Nullity and stay with simple properties at an introductory level. (I’ve taken more advanced Linear Algebra before, but it’s been a long while; I’m just trying to brush up on some basic facts.)

Suppose I have an $n \times n$ matrix $A$.

Which of these facts are true?

1) $\lim A^k$ exists if and only if all eigenvalues are strictly less than 1.

2) If there are at least 2 eigenvalues that are equal to $1$, then the limit above does not exist.

3) If a matrix has all its eigenvalues less than or equal to one (not necessarily unique), then the limit above exists.

4) If a matrix is diagonalizable and all its eigenvalues are less than or equal to one, then the limit above exists.

I think that statement 4 is the only one that is correct (“easy” to show in an informal way by the fact that $A^k = PJ^kP^{-1}$), but I’ve come across notes in Linear Algebra that state 1) and 2) as facts.
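Before proving anything, these cases can be probed numerically; a small NumPy sketch (the three test matrices are my own picks, and this is a probe, not a proof):

import numpy as np

I2 = np.eye(2)                            # diagonalizable, eigenvalues 1, 1
J  = np.array([[1.0, 1.0], [0.0, 1.0]])   # defective, eigenvalues 1, 1
R  = np.array([[-1.0, 0.0], [0.0, 0.5]])  # diagonalizable, eigenvalues -1, 0.5

for name, A in [("identity", I2), ("Jordan block", J), ("reflection", R)]:
    for k in (1000, 1001):  # compare consecutive powers to spot oscillation
        print(name, f"A^{k} =", np.linalg.matrix_power(A, k).tolist())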

Diagonalization of symbolic matrices

I’m having a bit of trouble diagonalizing a symbolic matrix in Mathematica. Somehow, whenever I input this code:

m = MatrixForm[{{a, b}, {c, d}}]
d = DiagonalMatrix[Eigenvalues[m]]
p = Transpose[Eigenvectors[m]]
p . d . Inverse[p]

I expected to get something like the result of the query “diagonalize {{a,b},{c,d}}” on WolframAlpha. However, I instead see:

DiagonalMatrix[Eigenvalues[(a b c d)]] 

With no option to evaluate this further. What am I doing wrong and how can I get an evaluated answer using Mathematica?
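For comparison, here is the analogous symbolic computation in Python/SymPy (a different system, shown only to illustrate the kind of evaluated result the WolframAlpha query produces):

import sympy as sp

a, b, c, d = sp.symbols('a b c d')
m = sp.Matrix([[a, b], [c, d]])
P, D = m.diagonalize()  # m == P*D*P**-1 for generic symbolic entries
sp.pprint(D)            # diagonal matrix of the two symbolic eigenvalues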