## Representation of quantum entropy in terms of eigenvalues, i.e., $\text{Tr}(M\log M -M)=\sum_{i=1}^{n}(\lambda_i\log\lambda_i-\lambda_i)$?

Quantum entropy, or von Neumann entropy, is defined as $$f(M)=\text{Tr}(M\log M -M).$$

Here $$M \in \mathbb{S}_+^n$$ is a positive definite matrix and $$\log$$ is the natural matrix logarithm, defined by $$\log(M)=\sum_{i=1}^{n}\log(\lambda_i)v_iv_i^T,$$ where $$(\lambda_i,v_i)$$ are the eigenpairs of $$M$$.

Show that $$f(M)=\text{Tr}(M\log M -M)=\sum_{i=1}^{n}(\lambda_i\log\lambda_i-\lambda_i).$$
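A quick numerical sanity check of the identity (an illustrative sketch in NumPy with an arbitrary positive definite matrix, not a proof):

```python
import numpy as np

# Check Tr(M log M - M) = sum_i (lambda_i log lambda_i - lambda_i)
# for a randomly generated positive definite M.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T + 4 * np.eye(4)            # positive definite by construction

lam, V = np.linalg.eigh(M)             # eigendecomposition M = V diag(lam) V^T
logM = V @ np.diag(np.log(lam)) @ V.T  # matrix log via the spectral formula
lhs = np.trace(M @ logM - M)
rhs = np.sum(lam * np.log(lam) - lam)
print(np.isclose(lhs, rhs))            # True
```

The key step in the proof is the same one the code uses: $M$, $\log M$, and hence $M\log M - M$ are all diagonalized by the same orthonormal eigenbasis, so the trace is the sum of the scalar functions of the eigenvalues.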

## How do the eigenvalues of a matrix X change if we linearly transform X?

I have a matrix X with eigenvalues U.
Now form a new matrix Y = AX, where A is a nonsingular matrix.

How do the eigenvectors and eigenvalues of Y change in relation to the eigenvectors and eigenvalues of X and the matrix A?
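A small numerical illustration of the setup (example matrices chosen by hand): the spectrum of the product AX is in general unrelated to that of X, whereas the similarity transform A X A⁻¹ preserves eigenvalues exactly.

```python
import numpy as np

X = np.array([[2.0, 1.0], [0.0, 3.0]])  # eigenvalues 2 and 3
A = np.array([[1.0, 1.0], [0.0, 2.0]])  # nonsingular

print(np.sort(np.linalg.eigvals(X)))        # eigenvalues of X: 2 and 3
print(np.sort(np.linalg.eigvals(A @ X)))    # eigenvalues of AX: generally different
# similarity transform: same eigenvalues as X, eigenvectors mapped by A
print(np.sort(np.linalg.eigvals(A @ X @ np.linalg.inv(A))))
```

This is why the standard clean answer concerns A X A⁻¹ (conjugation) rather than AX: if X v = λ v, then (A X A⁻¹)(A v) = λ (A v).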

## Eigenvalues of a partial differential equation

Why is $$\lambda_n=\operatorname{sgn}(n)\pi i \sqrt{n^2+\alpha}$$?

I have this:

$$\varphi_{xx}-(\alpha+\lambda^2)\varphi=0$$ with $$\varphi(0)=\varphi(1)=0$$. The general solution is $$\varphi(x)=c\sin(\sqrt{-(\alpha+\lambda^2)}\,x)+d\cos(\sqrt{-(\alpha+\lambda^2)}\,x)$$; the condition $$\varphi(0)=0$$ forces $$d=0$$, so $$\varphi(x)=c\sin(\sqrt{-(\alpha+\lambda^2)}\,x)$$, and $$\varphi(1)=0$$ gives $$0=\sin(\sqrt{-(\alpha+\lambda^2)}) \Leftrightarrow \sqrt{-(\alpha+\lambda^2)}=n\pi.$$

$$\sqrt{-(\alpha+\lambda^2)}=n\pi\Rightarrow \lambda_n=\operatorname{sgn}(n)\pi i \sqrt{n^2+\alpha}$$?
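Solving the displayed condition directly gives $$\lambda^2 = -(\alpha + n^2\pi^2)$$, i.e. $$\lambda_n = \pm i\sqrt{n^2\pi^2+\alpha}$$, which is worth comparing against the quoted formula. A quick numerical check of that direct solution (arbitrary values of $$\alpha$$ and $$n$$):

```python
import numpy as np

# From sqrt(-(alpha + lambda^2)) = n*pi we get lambda^2 = -(alpha + n^2 pi^2),
# so lambda_n = +- i * sqrt(n^2 pi^2 + alpha).  Verify for sample values.
alpha, n = 0.7, 3
lam = 1j * np.sqrt(n**2 * np.pi**2 + alpha)
print(np.isclose(np.sqrt(-(alpha + lam**2)).real, n * np.pi))  # True
```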

## Spectrum equals eigenvalues for unbounded operator

Let $$D$$ be an unbounded, densely defined operator on a separable Hilbert space $$H$$. If $$D$$ is diagonalisable, with all eigenvalues having finite multiplicity and growing towards infinity, does it follow that the spectrum of $$D$$ consists only of eigenvalues?

## How to find orthogonal eigenvectors if some of the eigenvalues are the same?

I have an example: $$A=\begin{pmatrix} 2 & 2 & 4 \\ 2 & 5 & 8 \\ 4 & 8 & 17 \end{pmatrix}.$$ The eigenvalues I found are $$\lambda_1=\lambda_2=1$$ and $$\lambda_3=22$$.
For $$\lambda=1$$: $$\begin{pmatrix} x\\ y \\ z \end{pmatrix}=\begin{pmatrix} -2\\ 1 \\ 0 \end{pmatrix}y+\begin{pmatrix} -4\\ 0 \\ 1 \end{pmatrix}z.$$ For $$\lambda=22$$: $$\begin{pmatrix} x\\ y \\ z \end{pmatrix}=\begin{pmatrix} 1/4\\ 1/2 \\ 1 \end{pmatrix}z.$$ However, the eigenvectors I found are not orthogonal to each other. The goal is to find an orthogonal matrix $$P$$ and a diagonal matrix $$Q$$ so that $$A=PQP^T$$.
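The usual fix is to apply Gram–Schmidt *within* the repeated eigenspace: for a symmetric matrix, eigenvectors of distinct eigenvalues are automatically orthogonal, so only the $$\lambda=1$$ block needs orthogonalizing. A NumPy sketch using the eigenvectors found above:

```python
import numpy as np

A = np.array([[2.0, 2.0, 4.0],
              [2.0, 5.0, 8.0],
              [4.0, 8.0, 17.0]])

u1 = np.array([-2.0, 1.0, 0.0])   # basis of the lambda = 1 eigenspace
u2 = np.array([-4.0, 0.0, 1.0])
v1 = u1 / np.linalg.norm(u1)
w = u2 - (u2 @ v1) * v1           # Gram-Schmidt: remove component along v1
v2 = w / np.linalg.norm(w)
v3 = np.array([1.0, 2.0, 4.0])    # lambda = 22 eigenvector (1/4, 1/2, 1) rescaled
v3 = v3 / np.linalg.norm(v3)

P = np.column_stack([v1, v2, v3]) # orthonormal eigenvector columns
Q = np.diag([1.0, 1.0, 22.0])
print(np.allclose(P @ Q @ P.T, A))  # True
```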

## Eigenvalues of products of exponential families

I have a question about a close cousin of the multiplicative eigenvalue problem.

Let $$U$$ be a special unitary matrix with diagonalization $$D = \operatorname{diag}(e^{2 \pi i a_1}, \ldots, e^{2 \pi i a_n})$$. The $$a_j$$ may be normalized so as to satisfy $$a_1 \le a_2 \le \cdots \le a_n \le a_1 + 1$$ and $$a_1 + \cdots + a_n = 0$$. These extra conditions have the advantage of producing a canonical sequence of logarithms: we may define a function $$\operatorname{LogSpec}$$ by $$\operatorname{LogSpec} U = (a_1, \ldots, a_n).$$
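A sketch of $$\operatorname{LogSpec}$$ in NumPy (my own illustrative implementation of the definition above, not a library routine): take the eigenvalue phases in $$[0,1)$$, sort them, and shift the largest ones down by one period until the sum vanishes; the resulting tuple satisfies both normalization conditions.

```python
import numpy as np

def log_spec(U):
    """Canonical logarithm of the spectrum of U in SU(n): returns
    (a_1, ..., a_n) with a_1 <= ... <= a_n <= a_1 + 1 and sum(a) = 0."""
    a = np.sort((np.angle(np.linalg.eigvals(U)) / (2 * np.pi)) % 1.0)
    k = int(round(a.sum()))   # det(U) = 1 forces the phase sum to be an integer
    if k > 0:
        a[-k:] -= 1.0         # move the k largest phases down one period
        a = np.sort(a)
    return a

# example: diag(e^{it}, e^{-it}) in SU(2) gives (-t/2pi, t/2pi)
t = 0.3
U = np.diag([np.exp(1j * t), np.exp(-1j * t)])
print(log_spec(U))
```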

It also has the disadvantage of not being smooth. Given a point in $$\mathbb R^n$$ satisfying only the equality $$a_1 + \cdots + a_n = k$$ for some $$k \in \mathbb{Z}$$, this point can be moved into the region satisfying the family of inequalities (without modifying its image under $$t \mapsto e^{2 \pi i t}$$) by repeated reflection. Let’s call this assignment $$R$$, as in $$R\colon\thinspace \left\{ a_* \in \mathbb R^n \mid a_1 + \cdots + a_n = 0\right\} \to \left\{a_* \in \mathbb R^n \,\middle|\, a_1 + \cdots + a_n = 0,\; a_j \le a_{j+1},\; a_n \le a_1 + 1 \right\}.$$ As a consequence, curves like $$\gamma(t) = \operatorname{LogSpec} \exp\left(\begin{array}{cc} it & 0 \\ 0 & -it \end{array}\right),$$ which are smooth in $$SU(2)$$ before postcomposition with $$\operatorname{LogSpec}$$, become a kind of sawtooth function. For ease of reference below, I’ll call the image of a convex set under $$R$$ folded-convex.

Question: I would like to know a reference for (or, indeed, a proof of) the following result:

$$\DeclareMathOperator{\LogSpec}{LogSpec}$$ **Theorem**: Let $$\xi_1, \ldots, \xi_m$$ be a sequence of $$n \times n$$ anti-Hermitian matrices, each exponentiating to a closed subgroup of $$U(n)$$. The assignment $$(t_1, \ldots, t_m) \mapsto \LogSpec \left( \prod_{j=1}^m \exp(\xi_j t_j) \right)$$ sends convex sets in $$\mathbb R^m$$ to folded-convex sets in $$\mathbb{R}^n$$.

In the classical version of the multiplicative eigenvalue problem, the set $$L_{m,n} = \left\{(\LogSpec U_j)_{j=1}^m \in \mathbb{R}^{n \cdot m} \,\middle|\, U_j \text{ unitary},\; U_1 \cdots U_m = 1 \right\} \subseteq \mathbb{R}^{n \cdot m}$$ is shown to be convex by a clever application of symplectic reduction. The method of proof in Meinrenken and Woodward’s *A symplectic proof of Verlinde factorization* involves giving an explicit model for the moduli of flat connections on the trivial $$U(n)$$–bundle over a punctured Riemann sphere, then using its symplectic structure and a symplectic convexity theorem (suitably augmented to cope with loop groups) to deduce the convexity of $$L_{m,n}$$.

Their methods are especially well-suited to dealing with formulas like $$1 = \operatorname{Ad}_{c_1}(t_1) \cdots \operatorname{Ad}_{c_m}(t_m),$$ where $$t_j \in \mathfrak t_j \subseteq \mathfrak{su}(n)$$ are anti-Hermitian diagonal and $$c_j \in SU(n)$$ are special unitary. I’m new to this material and to symplectic geometry broadly, and so I’ve been unable to tweak these methods into saying something about this more restricted problem, where there are far fewer adjoint actions in play. Despite that, this seems like the kind of problem that would have attracted classical attention, and so I’m hopeful that there exists a resource that works this out. I’m also happy to hear about adjacent results—maybe I can make do with one of them.

Caveat lector: the theorem seems true in numerical experiment, but without a proof, there may well be edge cases unaccounted for. I’d be very, very happy to hear about those.

## How to collect eigenvectors corresponding to only real eigenvalues?

I have a set of eigenvalues consisting of real and complex values. Among these, there is one positive real eigenvalue and one negative real eigenvalue. How do I collect the eigenvectors corresponding to these two eigenvalues?

Thanks
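A minimal NumPy sketch of the selection step (the example matrix is arbitrary, chosen to have one positive and one negative real eigenvalue plus a complex pair): filter the eigenpairs by a tolerance on the imaginary part.

```python
import numpy as np

A = np.array([[0.0, -1.0, 0.0, 0.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 2.0, 0.0],
              [0.0,  0.0, 0.0, -1.0]])  # eigenvalues: i, -i, 2, -1

vals, vecs = np.linalg.eig(A)
real_mask = np.abs(vals.imag) < 1e-10   # keep (numerically) real eigenvalues
real_vals = vals[real_mask].real
real_vecs = vecs[:, real_mask]          # columns are the wanted eigenvectors
print(np.sort(real_vals))               # the two real eigenvalues: -1 and 2
```

The tolerance matters: eigenvalues that are real in exact arithmetic often carry tiny imaginary round-off, so an exact `vals.imag == 0` test can miss them.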

## Eigenvalues of $AB+BA$?

Let $$A$$ and $$B$$ be square matrices that are both symmetric and positive definite. Is there a non-trivial bound on the smallest eigenvalue of $$AB+BA$$?

It is well known that $$AB+BA$$ need not be positive definite, so negative eigenvalues can occur. However, I would like to understand what an optimal lower bound on the smallest eigenvalue of $$AB+BA$$ is, because intuitively the eigenvalues should not be “too negative.”
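For concreteness, a hand-picked $$2\times 2$$ example where both factors are symmetric positive definite yet $$AB+BA$$ has a negative eigenvalue:

```python
import numpy as np

A = np.diag([1.0, 10.0])                # SPD
B = np.array([[1.0, 1.0],
              [1.0, 2.0]])              # SPD: trace 3, det 1
S = A @ B + B @ A                       # = [[2, 11], [11, 40]], det = -41
print(np.linalg.eigvalsh(S).min())      # negative
```

Since $$\det S = -41 < 0$$, the two eigenvalues of $$S$$ have opposite signs, confirming indefiniteness.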

## Convergence of Eigenvalues and Eigenvectors for Uniformly Form-Bounded Operators

Suppose that $$A$$ is an operator on a dense domain $$D(A)\subset L^2$$ with compact resolvent, and with quadratic form $$q(f,g):=\langle f,Ag\rangle$$.

Let $$(r_n)_{n\in\mathbb N}$$ be a sequence of quadratic forms on $$D(A)$$ that are uniformly form-bounded by $$A$$, in the sense that there exist $$0<\alpha<1$$ and $$\beta>0$$ independent of $$n$$ such that $$|r_n(f,f)|\leq \alpha\cdot q(f,f)+\beta\cdot\|f\|_2^2,\qquad f\in D(A).$$ Since $$\alpha<1$$, the operators $$A_n$$ defined by the form $$q+r_n$$ all have compact resolvent.

Suppose that the $$r_n$$ have a limit $$r_\infty$$, in the sense that $$\lim_{n\to\infty}r_n(f,f)= r_\infty(f,f)$$ for every $$f\in D(A)$$. Define the operator $$A_\infty$$ through the form $$q+r_\infty$$. Clearly $$|r_\infty(f,f)|\leq \alpha\cdot q(f,f)+\beta\cdot\|f\|_2^2,\qquad f\in D(A),$$ and thus $$A_\infty$$ has compact resolvent as well.

Question. Does the uniform form-bound on the $$r_n$$ give rise to a dominated convergence-type result for the spectrum of $$A_n$$, that is, do the eigenvalues of $$A_n$$ converge to those of $$A_\infty$$, and do the eigenfunctions converge in $$L^2$$?
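A finite-dimensional toy version of the kind of convergence being asked about (my own illustration; matrices stand in for the forms, and the perturbations shrink in norm, which is far stronger than mere pointwise form convergence):

```python
import numpy as np

# Toy model: q is the form of A = diag(1, ..., N); the perturbing forms
# r_n = S/n are uniformly A-bounded and converge to r_inf = 0, and the
# eigenvalues of A_n = A + S/n converge to those of A.
N = 6
A = np.diag(np.arange(1.0, N + 1))
rng = np.random.default_rng(1)
S = rng.standard_normal((N, N))
S = (S + S.T) / 2                       # fixed symmetric perturbation direction

eigs = [np.sort(np.linalg.eigvalsh(A + S / n)) for n in (1, 10, 100, 1000)]
print(np.abs(eigs[-1] - np.linalg.eigvalsh(A)).max())  # small
```

In finite dimensions this follows from Weyl's eigenvalue perturbation inequality; the substance of the question is whether the uniform form bound substitutes for norm convergence in the unbounded setting.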

## Rank = # of non-zero eigenvalues of a diagonalizable matrix

Is the rank of a diagonalizable matrix equal to the number of its non-zero eigenvalues?
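A quick numerical illustration of both sides of the question: a diagonalizable matrix where rank equals the count of non-zero eigenvalues, and the standard nilpotent counterexample showing diagonalizability is essential.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])   # distinct eigenvalues 1, 0 => diagonalizable
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent: eigenvalues 0, 0 but rank 1

def nonzero_eigs(M, tol=1e-10):
    """Count eigenvalues with modulus above a numerical tolerance."""
    return int(np.sum(np.abs(np.linalg.eigvals(M)) > tol))

print(np.linalg.matrix_rank(A), nonzero_eigs(A))  # 1 1  (they agree)
print(np.linalg.matrix_rank(N), nonzero_eigs(N))  # 1 0  (they differ)
```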