Proving a linear involution on a finite-dimensional vector space $V$ over $\mathbb{C}$ is a self-adjoint operator.

I’ve been trying to prove the title statement. I want to use the characterization of self-adjoint (Hermitian) operators:

Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value $1$ if and only if $T$ is unitary. We also know that $T$ is unitary if and only if $TT^* = T^*T = I$.

So we first will use Schur’s theorem, which states the following:

Let $T$ be a linear operator on a finite-dimensional inner product space $V$, and suppose that the characteristic polynomial of $T$ splits. Then there exists an orthonormal basis $\beta$ for $V$ such that the matrix $[T]_\beta$ is upper triangular.
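Over $\mathbb{C}$ the characteristic polynomial always splits, so Schur's theorem applies to every complex matrix. A small SciPy illustration (the example matrix is mine, not from the question):

```python
import numpy as np
from scipy.linalg import schur

# Schur's theorem over C: any complex matrix A is unitarily similar to an
# upper triangular matrix, A = Z T Z^*, with Z unitary.
A = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)
T, Z = schur(A, output='complex')

assert np.allclose(Z @ T @ Z.conj().T, A)       # A = Z T Z^*
assert np.allclose(np.tril(T, -1), 0)           # T is upper triangular
assert np.allclose(Z.conj().T @ Z, np.eye(2))   # the columns of Z are orthonormal
```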

So since we are working over $\mathbb{C}$, we know that the characteristic polynomial splits, and we are guaranteed an orthonormal basis $\beta = \{v_1, \dots, v_n\}$; but we do not yet know that these are eigenvectors of $T$. We shall show this inductively. For our base case, we have that

\begin{gather} T(v_1)=\alpha_1v_1. \end{gather}

Indeed, writing $T(v_1) = \sum_i \alpha_i v_i$, we know that $\alpha_i = 0$ for $i > 1$, since the first column of an upper triangular matrix has only its top entry nonzero. So we proceed inductively and suppose that the first $n-1$ vectors of $\beta$ are eigenvectors, say $T(v_i) = \lambda_i v_i$ for $i < n$. So we have that

\begin{gather} T(v_n) = \sum^n_{i = 1}\alpha_iv_i. \end{gather}

Now applying $T$ to both sides and using that $T$ is linear and that $T^2 = I$ (since $T$ is an involution), we have that

\begin{gather} v_n = \sum^n_{i = 1}\alpha_iT(v_i) = \sum^{n-1}_{i = 1}\lambda_i\alpha_iv_i + \alpha_nT(v_n). \end{gather}

So after substituting $T(v_n) = \sum_{i=1}^n \alpha_i v_i$ again and rearranging, we see that \begin{gather} v_n = \sum^{n-1}_{i=1}\alpha_i(\lambda_i + \alpha_n)v_i + \alpha_n^2v_n. \end{gather}

Since the $v_i$ are linearly independent, comparing coefficients gives $\alpha_n^2 = 1$ and $\alpha_i(\lambda_i + \alpha_n) = 0$ for each $i < n$. I am fairly certain this forces the preceding $n-1$ $\alpha$'s to be $0$, but I am not sure how to handle the case when $\lambda_i = -\alpha_n$ for some $i$. If that case can be ruled out, then all we need to do is show that $|\lambda_i| = 1$ for all $i$. This is relatively simple compared with the first part. We see that

\begin{gather} T^2 = I \implies \lambda^2 = 1 \implies \lambda = \pm 1, \end{gather} since applying $T^2$ to an eigenvector with eigenvalue $\lambda$ gives $\lambda^2 v = v$. So I am fairly certain this proves the claim. However, was it necessary to assume the field to be $\mathbb{C}$? From my understanding, the minimal polynomial of $T$ divides $x^2 - 1$, which already splits over $\mathbb{R}$. So can we prove something stronger here?
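As a numerical sanity check (not a proof) of this last step, one can build a random complex involution and confirm that its eigenvalues satisfy $\lambda^2 = 1$; the construction below is my own:

```python
import numpy as np

# Build a random complex involution T = P D P^{-1} with D = diag(+-1),
# so that T^2 = I by construction, and check its eigenvalues are +-1.
rng = np.random.default_rng(0)
n = 6
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = np.diag(rng.choice([1.0, -1.0], size=n))
T = P @ D @ np.linalg.inv(P)

assert np.allclose(T @ T, np.eye(n))      # T is an involution
eigs = np.linalg.eigvals(T)
assert np.allclose(eigs**2, 1.0)          # every eigenvalue satisfies lambda^2 = 1
```

Note that such a $T$ need not be self-adjoint unless the eigenvectors can be chosen orthonormal, which is exactly the point at issue in the proof above.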

How to calculate this $n$-dimensional integral?

The integral was given as an image in the original post; it generalizes the iterated integrals computed below.

It is easy to evaluate the first few cases:

Integrate[x1 (1/(-1+n))^n (1/v1)^(n/(-1+n)) x1^((2-n)/(-1+n)), {x1, 0, v1}]

Integrate[x1 Integrate[(1/(-1+n))^n (1/v1)^(n/(-1+n)) x1^((2-n)/(-1+n)) x2^((2-n)/(-1+n)), {x2, 0, x1}], {x1, 0, v1}]

Integrate[x1 Integrate[Integrate[(1/(-1+n))^n (1/v1)^(n/(-1+n)) x1^((2-n)/(-1+n)) x2^((2-n)/(-1+n)) x3^((2-n)/(-1+n)), {x3, 0, x1}], {x2, 0, x1}], {x1, 0, v1}]

but how do I calculate this integral when the number of integrations is arbitrary?
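One observation that may help: each inner variable $x_2, \dots, x_k$ runs over $(0, x_1)$ independently, so the inner integrals factor, and the $k$-fold integral collapses to a single integral in $x_1$. A SymPy sketch comparing the direct nested computation with the closed form this suggests (my derivation, not from the original post, so worth double-checking):

```python
import sympy as sp

v1 = sp.Symbol('v1', positive=True)

def nested(k, n):
    # Direct translation of the Integrate[...] calls above for a concrete
    # integer n: x2, ..., xk each run over (0, x1), then x1 over (0, v1).
    xs = sp.symbols(f'x1:{k + 1}', positive=True)  # x1, ..., xk
    x1 = xs[0]
    p = sp.Rational(2 - n, n - 1)
    expr = sp.Rational(1, n - 1) ** n * v1 ** (-sp.Rational(n, n - 1))
    expr *= sp.Mul(*[x ** p for x in xs])
    for x in xs[1:]:
        expr = sp.integrate(expr, (x, 0, x1))
    return sp.simplify(sp.integrate(x1 * expr, (x1, 0, v1)))

def closed_form(k, n):
    # Conjectured answer for arbitrary depth k: each inner integral factors as
    # Integrate[x^p, {x, 0, x1}] = (n-1) x1^(1/(n-1)), giving a single power of x1.
    k, n = sp.Integer(k), sp.Integer(n)
    return (n - 1) ** (k - n) * v1 ** ((k - 1) / (n - 1)) / (k + n - 1)

for k in (1, 2, 3):
    assert sp.simplify(nested(k, 3) - closed_form(k, 3)) == 0
```

If the conjectured formula $\frac{(n-1)^{k-n}\, v_1^{(k-1)/(n-1)}}{k+n-1}$ holds up for your actual integrand, it answers the arbitrary-depth case without performing $k$ symbolic integrations.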

Finite-dimensional modules are highest weight modules

Let $\mathfrak{g}$ be a basic classical simple Lie superalgebra. I want to prove that every finite-dimensional module over $\mathfrak{g}$ has a highest weight vector.

My feeling is that, since the $e_i$'s are raising operators, they must annihilate some nonzero vector, and this will give us a highest weight vector; maybe we also need to use Lie's theorem.

But I am unable to connect these ideas into a complete answer. If someone can explain clearly what is happening here, it would help me a lot. Thank you.
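For what it's worth, here is how the raising-operator idea might be made precise, assuming the Cartan subalgebra $\mathfrak{h}$ acts semisimply on the module $M$ so that $M$ has a weight space decomposition (the notation $\Lambda$, $M_\mu$, $\alpha_i$ is mine):

```latex
Since $M$ is finite dimensional and $M = \bigoplus_{\mu \in \Lambda} M_\mu$,
the set of weights $\Lambda$ is finite. Choose $\lambda \in \Lambda$ maximal
for the usual partial order ($\mu \le \nu$ iff $\nu - \mu$ is a sum of
positive roots), and pick $0 \ne v \in M_\lambda$. Each raising operator
$e_i$ has weight $\alpha_i > 0$, so
\begin{gather}
  e_i v \in M_{\lambda + \alpha_i} = 0,
\end{gather}
since $\lambda + \alpha_i \notin \Lambda$ by maximality of $\lambda$.
Hence $e_i v = 0$ for all $i$, i.e.\ $v$ is a highest weight vector.
```

This sketch does not need Lie's theorem; the finiteness of the weight set does all the work, provided the weight space decomposition is available.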