Algorithm for checking positive definite matrix over a subspace

There is an algorithm that, for any input matrix $A \in \mathbb{R}^{n \times n}$, checks whether $x^\top A x > 0$ for all nonzero $x \in \mathbb{R}^n$, e.g. the Cholesky algorithm. Is there an algorithm that, for a matrix $A \in \mathbb{R}^{n \times n}$ and a subspace $V \subseteq \mathbb{R}^n$, checks whether $x^\top A x > 0$ for all nonzero $x \in V$?
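For what it's worth, the subspace version reduces to the unconstrained one: pick a matrix $B$ whose columns form a basis of $V$; then $x = By$ ranges over $V$ as $y$ ranges over $\mathbb{R}^k$, and $x^\top A x = y^\top (B^\top A B) y$, so it suffices to run Cholesky on the symmetric part of the $k \times k$ matrix $B^\top A B$. A sketch (the function name and data are my own):

```python
import numpy as np

def pd_on_subspace(A, B):
    """Check x^T A x > 0 for all nonzero x in V = col(B).

    B is an n x k matrix whose columns form a basis of V.
    Writing x = B y, the condition becomes y^T (B^T A B) y > 0 for all
    nonzero y, i.e. positive definiteness of the k x k matrix B^T A B.
    """
    M = B.T @ A @ B
    S = (M + M.T) / 2  # the quadratic form depends only on the symmetric part
    try:
        np.linalg.cholesky(S)  # succeeds iff S is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.diag([1.0, 1.0, -1.0])       # indefinite on all of R^3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # V = span(e1, e2)
print(pd_on_subspace(A, B))         # True: A is positive definite on V
print(pd_on_subspace(A, np.eye(3))) # False: not on all of R^3
```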

Spanning a finite-dimensional vector space using powers of a diagonalizable matrix

Let $D$ be an $n \times n$ real diagonal matrix. Show that there exists $x \in \mathbb{R}^n$ such that

\begin{equation} \operatorname{span}\{D^k x : 0 \le k \le n-1\} = \mathbb{R}^n \end{equation}

if and only if the eigenvalues of $D$ are distinct.

I can show that if $D$ has distinct eigenvalues then $\mathbb{R}^n$ is a direct sum of $n$ eigenspaces, and I can find the required $x$ such that the condition holds, but I am unable to prove it the other way round.
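As a numerical illustration of the statement (my own example data), one can check the rank of the Krylov matrix $[x \mid Dx \mid \cdots \mid D^{n-1}x]$:

```python
import numpy as np

def krylov_rank(D, x):
    """Rank of span{x, Dx, ..., D^(n-1) x} (a Krylov subspace)."""
    n = D.shape[0]
    K = np.column_stack([np.linalg.matrix_power(D, k) @ x for k in range(n)])
    return np.linalg.matrix_rank(K)

x = np.ones(3)
D_distinct = np.diag([1.0, 2.0, 3.0])
D_repeated = np.diag([1.0, 2.0, 2.0])

print(krylov_rank(D_distinct, x))  # 3: a Vandermonde matrix, full rank
print(krylov_rank(D_repeated, x))  # 2: the repeated eigenvalue caps the rank
```

With distinct eigenvalues and $x$ the all-ones vector, the Krylov matrix is a Vandermonde matrix in the eigenvalues, hence invertible; a repeated eigenvalue forces two identical rows for every choice of $x$ up to scaling, which hints at the converse direction.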

Explicit computation of connection & curvature matrix

I have recently learned the generalized Gauss-Bonnet theorem, which states that:

\begin{equation} \int_M \text{Pf}(\Omega) = (2\pi)^n\chi(M), \end{equation} where $n$ is half the dimension of the even-dimensional, compact Riemannian manifold $M$.

Here, $ \Omega$ is the curvature matrix of 2-forms determined by the Riemannian metric $ g$ and some metric compatible connection $ \nabla$ , and $ \text{Pf}(\Omega)$ is the Pfaffian.

By Chern-Weil, we know that our choice of $ \nabla$ does not make any difference.

Question: The above integral, if the dimension of the manifold in question is 2, had better reduce to the Gauss-Bonnet theorem that we know and love: \begin{equation} \int_M K\,dA = 2\pi\chi(M), \end{equation} where $K$ is the Gaussian curvature. But I am not sure how to carry out the computation necessary to get there…

More specifically, I know that if the dimension is 2, then $\text{Pf}(\Omega)$ will be a 2-form; more precisely, some multiple of $\Omega_1^2$, the upper-right entry of the $2\times 2$ curvature matrix. If all works out, this 2-form had better be the form $K\,dA$.

By Chern-Weil, we may as well assume that the connection in question is Levi-Civita. Then the Theorema Egregium allows us to write $ K$ in terms of $ g$ and the associated Christoffel symbols.

My problem is that I don’t know how to carry out this explicit computation… Could you help me with this?
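For reference, here is how I expect the two-dimensional case to unwind, modulo sign conventions (a sketch; I am assuming a local orthonormal frame $e_1, e_2$ with dual coframe $e^1, e^2$ and area form $dA = e^1 \wedge e^2$): \begin{align*} \Omega &= \begin{pmatrix} 0 & \Omega_1^2 \\ -\Omega_1^2 & 0 \end{pmatrix} && \text{(skew-symmetry of a metric connection)} \\ \text{Pf}(\Omega) &= \Omega_1^2 && \text{(Pfaffian of a $2\times 2$ skew matrix)} \\ \Omega_1^2 &= K\, e^1 \wedge e^2 = K\, dA && \text{(Gauss equation / Theorema Egregium)} \end{align*} so that $\int_M \text{Pf}(\Omega) = \int_M K\, dA = 2\pi\chi(M)$, which matches $(2\pi)^n\chi(M)$ with $n = 1$.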

On a cartesian description of the null space of a matrix

Given a matrix $A$, I was taught that to find a cartesian description of the null space of $A$, we find a basis of the row space (the transposes of the rows with leading ones in the row-reduced form of $A$); since the row space is orthogonal to the null space, transposing these basis vectors back into rows yields the defining equations, and hence a cartesian description is obtained.

However, wouldn’t the rows of the original matrix $A$ also provide a cartesian description of the null space (albeit with redundant equations)? The argument works not just for the row-reduced form but for the original matrix as well. Is my understanding correct here?
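A quick numerical sanity check of this (a sketch with a made-up $A$ whose third row is the sum of the first two, so one of the three equations is redundant):

```python
import numpy as np

# A made-up matrix with a redundant row: row 3 = row 1 + row 2, so rank 2.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

# Null space basis via the SVD: the right singular vectors belonging to
# (numerically) zero singular values span ker(A).
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.sum(s > 1e-10):].T    # columns span the null space

# Every row of the ORIGINAL A annihilates the null space, so the rows of A
# already give a (redundant) cartesian description of ker(A).
print(np.allclose(A @ null_basis, 0))    # True
```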

Determinant is linear as a function of each of the rows of the matrix

Today I heard in a lecture (some video on YouTube) that the determinant is linear as a function of each of the rows of the matrix.

I am not able to understand the above statement.

I know that the determinant is a special function which assigns to each matrix in $K^{n \times n}$ a scalar. This is the intuitive idea.

And this map is not linear either. One way to see this is to consider the fact that the determinant of $cA$ is $c^n\det(A)$, not $c\det(A)$.

Can someone please explain what the person meant by saying that the determinant is linear as a function of each of the rows of the matrix?
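The statement means: fix all rows but one, and the map sending the remaining row to the determinant is linear. A small numerical illustration (names and data are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)   # candidate first rows
r2, r3 = rng.standard_normal(3), rng.standard_normal(3) # rows held fixed
a, b = 2.0, -1.5

def det_with_first_row(row):
    """Determinant as a function of the first row only (r2, r3 fixed)."""
    return np.linalg.det(np.vstack([row, r2, r3]))

# Linearity in the FIRST row, with the other rows held fixed:
lhs = det_with_first_row(a * u + b * v)
rhs = a * det_with_first_row(u) + b * det_with_first_row(v)
print(np.isclose(lhs, rhs))   # True

# By contrast, det(cA) = c^n det(A) scales EVERY row at once, which is why
# A -> det(A) is not linear even though it is linear in each row separately.
```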

Transform $A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1\\ 1 & 1 & -1 \end{bmatrix}$ into an upper triangular matrix by left multiplication

\begin{equation} A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix} \end{equation}

This matrix can be transformed into an upper triangular matrix through left multiplication by a lower triangular matrix $L$ or by an orthogonal matrix $Q$. Find the matrix $L$ and the matrix $Q$. Solve $Ax = b$ with $b = \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$.

What I know how to do is factor $A = LU$ and $A = QR$, which are the known LU and QR decompositions. However, this exercise asks me to left multiply by $L$ and left multiply by $Q$ to obtain an upper triangular matrix. What am I missing?
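If I'm reading the exercise right, the requested matrices are just $L^{-1}$ from $A = LU$ and $Q^\top$ from $A = QR$: the inverse of a unit lower triangular matrix is again lower triangular, and the transpose of an orthogonal matrix is orthogonal, so both left multiplications produce an upper triangular result. A quick numerical check (a sketch; I'm assuming scipy's `lu` needs no row pivoting for this particular $A$, which appears to hold since the first column is constant):

```python
import numpy as np
from scipy.linalg import lu, qr

A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0,  1.0],
              [1.0,  1.0, -1.0]])

# LU: A = P L U. Assuming P = I here, L_inv = L^{-1} is lower triangular
# and satisfies L_inv @ A = U, an upper triangular matrix.
P, L, U = lu(A)
L_inv = np.linalg.inv(L)
print(np.allclose(L_inv @ A, U), np.allclose(np.tril(L_inv), L_inv))

# QR: A = Q R, so the orthogonal matrix Q^T satisfies Q^T A = R.
Q, R = qr(A)
print(np.allclose(Q.T @ A, R))
```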

How to make sure matrix completion can generate a matrix with values in expected range?

I am doing a matrix completion project. Assume that I have an incomplete matrix like

             func1    func2    func3
    prot1        0        0        1
    prot2        1        0        1
    prot3        0        0        0

I want to use Standard Matrix Completion to recover the matrix, like

             func1    func2    func3
    prot1      0.1      0.9        1
    prot2        1      0.2        1
    prot3      0.3      0.8      0.7

Standard Matrix Completion refers to

\begin{equation} \min_{W,H}\ \frac{1}{2}\|W\|_F^2 + \frac{1}{2}\|H\|_F^2 + \frac{\lambda}{2}\|\Omega \circ (WH^\top - Y)\|_F^2, \end{equation}

and $X = WH^\top$, where $\Omega$ is the mask of observed entries and $\circ$ denotes the entrywise product.

However, I find that the entries of the recovered matrix $X$ do not lie between 0 and 1; for instance (just an example, not actual output):

             func1    func2    func3
    prot1     -0.1      1.1        1
    prot2        1      0.2        1
    prot3      0.3      2.1      0.7

How can I restrict the unobserved entries of $X$ to the expected range (here $[0,1]$), and in particular, how can I implement this in TensorFlow?
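One common workaround (a sketch under my own assumptions, not necessarily the asker's setup: rank 2, all entries treated as observed, hand-derived gradients) is to optimize unconstrained factors $W, H$ but define the completed matrix as the sigmoid of $WH^\top$, which lies in $(0,1)$ by construction. A minimal numpy version:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

Y = np.array([[0., 0., 1.],
              [1., 0., 1.],
              [0., 0., 0.]])
M = np.ones_like(Y)          # observation mask Omega (all observed here)
rank, lam, lr = 2, 1.0, 0.1
rng = np.random.default_rng(0)
W = rng.standard_normal((3, rank))
H = rng.standard_normal((3, rank))

for _ in range(500):
    X = sigmoid(W @ H.T)                  # entries lie in (0, 1) by construction
    G = lam * M * (X - Y) * X * (1 - X)   # gradient of the data term w.r.t. W H^T
    W_new = W - lr * (W + G @ H)          # regularizer gradient is W itself
    H = H - lr * (H + G.T @ W)
    W = W_new

X = sigmoid(W @ H.T)
print(0 < X.min() and X.max() < 1)   # True: the range constraint always holds
```

The same parameterization is one line in TensorFlow, e.g. `X = tf.sigmoid(W @ tf.transpose(H))` inside a `tf.GradientTape`, with autodiff replacing the hand-derived gradient `G`.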

Computational Complexity of a Matrix Multiplication

I am computing a matrix product involving an inverse, $AB^{-1}C$, with $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times n}$, $C \in \mathbb{R}^{n \times o}$. The inverse operation takes $O(n^3)$, multiplying $B^{-1}C$ takes $O(n^2o)$, and multiplying $A(B^{-1}C)$ takes $O(mno)$. So the overall time complexity is $O(n^3 + n^2o + mno)$. Kindly correct me if I am wrong.
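That analysis looks right to me; note that at the same asymptotic cost one can avoid forming $B^{-1}$ explicitly by solving $BY = C$ and then computing $AY$, which is numerically preferable. A quick check that the two routes agree (a sketch with made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, o = 4, 5, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, invertible
C = rng.standard_normal((n, o))

# Explicit inverse: O(n^3) for inv, then O(n^2 o) and O(m n o) products.
X_inv = A @ np.linalg.inv(B) @ C

# Same asymptotic cost, better conditioning: solve B Y = C, then form A Y.
X_solve = A @ np.linalg.solve(B, C)

print(np.allclose(X_inv, X_solve))   # True
```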