I have a matrix in the form of a 2-by-2 block matrix
A = [O,W;J,D],
where O is an n-by-n zero matrix; W is an n-by-n diagonal matrix, W = w0*I, where w0 is a constant scalar and I is the n-by-n identity matrix; J is an n-by-n full, almost-symmetric matrix; and D is an n-by-n diagonal matrix with distinct diagonal elements.
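For concreteness, here is how this block structure can be assembled in Python/NumPy (the values of n, w0, J, and D below are made up purely for illustration; my real J and D come from the application):

```python
import numpy as np

n = 4                                 # illustrative size; the real problem is much larger
w0 = 2.5                              # some constant scalar (made up here)
rng = np.random.default_rng(0)

J = rng.standard_normal((n, n))
J = J + 0.9 * J.T                     # "almost symmetric": symmetric part dominates
D = np.diag(rng.standard_normal(n))   # diagonal matrix with distinct entries

# A = [O, W; J, D] with O = zeros(n, n) and W = w0 * I
A = np.block([[np.zeros((n, n)), w0 * np.eye(n)],
              [J,                D              ]])

# A is only "relatively" sparse: the O block is empty and W, D are diagonal,
# so at most n^2 + 2n of the (2n)^2 entries are nonzero.
print(np.count_nonzero(A), "nonzeros out of", A.size)
```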
For my application, I have to compute all the eigenvalues of A.
(Note: I don’t care about the eigenvectors.)
And by some manipulation (omitted here), I finally found that the eigenvalues of A can be calculated from intermediate results involving only its sub-matrix J.
My claim is that a dense eigenvalue solve typically costs O(m^3) in the matrix dimension m. So halving the dimension — working directly on the n-by-n J instead of the 2n-by-2n A — reduces the cost roughly by a factor of 2^3 = 8 (neglecting some computational overhead).
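To illustrate the cubic-scaling argument (this is just the raw eig cost on dense matrices of size m and 2m, not my actual reduction; the size m and the random matrices are made up), a rough timing sketch in Python/NumPy as a stand-in for MATLAB's eig():

```python
import time
import numpy as np

rng = np.random.default_rng(2)
m = 400                                    # arbitrary test size

small = rng.standard_normal((m, m))        # plays the role of the n-by-n J
big = rng.standard_normal((2 * m, 2 * m))  # plays the role of the 2n-by-2n A

t0 = time.perf_counter()
ev_small = np.linalg.eigvals(small)
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
ev_big = np.linalg.eigvals(big)
t_big = time.perf_counter() - t0

# For O(m^3) scaling the ratio should hover around 8; BLAS blocking and
# memory effects make it noisy in practice.
print(f"t(2m) / t(m) = {t_big / t_small:.1f}")
```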
The computational time in MATLAB (the eig() command) supports this idea: with my approach, the overall computational time is reduced.
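For reference, the kind of relation that makes such a reduction possible can be checked numerically. Eliminating the top block row of A[x; y] = λ[x; y] (which gives w0*y = λ*x) yields (λ²I − λD − w0*J)x = 0, so every eigenvalue of A makes this n-by-n matrix singular. A sketch with made-up J, D, and w0 (a consistency check on the block structure, not the exact manipulation I used):

```python
import numpy as np

n = 6
w0 = 1.7
rng = np.random.default_rng(1)

J = rng.standard_normal((n, n))
J = J + 0.9 * J.T                     # almost symmetric
D = np.diag(rng.standard_normal(n))

A = np.block([[np.zeros((n, n)), w0 * np.eye(n)],
              [J,                D              ]])

lams = np.linalg.eigvals(A)           # all 2n eigenvalues of the full matrix

# From A [x; y] = lam [x; y]:  w0*y = lam*x  and  J x + D y = lam y.
# Substituting y = (lam/w0) x gives the n-by-n quadratic pencil
#   (lam^2 I - lam D - w0 J) x = 0,
# so each eigenvalue of A must make M(lam) below numerically singular.
for lam in lams:
    M = lam**2 * np.eye(n) - lam * D - w0 * J
    smin = np.linalg.svd(M, compute_uv=False)[-1]   # smallest singular value
    assert smin < 1e-8 * np.linalg.norm(M)
print("all", len(lams), "eigenvalues of A satisfy the n-by-n pencil")
```

So all the spectral information of the 2n-by-2n A is indeed encoded in n-by-n data (J, D, and w0), which is what my approach exploits.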
However, one of my colleagues challenged me, saying: “For large systems, a dense matrix, even if smaller than the actual state matrix, can lead to numerical problems. A larger but sparse matrix is preferable.”
Well, I have doubts about his criticism in my special case.
Yes, it is true that matrix A is “relatively” sparse, but its sparsity pattern is not the usual one we see in a symmetric, sparse matrix. Note that both A and J are non-symmetric here. (Most literature/books discuss sparse algorithms mainly for symmetric matrices.)
Also, I searched Google for literature and found that, for symmetric, sparse matrices, there exist specially designed methods (sometimes for special cases, e.g. tridiagonal matrices) that can clearly beat dense solvers for symmetric matrices. However, I have not found any literature that explicitly supports my colleague’s comment in the general case, i.e., for a non-symmetric, dense matrix.
The sparsity pattern of the A matrix is shown as follows: