## Find the maximum parameter for a range of target eigenvalues as a function of matrix dimension

I have a symbolic tridiagonal matrix of this form

    a = 0; c = 0.25;
    sa1[b_] := SparseArray[{Band[{1, 1}] -> a,
        Band[{2, 1}, {2 l, 2 l}] -> {I b/4, I c},
        Band[{1, 2}, {2 l, 2 l}] -> {-I b/4, -I c}}, {2 l, 2 l}];

where a and c are fixed parameters, b > 0 is a varying parameter, and l is half the matrix dimension (the matrix is 2l × 2l). By diagonalizing this matrix as a function of b, I can obtain the largest value of b, say bmax, such that the smallest eigenvalue in absolute value stays below 1/10^5. Note that the eigenvalues always appear in ± pairs by symmetry. To find bmax, I have employed the Arnoldi method, which was also used in this thread:

    closestEVtotarget[b_?NumericQ, target_?NumericQ] :=
      Abs[First@Eigenvalues[sa1[N[b]], -1,
        Method -> {"Arnoldi", "Criteria" -> "Magnitude", "Shift" -> target}]];

    With[{target = 1/10^5},
      Plot[closestEVtotarget[b, target], {b, 0, 0.5},
        GridLines -> {None, {target}}]];

    With[{target = 1/10^5},
      plot = Plot[{target, closestEVtotarget[b, target]}, {b, 0, 2}];
      bmaxval = GraphicsMeshFindIntersections[plot]];

For instance, for l = 6, bmaxval evaluates to

    {{0.185999, 0.00001}}

where bmax = 0.185999. Now the question is: how can I collect all the bmax values for different even values of l, say

llist = {4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26}; 

and plot l vs. bmax?
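For what it's worth, here is a minimal numeric sketch of the same workflow in Python/NumPy (switching language only for illustration): a dense `eigvalsh` plus `brentq` bisection stands in for the shifted-Arnoldi call. The matrix is Hermitian, so its spectrum is real and symmetric about zero, and for each l we locate where the smallest-magnitude eigenvalue crosses the target. The bracket `(1e-6, 2)` assumes a single crossing in that interval, as in the plot described above.

```python
import numpy as np
from scipy.optimize import brentq

a, c, target = 0.0, 0.25, 1e-5

def sa1(b, l):
    # 2l x 2l tridiagonal matrix: sub-diagonal alternates (i b/4, i c),
    # super-diagonal alternates (-i b/4, -i c), zero diagonal (a = 0)
    n = 2 * l
    sub = np.array([1j * b / 4 if k % 2 == 0 else 1j * c
                    for k in range(n - 1)])
    return np.diag(sub, -1) - np.diag(sub, 1)

def closest_ev(b, l):
    # smallest |eigenvalue|; the matrix is Hermitian, so eigvalsh applies
    return np.min(np.abs(np.linalg.eigvalsh(sa1(b, l))))

llist = range(4, 28, 2)
# assumes closest_ev(., l) crosses the target exactly once in (0, 2)
bmax = [brentq(lambda b: closest_ev(b, l) - target, 1e-6, 2.0)
        for l in llist]
# plot l vs. bmax with e.g. matplotlib: plt.plot(list(llist), bmax, "o-")
```

For l = 6 this reproduces a crossing near b ≈ 0.186, consistent with the bmaxval above.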

## Complexity of Inserting an element in a sorted matrix [duplicate]

*Duplicate of: Find an element in sorted 2D-array (matrix)*

Given an m × n matrix with sorted rows and sorted columns, what is the time complexity of inserting a new element?
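For reference, if the matrix is stored as a Young tableau (rows and columns sorted ascending, with a free slot kept at the bottom-right), insertion can be done in O(m + n) by placing the element in the free cell and bubbling it up/left, much like a heap insert. A minimal sketch; the tableau layout and the `INF` sentinel are assumptions, not part of the question:

```python
INF = float("inf")

def tableau_insert(M, x):
    """Insert x into an m x n Young tableau M (rows and columns sorted
    ascending) whose bottom-right cell is a free slot holding INF.
    Runs in O(m + n): every swap moves one step up or one step left."""
    i, j = len(M) - 1, len(M[0]) - 1
    M[i][j] = x
    while True:
        bi, bj = i, j                       # position of the largest of
        if i > 0 and M[i - 1][j] > M[bi][bj]:   # {current, above, left}
            bi, bj = i - 1, j
        if j > 0 and M[i][j - 1] > M[bi][bj]:
            bi, bj = i, j - 1
        if (bi, bj) == (i, j):
            break                           # neighbors <= x: sorted again
        M[i][j], M[bi][bj] = M[bi][bj], M[i][j]
        i, j = bi, bj

M = [[1, 3, 5], [2, 4, INF]]
tableau_insert(M, 0)
# M is now [[0, 1, 3], [2, 4, 5]]
```

The linked duplicate covers the analogous O(m + n) staircase search for this layout.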

## Number of compatible trees with an ancestry matrix

Suppose you are given an ancestry matrix $$M$$, meaning that $$M[i,j] = 1$$ iff node $$i$$ is an ancestor of node $$j$$. If $$M$$ (treated as an adjacency matrix) contains no cycles, the corresponding graph is a tree (or a forest). My question is: what is the number of trees whose ancestry matrix is $$M$$?

Is this question any simpler than counting the number of directed graphs compatible with a general ancestry matrix?
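As a starting point, assuming $$M$$ records *all* ancestor pairs (i.e. it is transitively closed), each node's parent is forced: it must be the ancestor of $$j$$ of which every other ancestor of $$j$$ is itself an ancestor. A small Python sketch of that observation (the function name is mine):

```python
def forced_parents(M):
    """For each node j, list the ancestors i of j such that every other
    ancestor of j is also an ancestor of i. In a tree, the parent of j
    is exactly that node, so each list has length <= 1 when M is a
    consistent, transitively closed tree ancestry matrix."""
    n = len(M)
    out = {}
    for j in range(n):
        anc = [i for i in range(n) if M[i][j]]
        out[j] = [i for i in anc
                  if all(M[k][i] for k in anc if k != i)]
    return out

# Path 0 -> 1 -> 2, with the transitively closed ancestry matrix:
M = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
# forced_parents(M) == {0: [], 1: [0], 2: [1]}
```

This suggests the count is 0 or 1 when $$M$$ fully specifies ancestry; the counting question seems to become interesting only when $$M$$ constrains just some of the pairs.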

## Build Vorticity Matrix for Markov chain

I have a Markov chain with transition probability matrix $$Q(u,v)$$ and stationary distribution $$\pi(u)$$. The matrix $$Q$$ has dimension $$n \times n$$ and the vector $$\pi$$ is $$1 \times n$$.

I need to build a vorticity matrix $$\Gamma (u,v)$$ of dimension $$n \times n$$ with the following properties:

1. $$\Gamma$$ is skew-symmetric, i.e. $$\Gamma (u,v) = -\Gamma (v,u)$$

2. Every row of $$\Gamma$$ sums to zero, i.e. $$\sum_v \Gamma (u,v) = 0$$

3. $$\Gamma(u,v) \geq -\pi (v)Q(v,u)$$

How can I build a vorticity matrix $$\Gamma (u,v)$$ that satisfies the above three properties?

NOTE: the transition probability matrix $$Q$$ and the stationary distribution $$\pi$$ satisfy the following:

Each row of $$Q$$ sums to one: $$\sum_v Q(u,v)=1$$

$$\pi$$ is a probability distribution, hence $$\sum_v \pi(v) = 1$$

Stationarity of $$\pi$$: $$\sum_u \pi(u) Q(u,v) = \pi(v)$$
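One standard candidate meeting all three constraints is the stationary probability flux, $$\Gamma(u,v) = \pi(u)Q(u,v) - \pi(v)Q(v,u)$$: it is skew-symmetric by construction, its row sums vanish by stationarity ($$\sum_v \pi(v)Q(v,u) = \pi(u)$$), and property 3 holds because $$\Gamma(u,v) + \pi(v)Q(v,u) = \pi(u)Q(u,v) \ge 0$$. A quick numerical check in Python (the example chain is arbitrary):

```python
import numpy as np

# arbitrary irreducible transition matrix; each row sums to 1
Q = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# stationary distribution: left eigenvector of Q for eigenvalue 1
w, V = np.linalg.eig(Q.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

F = pi[:, None] * Q        # flux F(u,v) = pi(u) Q(u,v)
Gamma = F - F.T            # Gamma(u,v) = pi(u)Q(u,v) - pi(v)Q(v,u)
```

Whether this particular $$\Gamma$$ is the one you want depends on the application (it is the zero matrix exactly when the chain is reversible), but it always satisfies properties 1 to 3.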

## The relationship between matrix inversion, the HHL algorithm, and the unlikely scenario that $BQP = PSPACE$

I am studying the quantum computing algorithm presented in the paper *Quantum algorithm for linear systems of equations*.

Without going through all the details, the HHL algorithm applies the inverse of a matrix to a normalized vector prepared as a quantum state in time $$\tilde{O}(\log(N)s^2\kappa^2/\epsilon)$$; i.e., to solve $$A |x \rangle = |b \rangle$$, it computes an estimate $$|x \rangle = A^{-1} | b \rangle$$ where

- $$N$$ is the dimension of the matrix,
- $$s$$ is the sparsity of the matrix,
- $$\kappa$$ is the condition number of the matrix,
- $$\epsilon$$ is the desired error bound.

In an argument for the optimality of the algorithm the authors construct a reduction from a general quantum circuit to a matrix inversion problem with a proof (page 4).

Here is where I get confused; the authors write:

> The reduction from a general quantum circuit to a matrix inversion problem also implies that our algorithm cannot be substantially improved (under standard assumptions). If the run-time could be made polylogarithmic in $$\kappa$$, then any problem solvable on $$n$$ qubits could be solved in poly(n) time (i.e. $$BQP=PSPACE$$), a highly unlikely possibility.

Why does this imply that $$BQP = PSPACE$$? Any insights much appreciated.

## Find an unknown matrix from an equation

I need to find an unknown matrix, H, from this equation:

    K2 = Transpose[H].K1.H

K1 and K2 are known 2×2 matrices, and H is the unknown 2×2 matrix. I know its first element: H(1,1) = 1.

Can I simply use some built-in Mathematica function? I tried Solve:

    Solve[K2 == Transpose[H].K1.H]

It doesn’t work.

Specifically, I have these matrices:

    K1 = {{10385.46, -306.12}, {-306.12, 225.8}};
    K2 = {{10049.794, 378.59}, {378.59, 1807.16}};
    H = {{1, h12}, {h21, h22}};

for which I get this warning:

Solve was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result
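Since both matrices here are symmetric positive definite, one way around the inexact-coefficient issue is a direct construction: with Cholesky factors K1 = L1·L1ᵀ and K2 = L2·L2ᵀ, every solution has the form H = L1⁻ᵀ·O·L2ᵀ for some orthogonal O, and the constraint H(1,1) = 1 then fixes the rotation angle. A sketch of that idea in Python/NumPy (switching language only for illustration; it assumes K1, K2 are SPD and that the resulting angle equation is solvable, which it is for these numbers):

```python
import numpy as np

K1 = np.array([[10385.46, -306.12], [-306.12, 225.8]])
K2 = np.array([[10049.794, 378.59], [378.59, 1807.16]])

L1 = np.linalg.cholesky(K1)        # K1 = L1 @ L1.T
L2 = np.linalg.cholesky(K2)        # K2 = L2 @ L2.T

# Any H = inv(L1.T) @ O @ L2.T with O orthogonal satisfies
#   H.T @ K1 @ H = L2 @ O.T @ O @ L2.T = K2.
# Take O = rotation(theta) and solve H[0,0] = 1 for theta:
r = np.linalg.inv(L1.T)[0]         # first row of inv(L1.T)
a, b = r[0] * L2[0, 0], r[1] * L2[0, 0]
# H[0,0] = a*cos(theta) + b*sin(theta) = 1
amp, phi = np.hypot(a, b), np.arctan2(b, a)
theta = phi + np.arccos(1.0 / amp)  # requires amp >= 1 (holds here)
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H = np.linalg.inv(L1.T) @ O @ L2.T
```

Note the solution is not unique: the other sign of the arccos branch, and reflections (det O = -1), give further valid H, which is why Solve returns several roots.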

## Positive semi-definite block diagonal covariance matrix with exponential decay

I am implementing Kalman filtering in R. Part of the problem involves generating a really huge error covariance block-diagonal matrix (dim: 18000 rows x 18000 columns = 324,000,000 entries). We denote this matrix Q. This Q matrix is multiplied by another huge rectangular matrix called the linear operator, denoted by H.

I am able to construct these matrices, but doing so takes a lot of memory and hangs my computer. I am looking for ways to make my code efficient, or to do the matrix multiplications without explicitly creating the matrices.

    library(lattice)
    library(Matrix)
    library(ggplot2)

    nrows <- 125
    ncols <- 172
    p <- ncols*nrows

    #--------------------------------------------------------------#
    # Compute Qf.OSI, the "constant" model error covariance matrix #
    #--------------------------------------------------------------#

    Qvariance <- 1
    Qrho <- 0.8

    Q <- matrix(0, p, p)

    for (alpha in 1:p)
    {
      JJ <- (alpha - 1) %% nrows + 1
      II <- ((alpha - JJ)/nrows) + 1   # divide by nrows (not ncols) so II is an integer
      #print(paste(II, JJ))

      for (beta in alpha:p)
      {
        LL <- (beta - 1) %% nrows + 1
        KK <- ((beta - LL)/nrows) + 1  # same fix as for II

        d <- sqrt((LL - JJ)^2 + (KK - II)^2)
        #print(paste(II, JJ, KK, LL, "d = ", d))

        Q[alpha, beta] <- Q[beta, alpha] <- Qvariance*(Qrho^d)
      }
    }

    # dn <- (det(Q))^(1/p)
    # print(dn)
    # Determinant of Q is 0
    # Sum of the eigenvalues of Q is equal to p

    #-------------------------------------------#
    # Create a block-diagonal covariance matrix #
    #-------------------------------------------#

    Qf.OSI <- as.matrix(bdiag(Q, Q))

    print(paste("Dimension of the forecast error covariance matrix, Qf.OSI:"))
    print(dim(Qf.OSI))

It takes a long time to create the matrix Qf.OSI in the first place. I am then pre- and post-multiplying Qf.OSI with a linear operator matrix, H, of dimension 48 x 18000; the resulting HQf.OSIHt is finally a 48×48 matrix. What is an efficient way to generate the Q matrix? The above form for Q is one of many in the literature; the image below shows yet another form (the Balgovind form), which I haven’t implemented but assume is equally time-consuming to generate in R.
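The double loop can be replaced by one vectorized distance computation, and the block-diagonal product never needs the full Qf: since Qf = blockdiag(Q, Q), we have H·Qf·Hᵀ = H₁·Q·H₁ᵀ + H₂·Q·H₂ᵀ, where H = [H₁ H₂]. A sketch of both ideas in Python/NumPy (small grid and a random H for illustration; the same vectorization carries over to R with `outer`/`dist`):

```python
import numpy as np

nrows, ncols = 5, 7            # small stand-in for the real 125 x 172 grid
p = nrows * ncols
Qrho, Qvar = 0.8, 1.0

# (column, row) coordinates of every grid cell
coords = np.array([(k, j) for k in range(ncols) for j in range(nrows)], float)

# all pairwise Euclidean distances in one vectorized step
diff = coords[:, None, :] - coords[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
Q = Qvar * Qrho ** D           # the p x p covariance block, no loops

# H @ blockdiag(Q, Q) @ H.T without ever forming the 2p x 2p matrix:
rng = np.random.default_rng(0)
H = rng.random((4, 2 * p))     # stand-in for the 48 x 18000 operator
H1, H2 = H[:, :p], H[:, p:]
HQHt = H1 @ Q @ H1.T + H2 @ Q @ H2.T
```

With the block identity, peak memory is one p × p block rather than the 2p × 2p composite, and the distance matrix is built in a single pass.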

## Why doesn’t my code fill in the augmented matrix properly?

I’m trying to input data from a training file. It is skipping the first row entirely.

    double** augmentMatrix(double** matrix, int dim) {
      double** identityMatrix = CreateIdentityMatrix(dim, dim); // creates identity matrix
      double** augmentedMatrix = (double**)malloc(sizeof(double*) * dim);

      for (int row = 0; row < dim; row++) {
        augmentedMatrix[row] = (double*)malloc(sizeof(double) * dim * 2);
      } // allocates space

      for (int row = 0; row < dim; row++) {
        for (int col = 0; col < dim; col++) {
          augmentedMatrix[row][col] = matrix[row][col]; // left half: copy of the input matrix
        }
        for (int col = dim; col < 2 * dim; col++) {
          augmentedMatrix[row][col] = identityMatrix[row][col - dim]; // right half: identity
        }
      }
      printMatrix(augmentedMatrix, dim, dim * 2);
      return augmentedMatrix;
    }

Here is what it is outputting:

    0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0,
    26.2, 84.8, 61.9, 58900.0, 51789.8, 0.0, 1.0, 0.0, 0.0, 0.0,
    6493.8, 19487.8, 12220.3, 9864262.5, 12861241.2, 0.0, 0.0, 1.0, 0.0, 0.0,
    21384.0, 70762.0, 55533.2, 57206075.0, 42298973.0, 0.0, 0.0, 0.0, 1.0, 0.0,
    19713.0, 61172.0, 41552.0, 39621035.0, 38865313.0, 0.0, 0.0, 0.0, 0.0, 1.0,

Here is the training data file:

    18.0, 55.0, 37.0, 33025.0, 35598.0,
    26.2, 84.8, 61.9, 58900.0, 51789.8,
    6493.8, 19487.8, 12220.3, 9864262.5, 12861241.2,
    21384.0, 70762.0, 55533.2, 57206075.0, 42298973.0,
    19713.0, 61172.0, 41552.0, 39621035.0, 38865313.0,

## Generate random matrix and its inverse

I want to randomly generate a pair of invertible matrices $$A,B$$ that are inverses of each other. In other words, I want to sample uniformly at random from the set of pairs $$A,B$$ of matrices such that $$AB=BA=\text{Id}$$.

Is there an efficient way to do this? Can we do it with expected running time approaching $$O(n^2)$$?

Assume we are working with $$n\times n$$ boolean matrices (all entries 0 or 1, arithmetic done modulo 2). I am fine with an approximate algorithm (say, one that samples from a distribution exponentially close to the desired distribution). My motivation is solely curiosity; I have no practical application in mind.

The obvious approach is to generate a random invertible matrix $$A$$, compute its inverse, and set $$B=A^{-1}$$. This has running time $$O(n^\omega)$$, where $$\omega$$ is the matrix multiplication exponent (in practice, something in the vicinity of $$O(n^3)$$). Can we do better?

An approach that occurred to me is to choose a set $$T$$ of simple linear transformations on matrices such that, for each $$t \in T$$, we can apply the modifications $$M \mapsto tM$$ and $$M \mapsto Mt^{-1}$$ in $$O(1)$$ time. Then we could set $$A_0=B_0=\text{Id}$$ and, in step $$i$$, sample a random $$t$$ from $$T$$, set $$A_{i+1}=tA_i$$ and $$B_{i+1}=B_it^{-1}$$, repeating for some number of steps (say $$O(n^2 \log n)$$ iterations). However, I’m not sure how one would prove how quickly this approaches the desired distribution.
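For concreteness, the baseline approach (rejection-sample $$A$$, invert over GF(2)) can be sketched as follows; a uniformly random 0/1 matrix is invertible mod 2 with probability $$\prod_{k\ge 1}(1-2^{-k}) \approx 0.289$$, so the rejection loop terminates after a constant expected number of draws. A Python sketch:

```python
import numpy as np

def gf2_inv(A):
    """Invert A over GF(2) by Gauss-Jordan elimination on [A | I];
    returns None if A is singular mod 2."""
    n = A.shape[0]
    M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        if pivot is None:
            return None                      # no pivot: singular
        M[[col, pivot]] = M[[pivot, col]]    # swap the pivot row up
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]               # XOR = addition mod 2
    return M[:, n:]                          # right half is now A^{-1}

def random_inverse_pair(n, rng=None):
    """Rejection-sample a uniformly random invertible A over GF(2)
    (success probability ~0.289 per draw) and return (A, A^{-1})."""
    if rng is None:
        rng = np.random.default_rng()
    while True:
        A = rng.integers(0, 2, (n, n))
        B = gf2_inv(A)
        if B is not None:
            return A, B
```

This is the $$O(n^\omega)$$ baseline the question hopes to beat; the random-walk idea above would replace the explicit inversion with $$O(1)$$-per-step updates, at the cost of the unresolved mixing-time analysis.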

## Grey level co-occurrence matrix features

I use MATLAB to extract grey-level co-occurrence matrix features from a set of images as a data set. When I calculated the matrix on two different computers, the results were different, even though I used the same data set and the same code. Why do the results differ between the two computers?