Positive semi-definite block diagonal covariance matrix with exponential decay

I am implementing Kalman filtering in R. Part of the problem involves generating a very large block-diagonal error covariance matrix (dimension: 18000 rows x 18000 columns = 324,000,000 entries), which we denote Q. This Q matrix is multiplied by another large rectangular matrix, the linear operator, denoted H.

I am able to construct these matrices, but doing so takes a lot of memory and hangs my computer. I am looking for ways to make my code more efficient, or to perform the matrix multiplications without explicitly creating the matrices.

    library(lattice)
    library(Matrix)
    library(ggplot2)

    nrows <- 125
    ncols <- 172
    p <- ncols*nrows

    #--------------------------------------------------------------#
    # Compute Qf.OSI, the "constant" model error covariance matrix #
    #--------------------------------------------------------------#

    Qvariance <- 1
    Qrho <- 0.8

    Q <- matrix(0, p, p)

    for (alpha in 1:p)
    {
      # Map the linear index alpha to grid coordinates (II, JJ);
      # alpha runs down the nrows entries of each column first (column-major)
      JJ <- (alpha - 1) %% nrows + 1
      II <- ((alpha - JJ)/nrows) + 1
      #print(paste(II, JJ))

      for (beta in alpha:p)
      {
        LL <- (beta - 1) %% nrows + 1
        KK <- ((beta - LL)/nrows) + 1

        # Euclidean distance between the two grid points, then exponential decay
        d <- sqrt((LL - JJ)^2 + (KK - II)^2)
        #print(paste(II, JJ, KK, LL, "d = ", d))

        Q[alpha, beta] <- Q[beta, alpha] <- Qvariance*(Qrho^d)
      }
    }

    # dn <- (det(Q))^(1/p)
    # print(dn)

    # Determinant of Q is 0
    # Sum of the eigen values of Q is equal to p

    #-------------------------------------------#
    # Create a block-diagonal covariance matrix #
    #-------------------------------------------#

    Qf.OSI <- as.matrix(bdiag(Q, Q))

    print("Dimension of the forecast error covariance matrix, Qf.OSI:"); print(dim(Qf.OSI))
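For the construction step itself, one alternative is to build the grid coordinates once and let dist() compute all pairwise distances, which removes the double loop entirely. This is only a sketch under the same column-major grid layout as the loop above (the names coords and D are mine), not a tested drop-in replacement:

    # Vectorized construction of the exponential-decay covariance (sketch)
    nrows <- 125
    ncols <- 172
    p     <- nrows * ncols
    Qvariance <- 1
    Qrho      <- 0.8

    # Grid coordinates listed in the same column-major order as the loop index alpha
    coords <- expand.grid(row = 1:nrows, col = 1:ncols)

    # All pairwise Euclidean distances on the grid, then the exponential decay
    D <- as.matrix(dist(coords))   # dense p x p matrix, roughly 3.7 GB for p = 21500
    Q <- Qvariance * Qrho^D

This removes the interpreted double loop over p(p+1)/2 pairs, but Q is still a dense p x p object, so it does not by itself solve the memory problem.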

It takes a long time to create the matrix Qf.OSI in the first place. I then pre- and post-multiply Qf.OSI by a linear operator matrix H of dimension 48 x 18000; the resulting H Qf.OSI Ht is a 48 x 48 matrix. What is an efficient way to generate the Q matrix? The form above is only one of many in the literature; the image below shows yet another form for Q (the Balgovind form), which I haven't implemented but assume is equally time-consuming to generate in R.

[Image: Balgovind form for Q]
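Regarding the product H Qf.OSI Ht: since Qf.OSI = bdiag(Q, Q), the block-diagonal matrix never has to be formed. Splitting H into its two column blocks gives H Qf.OSI Ht = H1 Q H1t + H2 Q H2t, and Q itself can be generated in row chunks on the fly. The sketch below assumes H is a dense 48 x (2*p) matrix whose first p columns act on the first block; the helper name QHt and the chunk size are mine:

    # Sketch: compute H %*% Qf.OSI %*% t(H) without forming Q or bdiag(Q, Q)
    p      <- nrows * ncols
    coords <- expand.grid(row = 1:nrows, col = 1:ncols)

    H1 <- H[, 1:p]            # first column block of H (assumed layout)
    H2 <- H[, (p + 1):(2*p)]  # second column block of H

    # Q %*% t(Hblk), built from row chunks of Q generated on the fly
    QHt <- function(Hblk, coords, Qvariance, Qrho, chunk = 1000L) {
      p   <- nrow(coords)
      out <- matrix(0, p, nrow(Hblk))
      for (start in seq(1L, p, by = chunk)) {
        idx  <- start:min(start + chunk - 1L, p)
        # distances from the idx rows of the grid to all grid points
        Dblk <- sqrt(outer(coords$row[idx], coords$row, "-")^2 +
                     outer(coords$col[idx], coords$col, "-")^2)
        out[idx, ] <- (Qvariance * Qrho^Dblk) %*% t(Hblk)
      }
      out
    }

    HQHt <- H1 %*% QHt(H1, coords, Qvariance, Qrho) +
            H2 %*% QHt(H2, coords, Qvariance, Qrho)   # 48 x 48

Each chunk only needs a chunk x p slice of the distance matrix (about 170 MB for chunk = 1000 and p = 21500), so peak memory stays modest.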
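On the Balgovind form: if the formula in the image is the usual second-order autoregressive correlation, Q_ij = sigma^2 * (1 + d_ij/L) * exp(-d_ij/L) with a length scale L (that is my assumption; the formula in the image is authoritative), it vectorizes in exactly the same way from the distance matrix D above:

    Lscale <- 5   # hypothetical length scale; take the value from the Balgovind formula you use
    Q.balgovind <- Qvariance * (1 + D/Lscale) * exp(-D/Lscale)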

Exponential decay of a $J$-holomorphic map on a long cylinder

Suppose $(X,\omega,J)$ is a closed symplectic manifold with a compatible almost complex structure. The fact below follows from McDuff-Salamon's book on $J$-holomorphic curves (specifically, Lemma 4.7.3).

Given $0<\mu<1$, there exist constants $0<C<\infty$ and $\hbar>0$ such that the following property holds. Given a $J$-holomorphic map $u:(-R-1,R+1)\times S^1\to X$ with energy $E(u)<\hbar$ defined on an annulus (with $R>0$), we have the exponential decay estimates

(1) $E(u|_{[-R+T,R-T]\times S^1})\le C^2e^{-2\mu T}E(u)$

(2) $\sup_{[-R+T,R-T]\times S^1}\|du\|\le Ce^{-\mu T}\sqrt{E(u)}$

for all $0\le T\le R$. Here, we take $S^1 = \mathbb R/2\pi\mathbb Z$ and use the standard flat metric on the cylinder $\mathbb R\times S^1$ and the metric on $X$ to measure the norm $\|du\|$.

Now, if $J$ were integrable, we could improve this estimate as follows: at the expense of decreasing $\hbar$ and increasing $C$, we can take $\mu=1$ in (1) and (2) above. The idea is to use (2) to deduce that $u|_{[-R,R]\times S^1}$ maps into a complex coordinate neighborhood of $X$, where we can use the Fourier expansion of $u$ along the cylinder $[-R,R]\times S^1$ to obtain the desired estimate.
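For concreteness, here is the flat local computation behind that last step, as a sketch only (with the normalization $E(u)=\tfrac12\int|du|^2$ and constants not optimized). If $u$ maps $[-R,R]\times S^1$ holomorphically into a chart in $\mathbb C^n$, it has a Laurent expansion, and since $|du|^2=2|\partial_s u|^2$ for holomorphic maps,
$$u(s,t)=\sum_{n\in\mathbb Z}a_n\,e^{n(s+it)},\qquad E\bigl(u|_{[a,b]\times S^1}\bigr)=2\pi\sum_{n\neq 0}n^2|a_n|^2\int_a^b e^{2ns}\,ds.$$
For every $n\neq 0$ one checks $\int_{-R+T}^{R-T}e^{2ns}\,ds\le e^{-2|n|T}\int_{-R}^{R}e^{2ns}\,ds$, so summing over $n$ gives $E(u|_{[-R+T,R-T]\times S^1})\le e^{-2T}\,E(u|_{[-R,R]\times S^1})$, which is $\mu=1$ in (1); the pointwise bound (2) then follows from the same expansion.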

I would like to know: is it possible to improve the estimate to $\mu=1$ also in the case when $J$ is not integrable? If so, a proof with some details or a reference would be appreciated. If not, what is the reason, and is it possible to come up with a (counter-)example to illustrate this?