Plotting an integral of exponential functions

I am trying to plot an integral that involves an exponential function. My code is as follows:

L[\[Alpha]_] := NIntegrate[
   1/(k + I*0.1) *
    (Exp[I*k*x] (Exp[Sqrt[k^2 + \[Alpha]/w^2]*w] - 1) (Exp[k*w] - 1 + I*0.1) Sqrt[k^2 + \[Alpha]/w^2]) /
    ((Sqrt[k^2 + \[Alpha]/w^2] + k) (Exp[Sqrt[k^2 + \[Alpha]/w^2]*w - Exp[k*w]]) +
     (Sqrt[k^2 + \[Alpha]/w^2] - k) (Exp[(k + Sqrt[k^2 + \[Alpha]/w^2]) w] - 1)),
   {k, -100, 100}];
(* NB: the term Exp[Sqrt[k^2 + \[Alpha]/w^2]*w - Exp[k*w]] may contain a misplaced
   bracket; by analogy with the second denominator term it could be intended as
   Exp[Sqrt[k^2 + \[Alpha]/w^2]*w] - Exp[k*w]. *)

Plot[{Re[L[10]], Re[L[100]], Re[L[500]]}, {x, -0.45, 0.45}, PlotRange -> Full]

But the result shows a lot of oscillations, which it should not. This is Fig. 2 in the article “” that I am trying to reproduce. Any help will be highly appreciated.
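One possible source of the ripples, independent of the integrand's details, is the hard cutoff at $k = \pm 100$: truncating a Fourier-type integral whose integrand decays only like $1/k$ produces ringing of its own. A toy Python check using just the pole factor $1/(k + 0.1i)$ (an assumption standing in for the full integrand, not the formula from the article) shows the ringing shrink as the cutoff grows:

```python
import numpy as np
from scipy.integrate import quad

def truncated_fourier(x, K, eps=0.1):
    # Approximates I(K) = integral_{-K}^{K} e^{i k x} / (k + i*eps) dk.
    # Since 1/(k + i*eps) = (k - i*eps)/(k^2 + eps^2), split into real/imag parts.
    re = quad(lambda k: (k*np.cos(k*x) + eps*np.sin(k*x)) / (k**2 + eps**2),
              -K, K, limit=4000)[0]
    im = quad(lambda k: (k*np.sin(k*x) - eps*np.cos(k*x)) / (k**2 + eps**2),
              -K, K, limit=4000)[0]
    return re + 1j*im

x, eps = -0.2, 0.1
exact = -2j*np.pi*np.exp(eps*x)     # K -> infinity limit (residue at k = -i*eps)
err_small_K = abs(truncated_fourier(x, 50) - exact)
err_large_K = abs(truncated_fourier(x, 1000) - exact)
# The truncation error decays only like 1/(K|x|), so a cutoff of 100 leaves visible ripples.
```

If the full integrand behaves similarly, extending the cutoff (or letting `NIntegrate` handle the infinite range with an oscillatory method) is worth trying before suspecting the formula itself.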

TQBF PSPACE-complete: why is this algorithm exponential but Savitch's not?

So this is a question pertaining to the proof of PSPACE-completeness (for TQBF, for example). The idea is first to prove that the language $L$ is in PSPACE (the easy part) and then to prove that it is PSPACE-complete. The latter requires demonstrating an algorithm that decides $L$ in polynomial space. This is usually achieved by making recursive calls in such a way that space is re-used.

In the TQBF proof, the recurrence $\phi_{i+1}(A,B) = \exists Z\,[\phi_i(A,Z) \land \phi_i(Z,B)]$ (where $Z$ is the mid-point configuration) is the default recursive relation for computing the truth of the TQBF instance. In any standard proof, it is noted that $\phi_i$ is copied twice on the right-hand side, so for $m$ nodes the formula explodes in size; hence a different recursive relation (one using a $\forall$ quantifier) must be used to bound the formula length.

In Savitch's proof, however, the recursive relation is: $Path(a,b,t)$ — if $Path(a,mid,t-1)$ and $Path(mid,b,t-1)$ both accept, then ACCEPT. In the proof, it is stated that this relation re-uses space.

My question is: why does the space explode for the TQBF relation, while for $Path$ it is re-used? Both relations look more or less the same to me, because both refer to two instances one level down and will need space to store them.
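A toy sketch (hypothetical code, not taken from either proof) may make the difference concrete: *evaluating* the recursion runs the two subcalls one after the other on the same stack, so the space is proportional to the depth, whereas *writing out* the formula copies the previous level twice verbatim, so its length doubles at every level:

```python
def reach(adj, a, b, t):
    # Savitch-style *evaluation*: the two recursive calls run sequentially,
    # so their stack space is reused; total space is O(t) frames.
    if t == 0:
        return a == b or b in adj.get(a, ())
    return any(reach(adj, a, m, t - 1) and reach(adj, m, b, t - 1)
               for m in adj)          # m ranges over midpoint candidates

def formula_size(t):
    # Naive *textual* expansion: phi_t contains two verbatim copies of
    # phi_{t-1}, so its size satisfies s(t) = 2*s(t-1) + O(1) = Theta(2^t).
    if t == 0:
        return 1
    return 2 * formula_size(t - 1) + 1

adj = {"a": ("b",), "b": ("c",), "c": ()}
print(reach(adj, "a", "c", 2))   # True: a -> b -> c is reachable
print(formula_size(10))          # 2047, versus about 10 stack frames for reach
```

So the two relations really are "the same"; what differs is whether the right-hand side is evaluated (space re-used) or written down as a formula (size doubled).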

Positive semi-definite block diagonal covariance matrix with exponential decay

I am implementing Kalman filtering in R. Part of the problem involves generating a very large block-diagonal error covariance matrix (dimension: 18000 rows × 18000 columns = 324,000,000 entries). We denote this matrix Q. This Q matrix is multiplied by another large rectangular matrix called the linear operator, denoted by H.

I am able to construct these matrices, but doing so takes a lot of memory and hangs my computer. I am looking for ways to make my code efficient, or to do the matrix multiplications without actually forming the matrices explicitly.

library(lattice)
library(Matrix)
library(ggplot2)

nrows <- 125
ncols <- 172
p <- ncols*nrows

#--------------------------------------------------------------#
# Compute Qf.OSI, the "constant" model error covariance matrix #
#--------------------------------------------------------------#

Qvariance <- 1
Qrho <- 0.8

Q <- matrix(0, p, p)

for (alpha in 1:p)
{
  JJ <- (alpha - 1) %% nrows + 1
  II <- ((alpha - JJ)/nrows) + 1   # divide by nrows (not ncols) so II is an integer grid index
  #print(paste(II, JJ))

  for (beta in alpha:p)
  {
    LL <- (beta - 1) %% nrows + 1
    KK <- ((beta - LL)/nrows) + 1

    d <- sqrt((LL - JJ)^2 + (KK - II)^2)
    #print(paste(II, JJ, KK, LL, "d = ", d))

    Q[alpha, beta] <- Q[beta, alpha] <- Qvariance*(Qrho^d)
  }
}

# dn <- (det(Q))^(1/p)
# print(dn)

# Determinant of Q is 0
# Sum of the eigenvalues of Q is equal to p

#-------------------------------------------#
# Create a block-diagonal covariance matrix #
#-------------------------------------------#

Qf.OSI <- as.matrix(bdiag(Q, Q))

print("Dimension of the forecast error covariance matrix, Qf.OSI:")
print(dim(Qf.OSI))

It takes a long time to create the matrix Qf.OSI in the first place. I am then pre- and post-multiplying Qf.OSI by a linear operator matrix, H, of dimension 48 × 18000; the resulting HQf.OSIHt is finally a 48 × 48 matrix. What is an efficient way to generate the Q matrix? The above form for Q is one of many in the literature. In the image below you will see yet another form for Q (the Balgovind form), which I haven't implemented but assume is equally time-consuming to generate in R.

Balgovind form for Q
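Two ideas usually help here: vectorise the $\rho^d$ construction instead of looping over all $p^2$ pairs, and never form the $2p \times 2p$ block-diagonal at all, since $H\,\mathrm{bdiag}(Q,Q)\,H^{\mathsf T} = H_1 Q H_1^{\mathsf T} + H_2 Q H_2^{\mathsf T}$ where $H_1, H_2$ are the two column blocks of $H$. A sketch in Python/NumPy (sizes shrunk from the question's 125 × 172 so the check runs quickly; `H` is a random stand-in for the real linear operator):

```python
import numpy as np

# Demo sizes (the question uses nrows = 125, ncols = 172 and two blocks).
nrows, ncols, rho = 12, 17, 0.8
p = nrows * ncols

# Vectorised construction of one block: Q[a, b] = rho ** dist(a, b) on the grid.
rr, cc = np.meshgrid(np.arange(nrows), np.arange(ncols), indexing="ij")
r, c = rr.ravel(), cc.ravel()                    # grid coordinates of the p points
d = np.hypot(r[:, None] - r[None, :], c[:, None] - c[None, :])
Q = rho ** d                                     # p x p, no explicit loops

# H @ bdiag(Q, Q) @ H.T without ever forming the 2p x 2p block-diagonal:
rng = np.random.default_rng(0)
H = rng.standard_normal((48, 2 * p))             # stand-in for the linear operator
HQHt = H[:, :p] @ Q @ H[:, :p].T + H[:, p:] @ Q @ H[:, p:].T   # 48 x 48
```

The same block trick carries over directly to R (`H[, 1:p] %*% Q %*% t(H[, 1:p]) + ...` with `outer()` for the distance matrix), and it halves the memory footprint even before any vectorisation gains.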

Exponential Deconvolution Using the First Derivative

There is an interesting observation using the first derivative to deconvolve an exponentially modified Gaussian:

The animation is here:

The main idea is that if we have an exponentially modified Gaussian (EMG) function and we add a small fraction of its first derivative to the original EMG, the original Gaussian is recovered while the original peak area is preserved. The constant multiplier on the derivative is set by the time constant of the EMG. This is a very useful property.

Has anyone seen this deconvoluting property of the first derivative mentioned elsewhere in the mathematical literature? An early reference from the 1960s, from a chemistry paper, shows a similar picture. This observation was found just by chance; I am looking for a fundamental connection, and for whether the first derivative can be used to deconvolve other types of convolutions besides exponential ones.
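For what it's worth, the property can be verified numerically under one common parametrisation (an assumption, since the post doesn't fix one): if the EMG is the Gaussian $g$ convolved with the causal kernel $h(t) = \tau^{-1} e^{-t/\tau}$, then in the Fourier domain $\hat y = \hat g/(1 + i\omega\tau)$, so $y + \tau\,y' = g$ exactly, with the time constant $\tau$ as the multiplier. A Python sketch (grid sizes and $\tau$ are arbitrary choices):

```python
import numpy as np

tau, dt = 2.0, 0.01
t = np.arange(-10, 30, dt)

g = np.exp(-0.5 * (t - 5.0)**2) / np.sqrt(2 * np.pi)   # unit-area Gaussian
s = np.arange(0, 20, dt)
h = np.exp(-s / tau) / tau                              # causal exponential kernel

y = np.convolve(g, h)[:len(t)] * dt                     # EMG = g * h (discretised)
recovered = y + tau * np.gradient(y, dt)                # add tau times the derivative

# recovered is (numerically) the original Gaussian, with the area preserved.
```

Since convolution kernels other than the one-sided exponential generally give transfer functions that are not of the form $1/(1 + i\omega\tau)$, a single first-derivative correction cannot undo them exactly, which may be the fundamental connection being asked about.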

Thanks.

Sharpening by using the first derivative

Ref: J. W., and Charles N. Reilley. “De-tailing and sharpening of response peaks in gas chromatography.” Analytical Chemistry 37 (1965): 626–630.

Formula for exponential integral over a cone

While reading ‘Computing the Volume, Counting Integral Points, and Exponential Sums’ by A. Barvinok (1993), I came across the following:

“Moreover, let $K$ be the conic hull of linearly independent vectors $u_{1}, \ldots, u_{n}$, so $K = \mathrm{co}(u_{1}, \ldots, u_{n})$. Denote by $K^{*} = \{x \in \mathbb{R}^{n} : \langle x, y\rangle \le 0 \text{ for all } y \in K\}$ the polar of $K$. Then for all $c \in \operatorname{Int} K^{*}$ we have

\begin{equation} \int_{K} \exp(\langle c, x\rangle)\,dx = |u_{1} \wedge \cdots \wedge u_{n}| \prod_{i=1}^{n} \langle -c, u_{i}\rangle^{-1} \end{equation}

To obtain the last formula we have to apply a suitable linear transformation to the previous integral. “

I have tried proving this but I can't find relevant references to help me. Also, I'm unsure what $|u_{1} \wedge \cdots \wedge u_{n}|$ is supposed to mean. I would greatly appreciate it if someone could point me to a relevant resource or provide a proof. Thanks!
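For what it's worth, the “suitable linear transformation” seems to be just the substitution $x = \sum_i t_i u_i$ over $t \in [0,\infty)^n$; assuming $|u_1 \wedge \cdots \wedge u_n|$ denotes $|\det(u_1,\ldots,u_n)|$ (the volume of the parallelepiped spanned by the $u_i$), a sketch is:

\begin{align*}
\int_K e^{\langle c,x\rangle}\,dx
&= |\det(u_1,\ldots,u_n)| \int_{[0,\infty)^n} \exp\Big(\sum_{i=1}^n t_i \langle c,u_i\rangle\Big)\,dt \\
&= |u_1 \wedge \cdots \wedge u_n| \prod_{i=1}^n \int_0^\infty e^{t_i \langle c,u_i\rangle}\,dt_i
 = |u_1 \wedge \cdots \wedge u_n| \prod_{i=1}^n \langle -c,u_i\rangle^{-1},
\end{align*}

where each one-dimensional integral converges because $c \in \operatorname{Int} K^{*}$ gives $\langle c,u_i\rangle < 0$.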

Exponential decay of a $J$-holomorphic map on a long cylinder

Suppose $(X,\omega,J)$ is a closed symplectic manifold with a compatible almost complex structure. The fact below follows from McDuff–Salamon's book on $J$-holomorphic curves (specifically, Lemma 4.7.3).

Given $0<\mu<1$, there exist constants $0<C<\infty$ and $\hbar>0$ such that the following property holds. Given a $J$-holomorphic map $u:(-R-1,R+1)\times S^1\to X$ with energy $E(u)<\hbar$ defined on an annulus (with $R>0$), we have the exponential decay estimates

(1) $E(u|_{[-R+T,R-T]\times S^1})\le C^2e^{-2\mu T}E(u)$

(2) $\sup_{[-R+T,R-T]\times S^1}\|du\|\le Ce^{-\mu T}\sqrt{E(u)}$

for all $0\le T\le R$. Here, we take $S^1 = \mathbb{R}/2\pi\mathbb{Z}$ and use the standard flat metric on the cylinder $\mathbb{R}\times S^1$ and the metric on $X$ to measure the norm $\|du\|$.

Now, if $J$ were integrable, we could actually improve this estimate as follows: at the expense of decreasing $\hbar$ and increasing $C$, we can take $\mu=1$ in (1) and (2) above. The idea would be to use (2) to deduce that $u|_{[-R,R]\times S^1}$ maps into a complex coordinate neighborhood of $X$, where the Fourier expansion of $u$ along the cylinder $[-R,R]\times S^1$ yields the desired estimate.
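For reference, the integrable-case argument can be sketched as follows. In a holomorphic chart, $u$ is a holomorphic function of $s+it$ on the cylinder, so it admits an expansion
$$ u(s,t) = a_0 + \sum_{n\neq 0} a_n\, e^{n(s+it)}, $$
and the energy integrates mode by mode; the slowest-decaying modes are $n=\pm 1$, contributing $e^{\pm 2s}$ to the energy density, which gives (1) with $\mu = 1$, and (2) then follows from standard elliptic estimates.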

I would like to know: is it possible to improve the estimate to $\mu=1$ also when $J$ is not integrable? If so, a proof with some details or a reference would be appreciated. If not, what is the reason, and is it possible to give a (counter)example illustrating this?

Can any NP-complete problem be solved using at most polynomial space (but possibly exponential time)?

I have read about NPC and its relationship to PSPACE, and I wish to know whether NPC problems can be solved deterministically by an algorithm whose worst-case space requirement is polynomial, but which potentially takes exponential time ($2^{P(n)}$, where $P$ is a polynomial).

Moreover, can this be generalised to all of EXPTIME?

The reason I am asking is that I wrote some programs to solve degenerate cases of an NPC problem, and they can consume very large amounts of RAM for hard instances, so I wonder whether there is a better way. For reference see .
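For a concrete instance of the trade-off in the question, a brute-force satisfiability check (a hypothetical toy, with clauses given as lists of signed integers) takes $2^n$ time but only ever stores the current assignment, i.e., linear space:

```python
from itertools import product

def brute_force_sat(clauses, n):
    # clauses: iterable of clauses; a literal v > 0 means x_v, v < 0 means NOT x_|v|.
    # Tries all 2^n assignments (exponential time), but product() is lazy,
    # so only the current n-bit assignment is ever stored (linear space).
    for bits in product((False, True), repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

print(brute_force_sat([[1, 2], [-1, 2]], 2))   # True  (x2 = True satisfies both)
print(brute_force_sat([[1], [-1]], 1))         # False (x1 and NOT x1)
```

The same certificate-enumeration pattern works for any NP problem: iterate over all polynomial-length certificates, re-using the space of the polynomial-time verifier on each one.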

Exponential search worst-case performance

I'm learning about exponential search, and I keep reading that the worst-case performance is $\log_2(i)$, where $i$ is the index of the element being searched for.

I tried it with an array containing $1073741824$ elements ($1024 \times 1024 \times 1024$) and searched for the $1073741823$-th element (which is just $n-1$).

I used exponential search to get the range containing $i$ (which was $[536870912, 1073741824]$), and that took $\log_2(n)$ steps to find. Then it took another $\log_2(n/2)$ steps to find the right index using binary search within this range only.

Am I doing something wrong? Or is the worst-case complexity actually $2\log_2(n)-1$?
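A quick way to check is to count element inspections directly. A sketch (my own implementation, counting one "probe" per array element looked at): for the last element of a length-$2^{20}$ array, the doubling phase inspects about $\log_2 n$ elements and the binary search another $\log_2(n/2)$, roughly $2\log_2 n$ in total — which is still $O(\log i)$, since big-O drops the constant factor:

```python
def exponential_search(arr, target):
    # Returns (index or -1, probes); probes counts array elements inspected.
    probes = 0
    bound = 1
    while bound < len(arr):          # doubling phase: 1, 2, 4, 8, ...
        probes += 1
        if arr[bound] >= target:
            break
        bound *= 2
    lo, hi = bound // 2, min(bound, len(arr) - 1)
    while lo <= hi:                  # binary search inside [bound/2, bound]
        mid = (lo + hi) // 2
        probes += 1
        if arr[mid] < target:
            lo = mid + 1
        elif arr[mid] > target:
            hi = mid - 1
        else:
            return mid, probes
    return -1, probes

n = 2 ** 20
idx, probes = exponential_search(list(range(n)), n - 1)
# idx == n - 1, and probes is close to 2 * log2(n) = 40
```

So the $\approx 2\log_2 i$ count observed in the question is the expected behaviour; the $\log_2(i)$ statement refers to the asymptotic class, not the exact number of comparisons.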