Exponential Deconvolution Using the First Derivative

There is an interesting observation about using the first derivative to deconvolve an exponentially modified Gaussian.

The animation is here: https://terpconnect.umd.edu/~toh/spectrum/SymmetricalizationAnimation.gif

The main idea is that if we have an exponentially modified Gaussian (EMG) — a Gaussian convolved with a one-sided exponential — and we add a small fraction of its first derivative to the original EMG, we recover the original Gaussian while preserving the original area. The constant multiplier on the derivative is the time constant $\tau$ of the EMG's exponential. This is a very useful property.
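For illustration, here is a minimal numerical sketch of the property (assuming the EMG is a unit-area Gaussian convolved with the one-sided exponential $(1/\tau)e^{-t/\tau}$ for $t\ge 0$; all names are my own):

```python
import numpy as np

# Sketch: build an EMG by convolving a unit-area Gaussian with a unit-area
# one-sided exponential of time constant tau, then add tau times the first
# derivative and compare against the original Gaussian.
tau = 0.5
t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]

gauss = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)         # unit-area Gaussian
expo = np.where(t >= 0, np.exp(-t / tau) / tau, 0.0)   # unit-area exponential
emg = np.convolve(gauss, expo, mode="same") * dt       # the EMG, h = g * e

recovered = emg + tau * np.gradient(emg, dt)           # g ~ h + tau * h'

print("max |recovered - gauss|:", np.abs(recovered - gauss).max())   # small
print("areas (EMG, recovered):", emg.sum() * dt, recovered.sum() * dt)
```

One way to see why this works: in the Fourier domain, convolution with the unit-area exponential multiplies the spectrum by $1/(1+i\omega\tau)$, while the operator $h \mapsto h + \tau h'$ multiplies it by $1+i\omega\tau$, so the two cancel exactly, and the zero-frequency component (the area) is untouched.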

Has anyone seen this deconvolution property of the first derivative mentioned elsewhere in the mathematical literature? An early reference from the 1960s, a chemistry paper, shows a similar picture. This observation was found by chance; I am looking for a fundamental connection, and for whether the first derivative can be used to deconvolve other types of convolutions besides exponential ones.

Thanks.

Ref. (sharpening by using the first derivative): Ashley, J. W., and Charles N. Reilley. “De-tailing and sharpening of response peaks in gas chromatography.” Analytical Chemistry 37 (1965): 626-630.

Formula for exponential integral over a cone

While reading ‘Computing the Volume, Counting Integral Points, and Exponential Sums’ by A. Barvinok (1993), I came across the following:

“Moreover, let $K$ be the conic hull of linearly independent vectors $u_{1}, \dots, u_{n}$, so $K = \mathrm{co}(u_{1}, \dots, u_{n})$. Denote by $K^{*} = \{x\in \mathbb{R}^{n}: \langle x, y\rangle \le 0 \text{ for all } y\in K\}$ the polar of $K$. Then for all $c \in \operatorname{Int} K^{*}$ we have

\begin{equation} \int_{K}\exp(\langle c, x\rangle)\,dx = |u_{1} \wedge \dots \wedge u_{n}|\prod_{i=1}^{n}\langle -c, u_{i}\rangle^{-1} \end{equation}

To obtain the last formula we have to apply a suitable linear transformation to the previous integral.”

I have tried proving this, but I can't find relevant resources to help me. Also, I'm unsure what $|u_{1} \wedge \dots \wedge u_{n}|$ is supposed to mean. I would greatly appreciate it if someone could point me to a relevant resource or provide a proof. Thanks!
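For what it's worth, here is a sketch of the linear-transformation step, assuming $|u_{1} \wedge \dots \wedge u_{n}|$ denotes $|\det(u_{1},\dots,u_{n})|$, the volume of the parallelepiped spanned by the $u_i$. Substituting $x = \sum_i t_i u_i$ with $t_i \ge 0$ gives $dx = |\det(u_1,\dots,u_n)|\,dt$, and the integral factors:

$$\int_{K} e^{\langle c, x\rangle}\,dx = |\det(u_1,\dots,u_n)| \prod_{i=1}^{n}\int_{0}^{\infty} e^{t_i \langle c, u_i\rangle}\,dt_i = |\det(u_1,\dots,u_n)| \prod_{i=1}^{n} \frac{1}{-\langle c, u_i\rangle},$$

where each one-dimensional integral converges because $c \in \operatorname{Int} K^{*}$ forces $\langle c, u_i\rangle < 0$.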

Exponential decay of a $J$-holomorphic map on a long cylinder

Suppose $(X,\omega,J)$ is a closed symplectic manifold with a compatible almost complex structure. The fact below follows from McDuff-Salamon's book on $J$-holomorphic curves (specifically, Lemma 4.7.3).

Given $0<\mu<1$, there exist constants $0<C<\infty$ and $\hbar>0$ such that the following property holds. Given a $J$-holomorphic map $u:(-R-1,R+1)\times S^1\to X$ (with $R>0$) defined on an annulus, with energy $E(u)<\hbar$, we have the exponential decay estimates

(1) $E(u|_{[-R+T,R-T]\times S^1})\le C^2e^{-2\mu T}E(u)$

(2) $\sup_{[-R+T,R-T]\times S^1}\|du\|\le Ce^{-\mu T}\sqrt{E(u)}$

for all $0\le T\le R$. Here, we take $S^1 = \mathbb{R}/2\pi\mathbb{Z}$ and use the standard flat metric on the cylinder $\mathbb{R}\times S^1$ and the metric on $X$ to measure the norm $\|du\|$.

Now, if $J$ were integrable, we could actually improve this estimate further: at the expense of decreasing $\hbar$ and increasing $C$, we can take $\mu=1$ in (1) and (2) above. The idea would be to use (2) to deduce that $u|_{[-R,R]\times S^1}$ maps into a complex coordinate neighborhood in $X$, where we can use the Fourier expansion of $u$ along the cylinder $[-R,R]\times S^1$ to obtain the desired estimate.
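For context, a minimal sketch of the Fourier argument in the integrable case (assuming $u|_{[-\rho,\rho]\times S^1}$ lands in a chart identified with an open subset of $\mathbb{C}^n$, so that it is an honest holomorphic function of $s+it$; constants here depend on normalization conventions):

$$u(s,t) = \sum_{k\in\mathbb{Z}} a_k\, e^{k(s+it)}, \qquad E\big(u|_{[-\rho,\rho]\times S^1}\big) = \pi \sum_{k\ne 0} k\,|a_k|^2\big(e^{2k\rho} - e^{-2k\rho}\big).$$

Since each nonzero mode satisfies $e^{2k(\rho-T)} - e^{-2k(\rho-T)} \le e^{-2kT}\big(e^{2k\rho} - e^{-2k\rho}\big)$ for $k>0$ (and symmetrically for $k<0$), comparing the energies mode by mode yields the decay rate $e^{-2T}$, i.e. $\mu = 1$ in (1).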

I would like to know: is it possible to improve the estimate to $\mu=1$ also in the case when $J$ is not integrable? If so, a proof with some details or a reference would be appreciated. If not, what is the reason, and is it possible to come up with a (counter-)example to illustrate this?

Can any NP-complete problem be solved using at most polynomial space (but possibly exponential time)?

I read about NP-completeness and its relationship to PSPACE, and I wish to know whether NP-complete problems can be deterministically solved using an algorithm with a worst-case polynomial space requirement, while potentially taking exponential time (2^P(n), where P is a polynomial).

Moreover, can this be generalised to all of EXPTIME?

The reason I am asking is that I wrote some programs to solve degenerate cases of an NP-complete problem, and they can consume very large amounts of RAM for hard instances, and I wonder if there is a better way. For reference, see https://fc-solve.shlomifish.org/faq.html
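For what it's worth, here is a minimal sketch of the standard argument that NP ⊆ PSPACE, using SAT as the NP-complete problem (the encoding below is just an illustrative choice): enumerate the exponentially many candidate certificates one at a time, reusing the same polynomial amount of space for each.

```python
from itertools import product

# Brute-force SAT: 2^n candidate assignments are checked one at a time,
# so space stays O(n + formula size) while worst-case time is exponential.
# Literal k > 0 means x_k; k < 0 means NOT x_k.
def brute_force_sat(num_vars, clauses):
    for bits in product([False, True], repeat=num_vars):   # 2^n candidates
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits                  # satisfying assignment found
    return None                          # unsatisfiable

# (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force_sat(3, [[1, -2], [2, 3]]))
```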

Exponential search worst-case performance

I'm learning about exponential search, and I keep reading that the worst-case performance is $\log_2(i)$, where $i$ is the index of the searched element.

I tried with an array containing $1073741824$ elements ($1024 \times 1024 \times 1024$), and I searched for the $1073741823$rd element (which is just $n-1$).

First, I used the exponential search to get the range containing $i$ (which was $[536870912; 1073741824]$), and this took $\log_2(n)$ steps. Then it took another $\log_2(n/2)$ steps to find the right index using binary search in this range only.

Am I doing something wrong? Or is the worst-case complexity in fact $2\log_2(n) - 1$?
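To make the counting concrete, here is a sketch with a comparison counter (the counter is instrumentation I added, not part of the standard algorithm):

```python
# Phase 1 doubles the bound until it passes x; phase 2 runs a binary
# search inside the final range. Each phase costs about log2(i) steps.
def exponential_search(a, x):
    comparisons = 1
    if a[0] == x:
        return 0, comparisons
    bound = 1
    while bound < len(a) and a[bound] < x:   # phase 1: ~log2(i) doublings
        comparisons += 1
        bound *= 2
    lo, hi = bound // 2, min(bound, len(a) - 1)
    while lo <= hi:                          # phase 2: binary search
        comparisons += 1
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1
        elif a[mid] > x:
            hi = mid - 1
        else:
            return mid, comparisons
    return -1, comparisons

n = 1024 * 1024 * 1024
idx, cnt = exponential_search(range(n), n - 1)   # values equal indices
print(idx, cnt)   # index n-1 found after roughly 2*log2(n) comparisons
```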

Is the Time Complexity of Trial Division Exponential?

I know that there is already a similar question on here, but after reading the wiki page on trial division I am confused, and the other answer doesn't help. The wiki page states that when doing trial division, “If a variant is used without primality testing, but simply dividing by every odd number less than the square root of the base-2 n digit number a, prime or not, the algorithm can take up to about 2^(n/2) time”.

Using such a variant, however, why wouldn't the time complexity be O(sqrt(n)) in the worst case? In the worst case you check all the numbers up to sqrt(n), so the algorithm would depend on the square root of the count of natural numbers before n, no? Also, would a variant that divides a number n by every natural number less than n until a factor is found (in order to determine whether n is prime) be linear in time?
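Here is a small sketch of the distinction at play (my own toy instrumentation): the number of divisions grows like sqrt(a) in the *value* a, which is about 2^(n/2) when n counts the *bits* of a.

```python
# Count trial divisions for prime inputs (the worst case), then compare
# against 2^(n/2), where n is the bit-length of a. Both inputs below are
# prime, so the loop runs all the way to sqrt(a).
def trial_division_steps(a):
    steps, d = 0, 2
    while d * d <= a:
        steps += 1
        if a % d == 0:
            return d, steps          # smallest factor found
        d += 1
    return a, steps                  # no divisor <= sqrt(a): a is prime

for a in (65537, 2147483647):        # a 17-bit and a 31-bit prime
    _, steps = trial_division_steps(a)
    n = a.bit_length()
    # same order of magnitude: ~sqrt(a) divisions vs 2^(n/2)
    print(f"n={n} bits: {steps} divisions, 2^(n/2) ~ {2 ** (n / 2):.0f}")
```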

Link to the wiki I'm talking about: https://en.wikipedia.org/wiki/Trial_division (quote pulled from the section titled “Speed”).

Side question: is this better suited as a new post, or should I have posted it as a follow-up to the other, similar question on trial division from four years ago? I'm new here, so I'm not sure what's preferred.

Does $\beta(n,n)$ belong to the curved exponential family?

Does the Beta distribution with parameters $(n,n)$, where $n>0$, belong to the one-parameter exponential family?

If we write down the pdf, it seems clear that it does belong to the one-parameter exponential family (OPEF). But I recently came to know that it belongs to the curved exponential family, because the parameter set $\{(n,n) : n>0\}$ does not describe an open set in $\mathbb{R}^2$.

Can anyone enlighten me about that?
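For concreteness, here is the density written in one-parameter exponential-family form (a sketch of the computation at issue; the curved/OPEF question then turns on the geometry of the parameter set):

$$f(x) = \frac{x^{n-1}(1-x)^{n-1}}{B(n,n)} = \frac{1}{x(1-x)}\exp\Big(n\big[\log x + \log(1-x)\big] - \log B(n,n)\Big), \qquad 0<x<1,$$

with natural parameter $\eta = n$ and sufficient statistic $T(x) = \log x + \log(1-x)$. Viewed as a subfamily of the two-parameter Beta family $\beta(a,b)$, the parameter set is the diagonal $\{(n,n): n>0\}$, a curve rather than an open subset of $\mathbb{R}^2$.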

Real root isolation for exponential polynomials

Suppose we are given an exponential polynomial $f:\mathbb{R}\to\mathbb{R}$, $$f(t)=\sum_{i=1}^n p_i(t)e^{\lambda_i t},$$ where the $p_i(t)$ are polynomials with algebraic coefficients and the $\lambda_i$ are algebraic.

Given an interval $(a,b)$, is there an algorithm that outputs finitely many disjoint intervals, such that each contains exactly one positive root of $f$ in $(a,b)$ and together they contain all of them?
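Not an answer to the exactness question, but a heuristic floating-point sketch of the usual isolate-by-monotonicity idea (the example $f(t) = e^{t} - 3t$ and all names are my own; a rigorous version would bound $f'$ with interval or algebraic-number arithmetic instead of sampling):

```python
import math

# Heuristic sketch: split (a, b) into small pieces; where f' appears to
# keep one sign (checked only at sample points, hence heuristic), f is
# monotone, so a sign change of f pins down exactly one root.
def isolate_roots(f, df, a, b, pieces=4096):
    intervals, step, lo = [], (b - a) / pieces, a
    for _ in range(pieces):
        hi = lo + step
        samples = [df(lo + step * k / 4) for k in range(5)]
        mono = all(s > 0 for s in samples) or all(s < 0 for s in samples)
        if mono and f(lo) * f(hi) < 0:
            intervals.append((lo, hi))   # exactly one root in (lo, hi)
        lo = hi
    return intervals

f = lambda t: math.exp(t) - 3 * t   # p_1 = -3t (lambda = 0), p_2 = 1 (lambda = 1)
df = lambda t: math.exp(t) - 3
print(isolate_roots(f, df, 0.0, 3.0))   # two intervals, near 0.619 and 1.512
```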

Multidimensional Fourier transform: inner product in exponential?

I have a basic and not very deep understanding of continuous and discrete Fourier transforms in one dimension.

In a multidimensional Fourier transform, the exponent of $e$ includes the inner product of the arguments of the original function and of the transformed function. I'm having trouble finding an explanation of why this is the right way to extend the idea of a Fourier transform to more than one dimension. What is the intuition, or perhaps a more formal reason, for this? In one dimension, the original function's and the transformed function's arguments are multiplied, and using an inner product in the multidimensional case is one natural extension of that idea, but that alone isn't enough to justify it.

I see that using the inner product in the exponential means that the integral or sum (for a discrete Fourier transform) involves a product of $\cos x_i + i \sin x_i$ factors, but I am not sure why that makes sense, or even whether that's a useful way to think about it. (I can picture waves in two real dimensions, and multiplying one-dimensional wave equations feels like a reasonable way to represent them, but I still don't have a clear understanding.)
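One concrete consequence that may help build intuition: because $e^{i\langle k, x\rangle} = \prod_j e^{i k_j x_j}$, the kernel factors, and a multidimensional transform is just a one-dimensional transform applied along each axis in turn. A quick numpy check of this (my own toy example):

```python
import numpy as np

# The 2-D DFT kernel exp(-i<k, x>) factors into 1-D kernels, so fft2
# must agree with running a 1-D FFT along each axis successively.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

full_2d = np.fft.fft2(f)
axis_by_axis = np.fft.fft(np.fft.fft(f, axis=0), axis=1)

print(np.allclose(full_2d, axis_by_axis))   # True
```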

Pointers to texts as well as explanations here would be welcome.