## Do exponential functions grow faster than logarithmic?

For example:

$$f(n) = n(\log_2 n)^4$$ vs $$g(n) = n^{1/3}$$

Does f(n) grow slower than g(n) as a general rule, even for fractional exponents? I tried a limit test, but the Symbolab limit calculator says that steps are currently not supported for this problem.

I graphed them over a large range and it does appear that g(n) grows asymptotically faster.
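If the intended comparison is between the polylogarithmic factor and the polynomial power, i.e. $$(\log_2 n)^4$$ vs $$n^{1/3}$$, a quick numerical ratio check (a sketch, not a proof) shows the polynomial eventually dominating, even for a fractional exponent:

```python
import math

def ratio(n):
    # (log2 n)^4 / n^(1/3): if this tends to 0, the polynomial term wins.
    return math.log2(n) ** 4 / n ** (1 / 3)

# The ratio shrinks as n grows, so n^(1/3) dominates (log2 n)^4 eventually.
for n in (10**6, 10**12, 10**30):
    print(n, ratio(n))
```

Note that the crossover happens only at very large n, which is why a graph over a "large" but finite range can be misleading.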

## Neural branch predictors linear, classical predictors exponential, in resources?

Wikipedia states:

> The main advantage of the neural predictor is its ability to exploit long histories while requiring only linear resource growth. Classical predictors require exponential resource growth.

What is the reason for this?
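A back-of-the-envelope accounting sketch may make the contrast concrete (the table sizes and bit widths below are made-up illustrations, not real hardware parameters): a perceptron predictor stores one weight per history bit (plus a bias) per table entry, while a classical two-level predictor indexes a pattern history table by the history itself, so it needs one counter per possible history pattern:

```python
def perceptron_bits(history_len, entries=1024, weight_bits=8):
    # One weight per history bit plus a bias, per entry: linear in history_len.
    return entries * (history_len + 1) * weight_bits

def two_level_bits(history_len, counter_bits=2):
    # One saturating counter per possible history pattern: 2**history_len of them.
    return 2 ** history_len * counter_bits

for h in (8, 16, 32):
    print(h, perceptron_bits(h), two_level_bits(h))
```

Each extra history bit adds one weight to the perceptron but doubles the classical pattern table, which is the linear-vs-exponential growth the Wikipedia passage refers to.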

## Is O(n log n) exponential speedup over O(n^2)?

I would like to know whether O(n log n) is an exponential speedup over O(n^2).
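One way to probe this numerically (a sketch, not a proof): the ratio of the two running times is $$n^2 / (n \log_2 n) = n / \log_2 n$$, and you can tabulate how that speedup factor behaves as n doubles:

```python
import math

def speedup(n):
    # (n^2) / (n * log2 n) simplifies to n / log2 n.
    return n / math.log2(n)

# A speedup that is exponential in n would roughly square when n doubles;
# this factor only slightly more than doubles.
for n in (2**10, 2**20, 2**40):
    print(n, speedup(n))
```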

## What is the difference between super-polynomial time and exponential time?

What is the difference between super-polynomial time and exponential time? Is there any difference at all?

## How can I calculate the exponential integral?

(I’m not sure this is the right forum.)

I’m writing a program that uses the prime-counting function. Right now, I’m using x/log(x), but I want to switch to something more accurate. A better approximation is the logarithmic integral function (actually, its Eulerian variant), which can be computed from the exponential integral. So how can I compute the exponential integral? I’m on a macOS Intel system using Swift, so I can use the advanced floating-point functions provided by Apple’s system libraries if they help.
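Not Swift, but here is a minimal sketch (in Python; the translation to Swift is mechanical) of the classic power series $$\mathrm{Ei}(x) = \gamma + \ln|x| + \sum_{k \ge 1} x^k/(k \cdot k!)$$, which converges quickly for the moderate arguments that $$\mathrm{li}(x) = \mathrm{Ei}(\ln x)$$ needs; for very large arguments an asymptotic expansion would be the better choice:

```python
import math

EULER_GAMMA = 0.57721566490153286  # Euler–Mascheroni constant

def expint_ei(x: float, max_terms: int = 500) -> float:
    """Ei(x) = gamma + ln|x| + sum_{k>=1} x^k / (k * k!), for x != 0."""
    total = EULER_GAMMA + math.log(abs(x))
    term = 1.0
    for k in range(1, max_terms + 1):
        term *= x / k            # term is now x^k / k!
        contrib = term / k
        total += contrib
        if abs(contrib) < 1e-16 * abs(total):
            break                # series has converged
    return total

def li(x: float) -> float:
    """Logarithmic integral li(x) = Ei(ln x), a good pi(x) approximation."""
    return expint_ei(math.log(x))

# pi(10**6) = 78498; li(10**6) is approximately 78627.5
print(li(10**6))
```

The Eulerian (offset) variant is just a constant shift: Li(x) = li(x) − li(2), with li(2) ≈ 1.045.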

## Does the naive conversion of a Boolean Formula to CNF have a polynomial or exponential complexity?

I am reading about the naive conversion to CNF. The procedure is explained in this book, but I have not found a complexity analysis of the algorithm:

1. Elimination of equivalences
2. Elimination of implications
3. Elimination of double negations
4. De Morgan's laws
5. Distributive law

I found one implementation of this method in this repo: https://github.com/netom/satispy
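For what it's worth, step 5 is where exponential blowup can occur. A toy sketch (assuming a formula already reduced to a disjunction of conjunctions of literal names) that distributes OR over AND and counts the resulting clauses:

```python
from itertools import product

def distribute(disjuncts):
    """CNF of (C1 OR C2 OR ... OR Cn), where each Ci is a conjunction of
    literals: one clause per way of picking one literal from each Ci."""
    return [frozenset(choice) for choice in product(*disjuncts)]

# (x1 AND y1) OR (x2 AND y2) OR ... OR (xn AND yn) yields 2**n clauses.
for n in range(1, 5):
    formula = [[f"x{i}", f"y{i}"] for i in range(1, n + 1)]
    print(n, len(distribute(formula)))
```

This family of formulas is the standard witness that the naive conversion is exponential in the worst case; the polynomial alternative (Tseitin transformation) avoids it by introducing fresh variables.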

Thanks

## Expected value of next CPU burst using exponential averaging

The burst time is needed for Shortest Job First (SJF) and Shortest Remaining Time First (SRTF) scheduling. To approximate the next burst time, we use the equation $$\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$$

I want to ask whether $$\tau_{n+1}$$ is the predicted burst time of the $$(n+1)$$th process, or the predicted burst time of some single process, say p, that is requesting the CPU for the $$(n+1)$$th time.
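Whichever indexing convention is meant, the recurrence itself is easy to play with. A small sketch, using the common textbook example values α = 0.5 and τ₀ = 10 (these numbers are assumptions for illustration, not part of the question):

```python
def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
    # tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n:
    # each prediction blends the last observed burst with the old prediction.
    tau = tau0
    predictions = [tau]
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

# Successive predictions for observed bursts 6, 4, 6, 4, 13, 13, 13:
print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
```

With α = 0.5 each older burst contributes half as much as the one after it, so the history of one process decays geometrically, which is why this is called exponential averaging.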

## Collect ignoring negative exponential

I have the expression `1 + Exp[-2 x] (-1 - 2 x (1 + x))`. I use `Expand` to multiply the x’s through the parentheses, which gives `1 - Exp[-2 x] - 2 Exp[-2 x] x - 2 Exp[-2 x] x^2`. I’m expected to get this into the form `1 - Exp[-2 x] (1 + 2 x + 2 x^2)`. I would expect `Collect[expr, Exp[-2 x]]` to do this, but it only returns the expanded form again.

## Plotting Integral of Exponential functions

I am trying to plot an integral that involves exponential functions. My code is as follows:

```mathematica
L[\[Alpha]_] := NIntegrate[
   1/(k + I*0.1) *
    (Exp[I*k*x] (Exp[Sqrt[k^2 + \[Alpha]/w^2]*w] - 1) *
      (Exp[k*w] - 1 + I*0.1) Sqrt[k^2 + \[Alpha]/w^2]) /
    ((Sqrt[k^2 + \[Alpha]/w^2] + k) *
      (Exp[Sqrt[k^2 + \[Alpha]/w^2]*w - Exp[k*w]]) +
     (Sqrt[k^2 + \[Alpha]/w^2] - k) *
      (Exp[(k + Sqrt[k^2 + \[Alpha]/w^2]) w] - 1)),
   {k, -100, 100}];

Plot[{Re[L[10]], Re[L[100]], Re[L[500]]}, {x, -0.45, 0.45},
 PlotRange -> Full]
```

But the result shows a lot of oscillations, which it should not. I am trying to reproduce Fig. 2 of this article: https://arxiv.org/pdf/1508.00836.pdf. Any help will be highly appreciated.

## TQBF is PSPACE-complete: why is this algorithm exponential but Savitch's not?

This question concerns the proof that a language is PSPACE-complete (TQBF, for example). The idea is first to prove that $$L \in PSPACE$$ (the easy part), and then to prove PSPACE-hardness. The latter requires demonstrating an algorithm that decides L in polynomial space, which is usually achieved with recursive calls whose space is re-used.

In the TQBF proof, the recurrence $$\phi_{i+1}(A,B) = \exists Z\,[\phi_i(A,Z) \land \phi_i(Z,B)]$$ (where $$Z$$ is the midpoint configuration) is the default recursive relation for evaluating TQBF truth. In any standard proof, it is said that $$\phi_i$$ is written out twice, so for $$m$$ nodes the formula explodes in size; hence a different recurrence must be used to bound it.

However, in Savitch’s proof the recurrence is: if $$Path(a, mid, t-1)$$ AND $$Path(mid, b, t-1)$$ accept, then $$Path(a,b,t)$$ accepts. In the proof, it is stated that this relation re-uses space.

My question is: why does the space explode in the TQBF relation, while in Path it is re-used? Both relations look more or less the same to me, since both refer to two instances at level $$t-1$$ and need space to store them.
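One way to see the space re-use in Savitch's recurrence is that the two recursive calls run one after the other, so the same stack frames are recycled. A toy Python sketch of Savitch-style reachability (this mirrors only the control structure, not the actual work-tape argument; the graph and names are made up for illustration):

```python
import math

def savitch_reach(adj, a, b):
    """Is b reachable from a? adj is a list of neighbor sets."""
    n = len(adj)
    t = max(1, math.ceil(math.log2(n)))  # paths of length <= 2**t suffice

    def path(u, v, k):
        # Is there a path u -> v of length at most 2**k?
        if k == 0:
            return u == v or v in adj[u]
        # The two calls below run sequentially: the first call's frames are
        # freed before the second begins, so live space is O(depth), not
        # O(2**depth), even though O(2**depth) calls are made over time.
        return any(path(u, m, k - 1) and path(m, v, k - 1) for m in range(n))

    return path(a, b, t)

adj = [{1}, {2}, set(), set()]  # 0 -> 1 -> 2, vertex 3 isolated
print(savitch_reach(adj, 0, 2), savitch_reach(adj, 0, 3))
```

The TQBF blowup is the symmetric problem on the formula side: writing both sub-calls out as subformulas doubles the formula at each level, which is why the standard proof folds the two calls into a single copy under an extra ∀ quantifier.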