## NP-hardness does not imply a lower bound, strictly speaking?

A problem is NP-hard iff every problem in NP can be reduced to it in polynomial time.

Hardness is often intuitively explained as a lower bound. But, strictly speaking, it isn't. For the sake of argument, assume P = NP. The definition then becomes:

A problem is P-hard iff every problem in P can be reduced to it in polynomial time.

Because this definition uses polynomial-time reductions, the overall running time (reduction plus solving the resulting instance) is polynomial no matter how easy the target problem is. Hence we get an absurd result: a problem solvable in constant time (so its lower bound is constant) can be P-hard.
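The degenerate scenario can be sketched in code. All names below are illustrative stand-ins (a toy parity problem as the "source", a trivial constant-time "target"); the point is only that a polynomial-time reduction is allowed to do all the solving itself:

```python
# Illustrative sketch of the degenerate reduction described above.  All
# names are hypothetical; solve_source stands in for ANY polynomial-time
# decision procedure.

def solve_source(instance):
    """A toy problem in P: is the sum of the input list even?"""
    return sum(instance) % 2 == 0

def solve_target(instance):
    """A constant-time 'problem': accept exactly the instance True."""
    return instance is True

def reduction(instance):
    """A legal polynomial-time many-one reduction: solve the source
    problem outright, then emit a trivial yes/no instance of the target."""
    return solve_source(instance)

# Correctness of the reduction: x is a yes-instance of the source problem
# iff reduction(x) is a yes-instance of the target problem.
for x in ([1, 1], [1, 2], [2, 4]):
    assert solve_source(x) == solve_target(reduction(x))
```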

The definition makes total sense if we assume that NP>P. Do people assume NP>P?

The same question arises for the definition of PSPACE-hardness, where book authors use polynomial-time reduction, rather than something strictly easier than PSPACE.

I guess the answer to this question is simply “yes”, sorry for the rant.

## How to understand an out-of-bounds subarray in the following theoretical context?

Consider the following initialization step of the loop invariant for the merge procedure:

Initialization: Prior to the first iteration of the loop, we have $$k=p$$, so that the subarray $$A[p..k-1]$$ is empty. This empty subarray contains the $$k - p = 0$$ smallest elements of $$L$$ and $$R$$, and since $$i = j = 1$$, both $$L[i]$$ and $$R[j]$$ are the smallest elements of their arrays that have not been copied back into $$A$$.

I have a doubt about the statement above: if $$k=p$$, then the subarray $$A[p..p-1]$$ seems impossible (its end index precedes its start), so the rest of the argument should not be able to proceed, yet it does. Where am I going wrong?
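For intuition, here is a minimal 0-based Python sketch of a merge procedure (not CLRS's sentinel version). The notation $$A[p..k-1]$$ with $$k = p$$ denotes a range with no elements; no index is ever read from it, so nothing out of bounds happens:

```python
def merge(A, p, q, r):
    """Merge sorted A[p..q] and A[q+1..r] (0-based, inclusive) in place,
    without sentinels."""
    L = A[p:q + 1]
    R = A[q + 1:r + 1]
    i = j = 0
    # Invariant: A[p..k-1] holds the k - p smallest elements of L and R in
    # sorted order.  Before the first iteration k == p, so A[p..k-1] is the
    # EMPTY range: not an illegal access, just a subarray with no elements,
    # and the invariant's claim about it is vacuously true.
    for k in range(p, r + 1):
        if j >= len(R) or (i < len(L) and L[i] <= R[j]):
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
```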

## Upper bound for ratio as a function of terms on the LHS

I’ve recently been stumped by the following inequality. Suppose $$x,y \ge 0$$ and $$a > 1$$. Do there exist constants $$c_1, c_2, c_3 \in [0,\infty)$$, not depending on $$x$$ or $$y$$, such that $$\frac{a + \frac{x}{2y}}{a-\frac{1}{2}} \le c_1x + c_2y + \frac{c_3}{y}?$$

Thanks in advance for your help.
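Not an answer, but before hunting for constants it may help to probe the inequality numerically along a curve that couples $$x$$ and $$y$$. The snippet below is only a sketch: the constants and the regime $$x = 1/y$$ are my own choices, not part of the question.

```python
# Numerical probe (my choice of constants and of the regime x = 1/y; the
# question leaves the constants free).  Along this curve the x/(2y) term
# on the left grows like 1/y^2, while the right-hand side grows like 1/y.

def lhs(a, x, y):
    return (a + x / (2 * y)) / (a - 0.5)

def rhs(c1, c2, c3, x, y):
    return c1 * x + c2 * y + c3 / y

a, (c1, c2, c3) = 2.0, (10.0, 10.0, 10.0)
for y in (1e-1, 1e-2, 1e-3, 1e-4):
    x = 1 / y
    print(y, lhs(a, x, y) / rhs(c1, c2, c3, x, y))
```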

## How can an elemental be bound into an object or vehicle (e.g. lightning rails, elemental airships) in Eberron?

I will soon be DMing an Eberron campaign for my Dungeons & Dragons group. As part of the plot, I want a player or NPC to bind an air elemental to an elemental airship, and it may come up in the future. I recently bought the Wayfinder’s Guide to Eberron, and I used an example from the book as an NPC (a gnome artificer).

Is there a specific ritual for binding an elemental to an object or vehicle (e.g. a lightning rail or something similar), assuming the binder already has the elemental?

## Example of convex functions fulfilling a (strange) lower bound

I am reading a preliminary version of a paper which focuses on some minimization problems connected to a class of integral functionals. Reading the assumptions of one of the theorems, I cannot convince myself that a “concrete” example exists.

Question. Does there exist a function $$f \colon \mathbb R^N \to [0,+\infty)$$ such that

1. $$f$$ is convex;
2. $$f(\lambda x) = \lambda f(x)$$ for every $$\lambda >0$$ and every $$x \in \mathbb R^N$$;
3. there are $$a>0, b \ge 0$$ and $$\gamma \in \mathbb R^N$$ such that $$a|x| \le f(x) + \langle \gamma, x \rangle + b$$ for any $$x \in \mathbb R^N$$?

I am having trouble finding a function satisfying the three conditions. It does not have to be smooth, yet I still do not see an example.
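For what it is worth, if condition 3 is meant exactly as written (a lower bound on $$f(x) + \langle \gamma, x \rangle + b$$), then a scaled Euclidean norm appears to satisfy all three points. I may well be misreading the theorem's intent, so treat this as a sanity check rather than an answer:

```latex
% Candidate (assuming condition 3 is a lower bound, as literally stated):
\[
f(x) := a\,|x|, \qquad \gamma := 0, \qquad b := 0.
\]
% 1. f is convex, being a nonnegative multiple of the Euclidean norm;
% 2. f(\lambda x) = a|\lambda x| = \lambda a |x| = \lambda f(x) for \lambda > 0;
% 3. a|x| \le f(x) + \langle \gamma, x \rangle + b holds with equality.
```

If that is too cheap, perhaps the intended inequality bounds $$f$$ from above instead, in which case the question changes substantially.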

## Upper bound on runs of consecutive integers which are not coprime with $n!$

Is there any research on an upper bound for the number of consecutive integers which are less than $$n!$$ and not coprime with $$n!$$?

It is easy to see that the lower bound is $$\geq n$$ (for example, when $$n$$ is an odd prime). Is the upper bound strictly $$< 2n$$?

Note that the discussion is restricted to the range $$(0, n!)$$.
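For small $$n$$ these runs can be measured directly; here is a brute-force sketch (the helper name is mine):

```python
from math import factorial, gcd

def max_noncoprime_run(n):
    """Length of the longest run of consecutive integers m with
    1 <= m < n! and gcd(m, n!) > 1 (helper name is my own)."""
    N = factorial(n)
    best = cur = 0
    for m in range(1, N):
        if gcd(m, N) > 1:
            cur += 1
            best = max(best, cur)
        else:
            cur = 0
    return best

# Small cases; n! grows fast, so keep n modest for a brute-force scan.
for n in range(3, 8):
    print(n, max_noncoprime_run(n))
```

Note that $$\gcd(m, n!) > 1$$ iff $$m$$ shares a prime factor $$\le n$$, so I believe the maximal run length is governed by the Jacobsthal function of the product of primes up to $$n$$; that may be the right keyword for the literature search.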

## Given least upper bound $\alpha$ for $\{\ f(x) : x \in [a,b] \ \}$, $\forall \epsilon > 0 \ \exists x$ s.t. $\alpha - f(x) < \epsilon$

I can’t figure out how all of this follows. It is taken from Ch. 8 of Spivak’s Calculus.

If $$\alpha$$ is the least upper bound of $$\{\, f(x) : x \in [a,b] \,\}$$, then $$\forall \epsilon > 0 \; \exists x\in [a,b] \text{ such that } \alpha - f(x) < \epsilon.$$ This, in turn, means that $$\frac{1}{\epsilon} < \frac{1}{\alpha - f(x)}.$$
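The missing steps are the defining property of a least upper bound, plus the standing assumption in Spivak's argument that $$f(x) < \alpha$$ for every $$x \in [a,b]$$ (so the denominators below are positive):

```latex
% If no such x existed, then \alpha - f(x) \ge \epsilon for all x \in [a,b],
% i.e. f(x) \le \alpha - \epsilon on [a,b]; but then \alpha - \epsilon would
% be an upper bound smaller than the LEAST upper bound \alpha, a contradiction.
\[
0 < \alpha - f(x) < \epsilon
\quad\Longrightarrow\quad
\frac{1}{\epsilon} < \frac{1}{\alpha - f(x)},
\]
% since taking reciprocals of positive numbers reverses the inequality.
```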

## Upper bound for the sup of the first derivative

Given a twice-differentiable function $$f : \mathbb{R} \to \mathbb{R}$$, define

$$M_0 = \sup_{x \in \mathbb{R} } |f(x)|$$

$$M_1 = \sup_{x \in \mathbb{R} } |f'(x)|$$

$$M_2 = \sup_{x \in \mathbb{R} } |f''(x)|$$

Prove that $$M_1^2 \leq 2 M_0 M_2$$.

I think a good approach is a Taylor expansion, but around which points?
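One standard route (a sketch of what I believe is the intended approach; the choice of expansion points $$x \pm h$$ is the key idea): expand on both sides of $$x$$ and subtract.

```latex
\[
f(x+h) = f(x) + h f'(x) + \tfrac{h^2}{2} f''(\xi_1),
\qquad
f(x-h) = f(x) - h f'(x) + \tfrac{h^2}{2} f''(\xi_2).
\]
% Subtracting and bounding each term by M_0 and M_2:
\[
2h\, f'(x) = f(x+h) - f(x-h) - \tfrac{h^2}{2}\bigl(f''(\xi_1) - f''(\xi_2)\bigr)
\;\Longrightarrow\;
|f'(x)| \le \frac{M_0}{h} + \frac{h M_2}{2}
\quad \text{for all } h > 0.
\]
% Minimizing the right-hand side at h = \sqrt{2 M_0 / M_2} (assuming M_2 > 0;
% the case M_2 = 0 is handled separately by letting h \to \infty) gives
% |f'(x)| \le \sqrt{2 M_0 M_2}, hence M_1^2 \le 2 M_0 M_2.
```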

## Quantitative bound on irrational rotation recurrence time

Given an irrational $$a$$, the sequence $$b_n := na \bmod 1$$ is dense and equidistributed in $$\mathbb S^1$$, where we view $$\mathbb S^1$$ as $$[0, 1]$$ with its endpoints identified.

Given a point $$p \in \mathbb S^1$$ and a radius $$e > 0$$, can we obtain a quantitative upper bound (which may depend on $$a, p, e$$) on the smallest $$n$$ such that $$na \bmod 1$$ lies in $$B_e(p)$$?
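Not a bound, but a brute-force probe can at least measure the hitting time empirically (the function name and cutoff below are my own choices):

```python
def first_hit(a, p, e, max_n=10**6):
    """Smallest n >= 1 such that the circle distance between n*a mod 1
    and p is below e; returns None if not found within max_n steps."""
    for n in range(1, max_n + 1):
        d = abs((n * a) % 1.0 - p)        # distance on [0, 1)
        if min(d, 1.0 - d) < e:           # account for wrap-around
            return n
    return None

# Example: the rotation by a = sqrt(2) - 1, target p = 1/2.
print(first_hit(2**0.5 - 1, 0.5, 0.01))
```

Experimenting with this suggests, as one would expect, that the hitting time is controlled by how well $$a$$ is approximated by rationals (its continued-fraction expansion), which is presumably where a quantitative bound would come from.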

## Proof of a lower bound for a recurrence relation (CLRS Exercise 4.6-2)

I am trying to solve Exercise 4.6-2 of Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (third edition). For recurrences $$T(n)=aT(n/b)+f(n)$$ where $$a\geq 1$$, $$b > 1$$, $$n$$ is an exact power of $$b$$, and $$f(n)$$ is an asymptotically positive function, it asks us to prove that if $$f(n) = \Theta(n^{\log_b a}\lg^{k}n)$$ with $$k\geq0$$, then $$T(n)=\Theta(n^{\log_b a}\lg^{k+1}n)$$.

Since $$T(n) = \Theta(n^{\log_b a}) + g(n)$$ where $$g(n) = \sum_{j=0}^{\log_b n - 1} a^{j}f(n/b^{j})$$, I decided to consider $$g(n)$$ first, and I believe I have already shown $$g(n) = O(n^{\log_b a}\lg^{k+1}n)$$.

But proving $$g(n) = \Omega(n^{\log_b a}\lg^{k+1}n)$$ has become a challenge for me.

Below is my work (under the simplifying assumption that $$k$$ is an integer). By the $$\Theta$$ condition on $$f$$, there is a constant $$c > 0$$ such that

$$g(n) \geq c \sum_{j=0}^{\log_b n - 1} a^{j}(n/b^{j})^{\log_b a}\lg^{k}(n/b^{j})$$

$$= c\,n^{\log_b a}\sum_{j=0}^{\log_b n - 1} \lg^{k}(n/b^{j})$$

$$= c\,n^{\log_b a}\sum_{j=0}^{\log_b n - 1}(\lg n - \lg b^{j})^{k}$$

$$= c\,n^{\log_b a}\sum_{j=0}^{\log_b n - 1}\sum_{i=0}^{k} {k \choose i}\lg^{k-i}n\,(-\lg b^{j})^{i}$$

$$= c\,n^{\log_b a}\lg^{k}n\sum_{j=0}^{\log_b n - 1}\sum_{i=0}^{k} {k \choose i}(-\lg b^{j}/\lg n)^{i}$$

$$= c\,n^{\log_b a}\lg^{k}n \biggl(\log_b n + \sum_{j=0}^{\log_b n - 1}\sum_{i=1}^{k} {k \choose i}(-\lg b^{j}/\lg n)^{i} \biggr)$$

$$\geq c'\,n^{\log_b a}\lg^{k+1}n - c\,n^{\log_b a}\lg^{k}n\sum_{j=0}^{\log_b n - 1}\sum_{i=1}^{k} {k \choose i}(\lg b^{j}/\lg n)^{i}$$

$$= A(n) - B(n) = \Theta(n^{\log_b a}\lg^{k+1}n) - B(n).$$

This is where I am stuck: I cannot show that $$B(n)$$ grows more slowly than $$A(n)$$. For instance, since $$(\lg b^{j}/\lg n)^{i} \lt 1$$, we could coarsen the bound by replacing $$B(n)$$ with some function $$B'(n)$$ involving only sums of binomial coefficients, but $$B'(n)$$ then still contains an $$n^{\log_b a}\lg^{k+1}n$$ term, which does not help.

So how can one prove $$g(n) = \Omega(n^{\log_b a}\lg^{k+1}n)$$?
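Not a proof, but a numerical sanity check may be reassuring. With the sample parameters $$a = b = 2$$, $$k = 1$$, $$f(n) = n \lg n$$ (my choice, so $$\log_b a = 1$$), the ratio $$g(n) / (n^{\log_b a} \lg^{k+1} n)$$ should settle toward a constant if the $$\Theta$$ claim holds:

```python
from math import log2

# Sanity check with sample parameters a = b = 2, k = 1, f(n) = n * lg n
# (my own choices).  The exercise then predicts g(n) = Theta(n * lg^2 n).

def g(n, a=2, b=2, f=lambda m: m * log2(m)):
    """g(n) = sum_{j=0}^{log_b n - 1} a^j * f(n / b^j), n an exact power of b."""
    levels = round(log2(n))  # log_b n for b = 2
    return sum((a ** j) * f(n / b ** j) for j in range(levels))

# The ratio g(n) / (n^{log_b a} * lg^{k+1} n) should approach a constant.
for s in (6, 10, 14, 18):
    n = 2 ** s
    print(s, g(n) / (n * log2(n) ** 2))
```

For these parameters the sum collapses to $$n \sum_{j=0}^{\lg n - 1} (\lg n - j) = n \cdot \lg n (\lg n + 1)/2$$, so the printed ratios drift down toward $$1/2$$, consistent with the claimed $$\Theta$$ bound.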