## This integration takes forever in Mathematica. Does it mean that it is not solvable?

I have quite a complicated function f(x,y), as follows.

f(x,y) = -(2/(27 \[Pi]^2))*(-2 I + I Coth[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])] - I Coth[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]) (2 I + I Coth[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])] - I Coth[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]) (-I - 2 I Coth[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])] - I Coth[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]) (I - 2 I Coth[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])] - I Coth[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]) (-I - I Coth[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + 
y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])] - 2 I Coth[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]) (I - I Coth[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])] - 2 I Coth[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]) Csch[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])]) + 1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]^2 Sinh[1/2 (Log + Log[2 Sin[x]^2 + 2 Sin[y]^2 - Sin[x + y]^2] + Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] - 2 Log[1/2 (3 + Cos[2 x] - 5 Cos[2 y] + Cos[2 (x + y)] + 3 I Sin[2 x] + 3 I Sin[2 y] - 3 I Sin[2 (x + y)])])]^4 Sinh[1/2 (-Log - Log[2 Sin[x]^2 - Sin[y]^2 + 2 Sin[x + y]^2] - Log[-Sin[x]^2 + 2 (Sin[y]^2 + Sin[x + y]^2)] + 2 Log[1/2 (3 + Cos[2 x] + Cos[2 y] - 5 Cos[2 (x + y)] - 3 I Sin[2 x] - 3 I Sin[2 y] + 3 I Sin[2 (x + y)])])]^4

There is a constraint x + y = Pi, and x and y are both positive reals. I want to integrate f(x,y) over y from y = 0 to y = Pi - x in order to obtain f(x). However, the integration seems to take forever. Does that mean it is not solvable in Mathematica, or am I doing something wrong here? In general, is there any way to check the progress of a long-running integration in Mathematica?
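On the progress question, Mathematica itself offers a few idioms. The sketch below assumes the long expression above has been stored as `f[x_, y_] := ...`; the 300-second budget and the sample point `Pi/3` are illustrative choices of mine, not part of the question.

```mathematica
(* Cap the symbolic attempt so it cannot run forever: Integrate is
   abandoned after 300 seconds and $Failed is returned instead. *)
TimeConstrained[Integrate[f[x, y], {y, 0, Pi - x}], 300, $Failed]

(* For a numeric sanity check at a sample x, NIntegrate is far cheaper,
   and Monitor + EvaluationMonitor shows where the sampler currently is. *)
yy = 0.;
With[{x0 = Pi/3},
 Monitor[
  NIntegrate[f[x0, y], {y, 0, Pi - x0},
   EvaluationMonitor :> (yy = y)],
  ProgressIndicator[yy, {0., Pi - x0}]]]
```

It may also pay to `Simplify` the integrand with `Assumptions -> 0 < x < Pi && 0 < y < Pi - x` before integrating; symbolic integration time is very sensitive to the size of the integrand.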

## If anything can be verified efficiently, must it be solvable efficiently on a Non-Deterministic machine?

Suppose I wanted to verify the solution to $$2^3$$, which is $$8$$.

Powers of $$2$$ have only one 1-bit, at the start of the binary string.

## Verify a Solution Efficiently

```
n = 8
N = 3

IF only ONE 1-bit at start of binary string:
    IF total 0-bits == N:
        IF n is a power_of_2:
            OUTPUT "solution verified, 2^3 == 8"
```
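For reference, this check runs in time polynomial in the bit-length of $$n$$. A runnable version (a Python sketch; all names are mine):

```python
def is_power_of_two(n: int) -> bool:
    # exactly one 1-bit in the binary string
    return n > 0 and n & (n - 1) == 0

def verify(n: int, N: int) -> bool:
    # n == 2^N iff n is a power of two whose binary string is
    # a single 1-bit followed by exactly N zero bits
    return is_power_of_two(n) and n.bit_length() - 1 == N

print(verify(8, 3))   # the example above: 2^3 == 8
print(verify(12, 3))  # 12 is not a power of two
```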

A solution will always be approximately $$2^N$$ digits long. It's not possible for even a non-deterministic machine to arrive at a solution with $$2^N$$ digits faster than $$2^N$$ time.

Question

Can this problem be solved efficiently in non-deterministic polynomial time? Why not, if the solutions can be verified efficiently?


## Recurrence relation (not solvable by the master theorem)

Consider the following recurrence: $$T(n) = \begin{cases} 2T(\frac{n}{2}) + \frac{n}{\log n} & n > 1 \\ O(1) & n = 1 \end{cases}$$

The master theorem doesn’t work, as the exponent of $$\log n$$ is negative. So I tried unfolding the relation and finally got the equation: $$T(n) = n[1 + \frac{1}{\log(\frac{n}{2})} + \frac{1}{\log(\frac{n}{4})} + … + \frac{1}{\log(2)}]$$.

I do not know how to simplify from here (which inequalities should I use?). A trivial method is to note that every reciprocal-log term is $$< \frac{1}{\log(2)}$$; since there are $$\log n$$ terms, the sum of all the reciprocal-log terms is $$< \frac{\log n }{\log(2)} = \log_2 n$$, which gives $$T(n) = O(n \log n)$$. However, this is a very poor bound: by the master theorem, even the larger recurrence $$T(n) = 2T(\frac{n}{2}) + n$$ has time complexity $$O(n \log n)$$. Can someone find a tighter correct upper bound?
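For what it's worth, the unfolded sum can be pinned down exactly. Writing $$\lg$$ for $$\log_2$$ and taking $$n$$ to be a power of $$2$$, the reciprocal-log terms form a harmonic series: $$\sum_{j=1}^{\lg n}\frac{1}{\lg(2^j)} = \sum_{j=1}^{\lg n}\frac{1}{j} = H_{\lg n} = \Theta(\log \log n),$$ which would give the tighter bound $$T(n) = \Theta(n \log \log n)$$.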


## undecidable problems solvable for humans? [duplicate]

• Human computing power: Can humans decide the halting problem on Turing Machines?

Are undecidable problems also unsolvable for humans? I mean, I would think I could tell, by reading the code of a program, whether it will halt for a certain input (which would solve the halting problem).

This will be very hard for large programs, of course, but still solvable. But then this would mean a Turing machine can't get as smart as a human.

This is probably a dumb question, but I can't find anything about it online.


## Is the calculation of infinite sums solvable by a computer?

The question is: I give the computer a sum, such as $$\sum_{n=1}^\infty\frac{1}{n^3}$$, and the computer is expected to return an elegant closed-form solution, because the answer may be irrational. Has this problem been solved using a computer? Or has it been proved to be undecidable? Or is it open?
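For many concrete series this works in practice: computer algebra systems return closed forms symbolically. A small sketch with SymPy (the second example sum is my addition):

```python
from sympy import Sum, symbols, oo, zeta, pi

n = symbols('n', positive=True, integer=True)

# The sum from the question: SymPy returns Apery's constant zeta(3),
# the accepted "closed form" even though no simpler one is known.
s3 = Sum(1/n**3, (n, 1, oo)).doit()

# The Basel problem, where a fully elementary closed form exists.
s2 = Sum(1/n**2, (n, 1, oo)).doit()
```

This answers specific instances; whether a single algorithm can decide closed-form summability for arbitrary input is the harder, separate question.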


## Is the halting problem solvable for NPDAs?

After the total silence in response to my last question, I am rethinking my assumptions. The halting problem is, of course, solvable for DPDAs, and I believe their loops can be found in the manner I described in my prior question:

1. arrive at the same state as you were in previously
2. with the same top symbol as you had last time
3. without consuming anything on the stack, and
4. without consuming any input.
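For the deterministic case, the four conditions can be sketched concretely. Everything below is an illustrative encoding of my own (a toy `eps_delta` transition table for the epsilon-moves only), not a full PDA simulator:

```python
def eps_loops(eps_delta, state, stack, max_steps=100_000):
    """Detect the loop described above during a run of epsilon-moves.
    eps_delta maps (state, top_symbol) -> (next_state, replacement),
    where `replacement` is a tuple pushed in place of the popped top
    (bottom symbol first; () is a plain pop)."""
    first = {}  # (state, top) -> stack height at the tracked visit
    low = {}    # (state, top) -> lowest height seen since that visit
    for _ in range(max_steps):
        if not stack or (state, stack[-1]) not in eps_delta:
            return False               # run blocks or must read input: no loop
        key, h = (state, stack[-1]), len(stack)
        if key in first and low[key] >= first[key]:
            return True                # conditions 1-4 hold: it loops forever
        first[key] = h                 # (re)start tracking from this visit
        low[key] = h
        state, replacement = eps_delta[key]
        stack.pop()
        stack.extend(replacement)
        for k in low:                  # record any dip below a tracked level
            low[k] = min(low[k], len(stack))
    return False                       # inconclusive within the step budget
```

The key point is that determinism makes the check sound: once the machine revisits the same state and top symbol without having consumed input or dug below the earlier stack level, its future moves must repeat.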

But is my last question actually unanswerable because we cannot, in fact, determine whether a NPDA will halt? Is the halting problem even solvable for NPDAs?

## Solvable Lie algebra application

I am starting to study Lie algebras, and when I reached the notion of a solvable Lie algebra, I tried to find concrete applications (in physics, for example) and couldn't find one. Solvable groups are very important for the unsolvability of the quintic equation (and, by the way, that's the only application of them I know).

In the same manner, can we find applications for solvable Lie algebras?

## Inverse Laplacian and convolution in Albeverio’s “Solvable Models in quantum mechanics”

I asked this question on math.stackexchange.com two weeks ago but got no answers so far and I got no clues from literature, so maybe someone here knows a reference. I hope it is ok to ask this question in this forum again.

I'm currently studying S. Albeverio's book "Solvable models in quantum mechanics", where one technical thing is used that I don't fully understand. I will introduce the setting first:

General setting:

Looking at the Hamiltonian $$-\Delta + V$$ with the underlying Hilbert space $$\mathrm{L}^2(\mathbb{R}^3)$$, where $$V$$ is a real potential, the aim in the first chapter of the book is to approximate a $$\delta$$-potential in 3D by scaling $$V$$. We denote $$v:=|V|^{1/2}$$, where the potential $$V$$ is an element of the Rollnik class, i.e. the real functions for which $$\int_{\mathbb{R}^6}\frac{|V(x)||V(y)|}{|x-y|^2}dxdy < \infty.$$ The operator $$vG_0 v: \mathrm{L}^2(\mathbb{R}^3)\rightarrow \mathrm{L}^2(\mathbb{R}^3)$$ defined by the kernel $$(vG_0 v)(x,y)=\frac{v(x)v(y)}{4\pi|x-y|}$$ plays an important role in this setting (again $$v:=|V|^{1/2}$$). Note that the kernel is pointwise positive and that the Rollnik condition ensures that $$vG_0v$$ is Hilbert–Schmidt. Furthermore, $$G_0$$ denotes the operator given by convolution with the fundamental solution of the Laplace operator, i.e. $$G_0(x,y)=\frac{1}{4\pi|x-y|},$$ and $$G_0(-\Delta \varphi)=\varphi$$ for all $$\varphi \in \mathcal{S}$$.

The (technical) problem:

On p. 21 and p. 22 he uses the fact that $$(f,vG_0vf)$$, $$f\in\mathrm{L}^2(\mathbb{R}^3)$$, can be written as $$(f,vG_0vf)=\Vert G_0^{1/2}vf\Vert^2,$$ so in a sense he uses that $$vG_0v$$ is a positive operator and can be written as $$vG_0v=vG_0^{1/2}G_0^{1/2}v=(G_0^{1/2}v)^*(G_0^{1/2}v)$$. He also uses that $$\Vert G_0^{1/2}vf\Vert^2=0$$ implies $$vf=0$$.

So if I understand correctly, he uses that the unbounded convolution operator $$G_0$$ is exactly the same as the inverse Laplacian $$(-\Delta)^{-1}=\mathcal{F}^{-1} 1/|p|^2 \mathcal{F}$$ defined by functional calculus, but this requires $$D(G_0)=D((-\Delta)^{-1})$$ and $$G_0\phi=(-\Delta)^{-1} \phi, \quad \forall \phi \in D(G_0)=D((-\Delta)^{-1}).$$ Then obviously $$G_0^{1/2}=\mathcal{F}^{-1} 1/|p| \mathcal{F}$$. I found this to be commonly used in various papers and textbooks, but never explained in detail. I see that both operators coincide on the Schwartz functions, but since $$\mathcal{S}\subsetneq D((-\Delta)^{-1})$$, why does this equality of convolution operator and Fourier multiplier extend?

By Michael Cwikel's "Weak type estimates for singular values and the number of bound states of Schrödinger operators" (1977), $$\mathcal{F}^{-1} 1/|p| \mathcal{F} v = (-\Delta)^{-1/2}v$$ is bounded, so the remaining parts of the statement should then follow.