Is solving a quadratic equation using a Turing machine impossible?

I’ve just started Algorithms at university. There’s a task to write an algorithm for a Turing machine that solves quadratic equations. The task doesn’t specify whether the equation is x^2+bx+c or ax^2+bx+c. I’ve searched through a whole bunch of information on both the Russian and English Internet.

I did find articles saying it’s not possible because the coefficients A, B, C are real numbers. Please confirm whether that’s true. I may not have it right, but I think it’s impossible; I just don’t know how to prove it.

Thanks in advance!
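For contrast with the real-coefficient case, note that with rational (or integer) coefficients the problem is clearly algorithmic, since rationals have finite encodings that fit on a tape. A minimal Python sketch of that computable case (my own illustration, not part of the original task):

```python
from fractions import Fraction
import math

def solve_quadratic_exact(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 for rational a, b, c (a != 0).

    Rational coefficients have finite encodings, so this is fully
    algorithmic -- unlike arbitrary real coefficients, which cannot
    even be written on a Turing machine's tape in finite form.
    """
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    # exact rational square root, if the discriminant is a perfect square
    p, q = math.isqrt(disc.numerator), math.isqrt(disc.denominator)
    if p * p == disc.numerator and q * q == disc.denominator:
        r = Fraction(p, q)
        return [(-b - r) / (2 * a), (-b + r) / (2 * a)]
    # otherwise the roots are irrational; return float approximations
    r = math.sqrt(disc)
    return [(float(-b) - r) / float(2 * a), (float(-b) + r) / float(2 * a)]
```

The impossibility argument for reals is about input encoding, not about the arithmetic itself.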

Expected search times with linear vs quadratic probing

Why exactly does quadratic probing lead to a shorter avg. search time than linear probing?

I fully get that linear probing leads to a higher concentration of used slots in the hash table (i.e. higher “clustering” of used consecutive indices). However, it’s not immediately trivial (to me at least) why that translates to higher search times in expectation than in quadratic probing, since in both linear and quadratic probing the first value of the probing sequence determines the rest of the sequence.

I suppose this has to do more with the probability of collisions between different probing sequences. Perhaps different auxiliary hash values are less likely to lead to collisions early in the probing sequence in quadratic than in linear probing, but I haven’t seen this result derived or formalized.
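One way to see the effect empirically is to simulate both schemes and compare the average number of probes per successful search. A toy sketch (my assumptions: a prime table size, the simple probe sequences h+i and h+i², and load factor ~0.5, at which quadratic probing with a prime table is still guaranteed to find an empty slot):

```python
import random

def avg_probes(table_size, n_keys, probe, trials=50):
    """Average probes per successful search, over random key sets."""
    rng = random.Random(1)
    total = searches = 0
    for _ in range(trials):
        table = [None] * table_size
        keys = rng.sample(range(10**9), n_keys)
        for k in keys:                      # insert with open addressing
            h = k % table_size
            for i in range(table_size):
                slot = (h + probe(i)) % table_size
                if table[slot] is None:
                    table[slot] = k
                    break
        for k in keys:                      # successful search retraces the probes
            h = k % table_size
            for i in range(table_size):
                if table[(h + probe(i)) % table_size] == k:
                    total += i + 1
                    searches += 1
                    break
    return total / searches

linear    = avg_probes(1009, 500, lambda i: i)       # linear probing
quadratic = avg_probes(1009, 500, lambda i: i * i)   # quadratic probing
```

Even at 50% load the linear-probing average is measurably higher, and the gap widens quickly as the load factor grows, because primary clusters make long probe runs disproportionately likely.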

How are Fighters Linear but Wizards Quadratic?

The phrase “Linear Fighters, Quadratic Wizards” gets bandied about a lot, but I’ve found I don’t have a good way to explain it to newer players.

The tier system post has some examples of how wizards are better than fighters in specific situations, but I don’t find the examples very satisfactory: the wizards in the examples seem to mostly rely on cheesy abuses that wouldn’t happen in an actual game. For example the post says a wizard can kill a dragon using shivering touch from Frostburn, or using mindrape and love’s pain from the Book of Vile Darkness, but many games won’t allow those books.

In an actual play scenario, with no access to any expansion books, and assuming a group of characters that aren’t grossly evil: what sorts of trends make wizards (or, more generally, full spellcasters) more powerful than non-spellcasting classes? At what character level does this start to happen, and what spells available at that level are responsible for the change?

I’m interested in responses pertaining to both 3.5e and Pathfinder; if there are important differences between the two, I’d be interested in hearing about those as well.

Mathematica returns uneditable long solutions for two simple quadratic equations

I tried to get a positive solution (or any solution) of the following two quadratic equations in two variables. My code is:

Solve[(1/8) (-A1 + x2 + α + x1 (-2 + β) - 2 β x2^2 - θ x1^2) == 0 &&
  (1/16) (A1 + 3 x2 - α - 2 β x2 + x1 (-2 + 3 β))^2 - θ x2^2 == 0, {x1, x2}]

It warns that the output is large; when I clicked “show full output” it took 5 minutes to display, and the result is in a strange format: in the last part there is only one symbol per line, the expressions are very difficult to read, and I can’t even find where x2 appears.

Approximation algorithms for indefinite quadratic form maximization with linear constraints

Consider the following program: \begin{align} \max_x ~& x^TQx \\ \mbox{s.t.} ~& Ax \geq b \end{align} where $Q$ is a symmetric (possibly indefinite) matrix and the inequality is element-wise and constrains feasible solutions to a convex polytope.

This is NP-hard to solve, but what are known approximation results?

A relevant result is given by Kough (1979), which shows that this program can be optimized using Benders’ decomposition to within $\epsilon$ of the optimum. However, the paper does not seem to clearly specify what this means, or the complexity of the procedure.

I believe the $\epsilon$-approximation is in the usual sense employed in mathematical programming: if $OPT$ is the optimal value of the program, $ALG$ is the value returned by the above procedure, and $MIN$ is the minimal value attainable by a feasible solution, then $$ \frac{ALG-MIN}{OPT-MIN} \geq 1-\epsilon, $$ or something of the sort.


  • Is the mentioned procedure a polynomial-time algorithm?
  • Are there known polynomial-time algorithms yielding approximations to the above program in the traditional sense, i.e. $ALG \geq \alpha \cdot OPT$ for some $\alpha < 1$, constant or not?

Kough, Paul F. “The indefinite quadratic programming problem.” Operations Research 27.3 (1979): 516-533.

How to show that every quadratic, asymptotically nonnegative function $\in \Theta(n^2)$

In CLRS the authors say that every quadratic, asymptotically nonnegative function $f(n) = an^2 + bn + c$ is an element of $\Theta(n^2)$. Using the following definition

\begin{align*} \Theta(n^2) = \{h(n) \,|\, \exists c_1 > 0, c_2 > 0, n_0 > 0 \,\forall n \geq n_0: 0 \leq c_1n^2 \leq h(n) \leq c_2n^2\} \end{align*}

the authors write that $n_0 = 2\max(|b|/a, \sqrt{|c|/a})$.

I have difficulties proving that the value of $ n_0$ is indeed that value.

We know that $a > 0$, because otherwise $f$ would not be asymptotically nonnegative (and for $a = 0$ it would not be quadratic). Calculating the roots of $f$ gives us:

\begin{align*} n_{1,2} &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \\ &\leq \frac{|b| + \sqrt{b^2 - 4ac}}{a} \end{align*}

The case $ c \ge 0$ gives us:

\begin{align*} \frac{|b| + \sqrt{b^2 - 4ac}}{a} \leq \frac{|b| + \sqrt{b^2}}{a} = 2\,\frac{|b|}{a} \end{align*}

which is two times the first argument of the $ \max$ function.

But what about the case $ c < 0$ ? How can we find an upper bound for that? Where does the value $ \sqrt{|c|/a}$ actually come from?
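One can at least sanity-check the claim numerically. The CLRS discussion pairs this $n_0$ with the constants $c_1 = a/4$ and $c_2 = 7a/4$ (an assumption here, not restated in the excerpt above): for $n \geq n_0$, $n \geq 2|b|/a$ gives $|b|n \leq an^2/2$ and $n \geq 2\sqrt{|c|/a}$ gives $|c| \leq an^2/4$, so $an^2/4 \leq f(n) \leq 7an^2/4$. A quick numeric check:

```python
import math

def theta_bounds_hold(a, b, c, n_max=10_000):
    """Check a/4 * n^2 <= a*n^2 + b*n + c <= 7a/4 * n^2 for all
    integers n0 <= n < n_max, with n0 = 2 * max(|b|/a, sqrt(|c|/a)).
    Constants a/4 and 7a/4 follow the CLRS discussion; a > 0 assumed."""
    n0 = 2 * max(abs(b) / a, math.sqrt(abs(c) / a))
    for n in range(math.ceil(n0), n_max):
        f = a * n * n + b * n + c
        if not (a / 4 * n * n <= f <= 7 * a / 4 * n * n):
            return False
    return True
```

This covers the $c < 0$ case too: the second argument of the $\max$ exists precisely so that the $-|c|$ term cannot drag $f(n)$ below $an^2/4$.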

Convex quadratic approximation to binary linear programming

Munapo (2016, American Journal of Operations Research) purports to have a proof that binary linear programming is solvable in polynomial time, and hence that P=NP.

Unsurprisingly, it does not really show this.

Its results are based on a convex quadratic approximation to the problem with a penalty term whose weight $\ell$ needs to be infinitely large for the approximation to recover the true problem.

My questions are the following:

  1. Is this an approximation which already existed in the literature (I rather expect it did)?
  2. Is this approximation useful in practice? For example, could one solve a mixed integer linear programming problem by homotopy continuation, gradually increasing the weight $\ell$?

Note: After writing this question I discovered this related question: Time Complexity of Binary Linear Programming. The related question considers a specific binary linear programming problem, but mentions the paper above.
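To make the "weight must go to infinity" point concrete, here is a standard binary-enforcing penalty, $\ell \sum_i x_i(1-x_i)$, used purely as an illustration (I am not claiming it is exactly Munapo's term): on the box $[0,1]^n$ it is nonnegative and vanishes exactly on binary vectors, so any finite $\ell$ only discourages, never excludes, fractional solutions.

```python
def penalized_objective(c, x, l):
    """Linear objective c.x plus the binary-enforcing penalty
    l * sum(x_i * (1 - x_i)).

    On the box [0,1]^n the penalty is nonnegative and equals zero exactly
    on binary vectors, so larger l pushes minimizers toward {0,1}^n -- but
    for any finite l a fractional point can still be optimal, which is why
    the weight must grow without bound to recover the true binary program.
    """
    dot = sum(ci * xi for ci, xi in zip(c, x))
    penalty = sum(xi * (1.0 - xi) for xi in x)
    return dot + l * penalty
```

This is also the intuition behind the homotopy-continuation idea in question 2: re-solve the relaxation while ramping $\ell$ up, warm-starting each solve from the previous solution.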

Quadratic equations using 2 different approaches

I am reading Mark Newman’s Computational Physics, and in Exercise 4.2 (chapter 4, page 133) he asks:

a) Write a program that takes as input three numbers, a, b, and c, and prints out the two solutions to the quadratic equation $ax^2 + bx + c = 0$ using the standard formula $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. Use your program to compute the solutions of $0.001x^2 + 1000x + 0.001 = 0$.

b) There is another way to write the solutions to a quadratic equation. Multiplying top and bottom of the solution above by $-b \mp \sqrt{b^2 - 4ac}$, show that the solutions can also be written as $x = \frac{2c}{-b \mp \sqrt{b^2 - 4ac}}$. Add further lines to your program to print these values in addition to the earlier ones and again use the program to solve the same quadratic equation.

I tried both ways and a) gives me

[-9.99989425e-13 -1.00000000e+00] and

b) [-1.00000000e-06 -1.00001058e+06]

How can I tell which one is correct? And why is this happening?
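What you are seeing is catastrophic cancellation: since $4ac$ is tiny here, $\sqrt{b^2-4ac}$ is almost exactly $|b|$, and whichever formula subtracts those two nearly equal numbers loses most of its significant digits in one root. Each of your two formulas gets one root accurately and the other inaccurately, in opposite pairs. The usual fix is to take the cancellation-free root from each form. A sketch in Python:

```python
import math

a, b, c = 0.001, 1000.0, 0.001
d = math.sqrt(b * b - 4 * a * c)

# (a) standard formula: the "+" root subtracts nearly equal numbers
x1_std = (-b + d) / (2 * a)   # inaccurate root
x2_std = (-b - d) / (2 * a)   # accurate root

# (b) rationalized formula: now the other root suffers cancellation
x1_rat = 2 * c / (-b - d)     # accurate root
x2_rat = 2 * c / (-b + d)     # inaccurate root

# stable recipe: form q with an addition (no cancellation for either sign
# of b), then the two roots are q/a and c/q
q = -0.5 * (b + math.copysign(d, b))
x_large = q / a               # approx -1e6
x_small = c / q               # approx -1e-6
```

A handy consistency check: the product of the roots must equal $c/a$, which the stable pair satisfies to machine precision while the naive pairs do not.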