## Solve a system of SDEs by the Euler-Maruyama or Milstein approximation

How do I solve a system of equations for an Itô process using the Euler-Maruyama and Milstein approximation methods? The problem is given in the picture below.
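The picture referenced in the question is not reproduced here, so the sketch below is generic: an Itô system $$dX = a(X,t)\,dt + b(X,t)\,dW$$ with diagonal noise (one independent Wiener increment per component) — that noise structure, and the function names, are my assumptions, not part of the question. Euler-Maruyama is shown for the system, plus a scalar Milstein step, which adds the $$\tfrac12 b\,b'\,((\Delta W)^2 - \Delta t)$$ correction:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, n_steps, rng=None):
    """Euler-Maruyama for the Ito system dX = a(X, t) dt + b(X, t) dW.

    Assumes diagonal noise: diffusion(x, t) returns one coefficient per
    component, each driven by an independent Wiener increment.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = (t1 - t0) / n_steps
    x = np.array(x0, dtype=float)
    t = t0
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + drift(x, t) * dt + diffusion(x, t) * dw
        t += dt
        path.append(x.copy())
    return np.array(path)

def milstein_scalar(drift, diffusion, d_diffusion, x0, t0, t1, n_steps, rng=None):
    """Scalar Milstein scheme: the Euler-Maruyama step plus the correction
    0.5 * b * b' * (dW^2 - dt), which raises the strong order from 0.5 to 1.
    d_diffusion must return b'(x, t), the x-derivative of the diffusion."""
    rng = np.random.default_rng() if rng is None else rng
    dt = (t1 - t0) / n_steps
    x, t = float(x0), t0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        b = diffusion(x, t)
        x += drift(x, t) * dt + b * dw + 0.5 * b * d_diffusion(x, t) * (dw * dw - dt)
        t += dt
    return x
```

For example, a pair of coupled geometric Brownian motions would use `drift = lambda x, t: mu * x` and `diffusion = lambda x, t: sigma * x` with vectors `mu`, `sigma`; with the diffusion set to zero both schemes reduce to the ordinary Euler method.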

## Approximation of a function part of the input of NMaximize

I need to maximize the following function (the input to NMaximize below)

    NMaximize[{((1/3) + f[p]*(1 - p)*(((1/(Sqrt[2]/2))*p)^2 - 1))/(1 + (1 - p)*(((1/(Sqrt[2]/2))*p)^2 - 1)), p >= Sqrt[2]/2, p <= 1}, {p}]

where $$f(\cdot)$$ is defined as follows

    f[p_?NumericQ] := NMinimize[{-((a + a^2 + b - 2 a b + b^2 + c - 2 a c - 2 b c + c^2)/((-1 + a) (a + b + c))), 0 <= a, a <= b, b <= c, c <= 1, c <= a + b, (a + b + c)/3 <= p}, {a, b, c}, Method -> "DifferentialEvolution"][[1]]

However, as expected, the computation does not terminate, and several warning messages are produced. For the underlying mathematical problem I am trying to solve, I could replace $$f(\cdot)$$ by a simple function $$g(\cdot)$$ that approximates it, but I need $$g(p)\le f(p)$$ for all $$p\in[0,1]$$. I tried InterpolatingPolynomial with a few values of $$f(\cdot)$$. However, $$f(\cdot)$$ is neither concave nor convex on $$[0,1]$$, and I am struggling to obtain a good approximation of $$f(\cdot)$$ that satisfies $$g(p)\le f(p)$$ on $$[0,1]$$.

Do you know of any method for generating such an approximating function $$g(\cdot)$$ (or for solving the maximization problem in a different way)?
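One technique that guarantees the one-sided bound, regardless of convexity, is a Lipschitz lower envelope: if $$f$$ is $$L$$-Lipschitz, then $$g(p) = \max_i \bigl(f(p_i) - L\,|p - p_i|\bigr)$$ satisfies $$g(p)\le f(p)$$ everywhere and is tight at the sample points. The sketch below (in Python, with a toy stand-in for the expensive $$f$$ — the real question's $$f$$ would need its own Lipschitz bound, which is my assumption here) illustrates the construction:

```python
import numpy as np

def lipschitz_lower_bound(f, grid, L):
    """Build g with g(p) <= f(p) for all p, assuming f is L-Lipschitz.

    g(p) = max_i f(p_i) - L * |p - p_i| is a piecewise-linear lower
    envelope: each cone f(p_i) - L|p - p_i| lies below f by the Lipschitz
    property, so their pointwise maximum does too, and g(p_i) = f(p_i).
    """
    grid = np.asarray(grid, dtype=float)
    values = np.array([f(p) for p in grid])

    def g(p):
        return float(np.max(values - L * np.abs(p - grid)))

    return g
```

Applied to the question, one would sample $$f$$ on a grid over $$[\sqrt2/2, 1]$$ and hand the resulting piecewise-linear $$g$$ to NMaximize in place of $$f$$; the quality of the bound improves with a denser grid and a sharper Lipschitz constant.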

## Approximation concerning Asymmetric TSP, Symmetric TSP, and Metric TSP

I always considered Symmetric TSP to be inapproximable in general, and thus by extension Asymmetric TSP as well. Once you add the triangle inequality, however, you obtain Metric TSP (which can be symmetric or asymmetric), which is approximable (e.g., by the Christofides algorithm).

However, I’m having doubts after finding the following paper:

An improved approximation algorithm for ATSP
Vera Traub, Jens Vygen (https://arxiv.org/pdf/1912.00670.pdf)

In their paper, there is no mention of Metric TSP, or the triangle inequality. Does this mean that I’m misunderstanding, i.e. Asymmetric TSP is in fact approximable, even without the triangle inequality?

## Prove: max(w(E), w(E)) is a 1/2-approximation to the value OPT

Hi, I would like to find an answer to part (b). For part (a), see the picture, which shows my answer. But I don’t have any idea how to solve this. Thank you. (I had to translate it into English, so it may be hard to understand; sorry about that.)

## Job Scheduling approximation

Consider the following greedy algorithm for Job Scheduling: assign each new task to the processor with the shortest usage time. How can one prove that this algorithm has an approximation ratio of 2?

Suppose that once the algorithm has finished, processor $$1$$ is the busiest, and let task $$\ell$$ be the last task assigned to it. Let $$s$$ be the usage time of processor $$1$$ before task $$\ell$$ was added. The algorithm has therefore found a feasible solution of value $$u_1 = s + t_\ell$$.

I’m stuck on how to show that:

• for every $$j$$ with $$1 \leq j \leq k$$, we have $$s \leq \sum_{i: \alpha(i)=j}t_i$$;
• $$s \leq \frac{1}{k} \sum_i t_i \leq u^*$$ and $$t_\ell \leq u^*$$, and use these results to bound $$u_1$$ ($$u^*$$ being the optimal value).

Here is the definition of the Job Scheduling problem

Input: A set of $$n$$ tasks with lengths $$t_1, t_2, \ldots, t_n \in \mathbb N$$, and $$k$$ processors.

A feasible solution is a function $$\alpha: \{1, \ldots ,n\} \rightarrow \{1, \ldots k\}$$ which assigns each task to a processor.

The usage time $$u_j$$ of a processor $$j$$ is the sum of the lengths of all the tasks assigned to it, that is to say that $$u_j = \sum_{i: \alpha(i)=j}t_i$$.

We try to minimize $$\max_j u_j$$, that is to say, the usage time of the most heavily used processor.
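The greedy rule from the question (known as list scheduling) can be sketched as follows; the 2-approximation then follows from the two bullet points above, since $$u_1 = s + t_\ell \leq u^* + u^* = 2u^*$$. Function names are mine:

```python
def greedy_schedule(lengths, k):
    """List scheduling: assign each task, in order, to the processor
    with the currently smallest usage time.

    Returns (assignment, makespan), where assignment[i] is the processor
    of task i and makespan is max_j u_j.
    """
    loads = [0] * k
    assignment = []
    for t in lengths:
        j = min(range(k), key=lambda m: loads[m])  # least-loaded processor
        assignment.append(j)
        loads[j] += t
    return assignment, max(loads)
```

On `lengths = [2, 3, 4, 5]` with `k = 2` the greedy makespan is 8 while the optimum is 7 (pair 2 with 5 and 3 with 4), so the ratio 2 is not tight on this instance but the bound holds.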

## If there is a polynomial-time approximation to an NP-complete problem, is P approximately NP?

Deciding bipartite hypergraph coloring is NP-hard:

While for bipartite graphs a 2-coloring can be found in linear time, it was shown by Lovasz [10] that the problem to decide whether a given k-uniform hypergraph is bipartite is NP-complete for all k≥3.

Bipartite hypergraphs are colorable in expected (average) polynomial time:

The purpose of this note is to present an algorithm that colors a hyper-graph chosen uniformly at random from the family of all labeled, 3-uniform, bipartite hypergraphs on n vertices in O(n^5 * log (2n)) expected time.

Does this imply that P is approximately NP?

Source: https://www.math.uni-hamburg.de/home/schacht/abstracts/09eurocomb.pdf

## Piecewise linear approximation for set of points

If we have a set of points $$A$$ and a known algorithm $$Bestfit(A)$$ to find the best-fit straight line segment through $$A$$, what would be a good algorithm to construct a piecewise-linear approximation (with at most $$k$$ segments) that better approximates the set of points and reduces the error $$E$$?

## Approximation Set Covering Problem

I am studying the approximation algorithm for the Set Covering problem from the book “Introduction to Algorithms” by Thomas H. Cormen. What I cannot understand is why they use harmonic numbers in the proof. What do harmonic numbers have to do with this? I am totally confused.
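The harmonic numbers come from a charging argument: when the greedy algorithm picks a set that covers $$r$$ still-uncovered elements, each of those elements is charged $$1/r$$. Inside any set $$S$$ of the optimal cover, the elements are covered in some order, and the charges are at most $$\frac{1}{|S|} + \frac{1}{|S|-1} + \cdots + 1 = H(|S|)$$, which bounds the greedy cost by $$H(d)\cdot OPT$$ where $$d$$ is the largest set size. A minimal sketch of the greedy algorithm and $$H(d)$$ (instance and names are mine, not Cormen's):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= best
    return chosen

def harmonic(d):
    """H(d) = 1 + 1/2 + ... + 1/d, the greedy approximation factor
    when the largest set has d elements."""
    return sum(1.0 / i for i in range(1, d + 1))
```

So the answer to "what do harmonic numbers have to do with this" is: $$H(d)$$ is exactly the worst-case sum of the per-element charges that any one optimal set can absorb.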

## Difference between the north-west corner method and Vogel Approximation Method

The North-west Corner Method returns $$z = 5425$$.

The Vogel Approximation Method returns $$z = 4525$$.

Which is correct?
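Both can be "correct": each method only produces a feasible starting solution for the transportation simplex, and the north-west corner rule ignores costs entirely, so its $$z$$ is usually worse (here higher) than Vogel's; neither is guaranteed optimal without further improvement steps. Since the question's cost table is not shown, the sketch below runs the north-west corner rule on a hypothetical instance (supply, demand, and costs are my invention):

```python
def northwest_corner(supply, demand):
    """North-west corner rule: fill cells left-to-right, top-to-bottom,
    exhausting each row's supply or column's demand in turn (costs ignored)."""
    s, d = list(supply), list(demand)
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1          # row exhausted, move down
        else:
            j += 1          # column exhausted, move right
    return alloc

def total_cost(alloc, costs):
    """z = sum of allocation * unit cost over all cells."""
    return sum(a * c
               for row_a, row_c in zip(alloc, costs)
               for a, c in zip(row_a, row_c))
```

Vogel's method instead allocates by largest penalty (difference between the two cheapest costs in a row or column), which is why it typically lands closer to the optimum.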

## Approximation algorithms for indefinite quadratic form maximization with linear constraints

Consider the following program: \begin{align} \max_x~ & x^TQx \\ \text{s.t.}~ & Ax \geq b \end{align} where $$Q$$ is a symmetric (possibly indefinite) matrix and the inequality is element-wise and constrains feasible solutions to a convex polytope.

This problem is NP-hard to solve, but what approximation results are known?

A relevant result is given by (Kough 1979). It is shown that this program can be optimized using Benders’ decomposition to within $$\epsilon$$ of the optimum. However, the paper does not seem to clearly specify what this means, or the complexity of the procedure.

I believe the $$\epsilon$$-approximation is in the usual sense employed in the field of mathematical programming, that is: if $$OPT$$ is the optimal value of the program, $$ALG$$ is the value returned by the above procedure, and $$MIN$$ is the minimal value attainable by a feasible solution, then $$\frac{ALG-MIN}{OPT-MIN} \geq 1-\epsilon,$$ or something of the sort.

Questions:

• Is the mentioned procedure a polynomial-time algorithm?
• Are there known polynomial-time algorithms yielding approximations to the above program in the traditional sense, i.e. $$ALG \geq \alpha\, OPT$$ for some $$\alpha < 1$$, constant or not?

Kough, Paul F. “The indefinite quadratic programming problem.” Operations Research 27.3 (1979): 516-533.