Correlation not found (linear regression) problem

I am new to machine learning and am trying to develop my knowledge and skills through projects. While doing so, I encountered a problem: I couldn't find a variable that is well correlated with the target. The highest correlation coefficient I found was 0.44, so I made a scatter plot to see how the two variables behave, in order to choose between a polynomial regression model and a linear regression model. It came out as shown in the scatter plot, and I am clueless about what to do.
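A correlation coefficient only measures linear association, so an $r$ of 0.44 can coexist with a clear nonlinear trend. As a minimal sketch with synthetic data (the original dataset isn't shown, so the feature and noise level here are made up), one way to move beyond eyeballing the scatter plot is to compare a degree-1 and a degree-2 least-squares fit by their residual error:

```python
import numpy as np

# Made-up stand-in data: a quadratic signal plus noise, so the linear
# correlation is modest even though the trend is real.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 0.3 * x**2 - x + rng.normal(scale=8.0, size=200)

r = np.corrcoef(x, y)[0, 1]  # Pearson correlation captures only the linear part

def sse(degree):
    """Sum of squared residuals of a polynomial least-squares fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

print(f"correlation: {r:.2f}")
print(f"SSE linear: {sse(1):.0f}, SSE quadratic: {sse(2):.0f}")
```

Note that in-sample error always favors the higher degree at least slightly, so look for a large gap (or use cross-validation) before preferring the polynomial model.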

Sorting an array of strings by length in linear complexity

I am trying to find an algorithm to sort an array of strings by length in O(n) time and O(1) space. The maximum length of the strings is known, so I tried counting sort, but I failed to achieve O(1) space. Because multiple strings might have the same length, I need to store the strings in some kind of 2D array with one bucket per possible length, and that is O(n) space in the worst case, where all strings have the same length. How would I go about solving this problem?
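For reference, here is a sketch of the bucketed counting sort the question describes (the function name is mine); it makes visible exactly where the extra space comes from:

```python
def sort_by_length(strings, max_len):
    """Counting sort on string length: O(n + max_len) time.

    The buckets hold references to all n strings, so the auxiliary space
    is O(n + max_len), not O(1) -- this is precisely the barrier described
    in the question.
    """
    buckets = [[] for _ in range(max_len + 1)]  # one bucket per possible length
    for s in strings:
        buckets[len(s)].append(s)
    # Concatenate buckets in increasing length order (stable within a length).
    return [s for bucket in buckets for s in bucket]

print(sort_by_length(["abc", "a", "ab", "b"], 3))  # ['a', 'b', 'ab', 'abc']
```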

How can I quickly find the dual of a linear program?

In linear programming, the standard maximum form of a program (which we will call the “primal”) is

max $c^{tr}x$ subject to

$Ax \le b$, $x \ge 0$

and the standard minimum form, the dual, is

min $b^{tr}y$ subject to

$A^{tr}y \ge c$, $y \ge 0$

Given a primal program, how can I easily construct the dual without writing out a huge matrix $A$ with lots of zero coefficients and different cases? For instance, is there a visual method or neat trick I can use to calculate the dual of the following program (which represents a zero-sum game variant in which one player has to use the same strategy against two other players)?

max $z + z'$ subject to

$z - \sum_i p_i B_{ij} \le 0$ for each $j$

$z' - \sum_i p_i C_{ij} \le 0$ for each $j$

$\sum_i p_i = 1$

$p_i, z, z' \ge 0$

($B$ and $C$ are $m\times n$ matrices with positive entries, and so $p$ is a vector in $\mathbb{R}^m$.)
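For what it's worth, applying the mechanical recipe (one dual variable per primal constraint, with "$\le$" rows giving nonnegative dual variables and the equality row giving a free one; one "$\ge$" dual constraint per nonnegative primal variable), the dual of the program above should come out roughly as follows. This is a hand derivation, so the signs are worth double-checking:

```latex
\min v \quad \text{subject to}

\sum_j y_j \ge 1, \qquad \sum_j w_j \ge 1,

v \ge \sum_j B_{ij} y_j + \sum_j C_{ij} w_j \quad \text{for each } i,

y_j, w_j \ge 0, \quad v \text{ free,}
```

where $y_j$, $w_j$, and $v$ are the dual variables attached to the $z$-rows, the $z'$-rows, and the $\sum_i p_i = 1$ row, respectively.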

Solving a linear program with simplex algorithm, matrix not full rank

I have the matrix $A$:

$$\begin{bmatrix} 1 & 3 & 1 & 0 & 0 \\ -2 & -2 & 0 & 1 & 0 \\ 2 & 4 & 0 & 0 & 1 \end{bmatrix}$$

I need to solve the LP: $$\min\ -x_1 - x_2 \quad \text{subject to} \quad Ax = (2,2,4)^T$$

This matrix is not full rank. Is the only way to solve this problem, then, to create an equivalent problem with a full-rank matrix? For instance:

$$\min \left\{ (c,-c)\,(x^+,x^-)^T : \begin{bmatrix} A & -A \\ -I & 0 \\ 0 & -I \end{bmatrix} (x^+,x^-)^T \leq (b,0,0)^T \right\}$$

but I don't know how to solve a problem with $x^+$ and $x^-$. Do you have an example?

So my question is how to solve such a problem, and, if you'd recommend my method, how to use it, especially concerning the $x^+$ part.
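As a concrete sketch (assuming the usual nonnegativity $x \ge 0$ of simplex standard form, and using `scipy.optimize.linprog` rather than a hand-rolled simplex), the LP above can be solved directly; the $x^+ - x^-$ split is only needed for variables that are unrestricted in sign. Note also that this particular $A$ contains a $3\times 3$ identity block in its last three columns, so its rows are in fact linearly independent:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 3, 1, 0, 0],
              [-2, -2, 0, 1, 0],
              [2, 4, 0, 0, 1]], dtype=float)
b = np.array([2.0, 2.0, 4.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

# Direct solve with x >= 0 (the usual standard form for the simplex method).
res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

# If some variable x_j were instead free (unrestricted in sign), the
# standard split is x_j = x_j_plus - x_j_minus with both parts >= 0:
# append the negated column -A[:, j] and negated cost -c[j], solve, and
# recover x_j as the difference of the two nonnegative parts.
print(res.fun, res.x)
```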

Prove that these linear programming problems are bounded by $O(k^{1/2})$

The expanded partial sums of the Möbius inverse of the Harmonic numbers have two out of three properties in common with this set of linear programming problems:

$$\begin{array}{ll} \text{minimize} & \displaystyle\sum_{n=1}^{k} \frac{x_{n}}{n} \\ \text{subject to constraints:} & k + \displaystyle\sum_{n=2}^{k} x_{n} = 1 \\ & x_1 \geq -1 \end{array}$$

for all $k$, and for $n>1$:

$$-2(n-1) \leq x_n \leq 0 \tag{4}$$

That is, there is one linear programming problem for each $k$.

Based on an OEIS search, the solutions $f(k)$ to the linear programming problems (without the first column) appear to have the asymptotic:

$$f(k)=-\left(2 \sqrt{k}-2 \log \left(\sqrt{k}+1\right)-2 \gamma +1\right) \tag{5}$$

Is it true?

Please don't be so harsh on me. In case the problem is ill defined in the LaTeX, here is the short Mathematica program with which I defined the optimization problem:

(*start*)
nn = 180;
TableForm[
 L2 = Table[
   LinearProgramming[Table[1/n, {n, 1, k}],
    {Table[If[n == 1, k, 1], {n, 1, k}]}, {{1, 0}},
    Table[If[n == 1, {-1, 1}, {-2 (n - 1), 0 (n - 1)}], {n, 1, k}]],
   {k, 1, nn}]];
t1 = Table[Sum[L2[[n, k]]/k, {k, 2, n}], {n, 2, nn}];
t2 = Table[-(2*k^(1/2) + 1 - 2*Log[k^(1/2) + 1] - 2*EulerGamma), {k, 2, nn}];
Show[ListLinePlot[t1], ListLinePlot[t2, PlotStyle -> Red]]
ListLinePlot[t1/t2]

The blue curve is the linear programming minimum and the red curve is the asymptotic.


The ratio between the linear programming minimum and the asymptotic tends to one.

So as I said this is NOT a bound on the partial sums of the Möbius inverse of the Harmonic numbers.

The solutions $x_1, \dots, x_k$ to the $k$-th linear programming problem form a number triangle:

$$\begin{array}{llllllllllll}
1 \\
1 & -1 \\
1 & -2 & 0 \\
1 & -2 & -1 & 0 \\
1 & -2 & -2 & 0 & 0 \\
1 & -2 & -3 & 0 & 0 & 0 \\
1 & -2 & -4 & 0 & 0 & 0 & 0 \\
1 & -2 & -4 & -1 & 0 & 0 & 0 & 0 \\
1 & -2 & -4 & -2 & 0 & 0 & 0 & 0 & 0 \\
1 & -2 & -4 & -3 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & -2 & -4 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & -2 & -4 & -5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}$$

The first column here equals the all-ones sequence because Mathematica's LinearProgramming command seems to require it. But setting the constraint to begin with $k$ (in the linear program at the beginning of the question) makes it equivalent to the first column in the numerators for the partial sums of the Möbius inverse of the Harmonic numbers.

Counting only the negative entries in each row, an OEIS search suggests that their number is the nearest integer to $\sqrt{n}$, and from there it becomes easy to conjecture formula $(5)$.

Notice also that for all $k$:

$$\left\lfloor k^{\epsilon+\frac{1}{2}}+\frac{1}{2}\right\rfloor=\left\lfloor \sqrt{k}+\frac{1}{2}\right\rfloor$$

holds for sufficiently small $\epsilon > 0$.

The partial sums of the Möbius inverse of the Harmonic numbers have the numerators:

$$J(m,k)=\begin{array}{lllllll}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & -1 & 0 & 0 & 0 & 0 & 0 \\
3 & 0 & -2 & 0 & 0 & 0 & 0 \\
4 & -1 & -1 & -1 & 0 & 0 & 0 \\
5 & 0 & 0 & 0 & -4 & 0 & 0 \\
6 & -1 & -2 & -1 & -3 & 2 & 0 \\
7 & 0 & -1 & 0 & -2 & 3 & -6
\end{array}$$

given by the sum:

$$\sum_{n=1}^{m} \text{If}[n\geq k,\ a(\gcd(n,k)),\ 0]$$

for $n=1,\dots,m$, $k=1,\dots,N$, $m=1,\dots,N$, where $a(n)$ is the Dirichlet inverse of the Euler totient function.
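Since $a(n)$ and $J(m,k)$ are defined purely arithmetically, the table above can be cross-checked in a few lines (a sketch in Python rather than Mathematica; the helper names are mine):

```python
from math import gcd

def mobius(n):
    """Möbius function mu(n) by trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def a(n):
    """Dirichlet inverse of Euler's totient: a(n) = sum_{d|n} d*mu(d)."""
    return sum(d * mobius(d) for d in range(1, n + 1) if n % d == 0)

def J(m, k):
    """J(m,k) = sum_{n=1}^{m} [n >= k] * a(gcd(n, k))."""
    return sum(a(gcd(n, k)) for n in range(k, m + 1))

print([J(7, k) for k in range(1, 8)])  # last row of the table: [7, 0, -1, 0, -2, 3, -6]
```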

The properties are:

$$\sum_{k=1}^{n} \frac{J(n,k)}{k}=\lim_{s\to 1^{+}} \sum_{m=1}^{N} H_m \sum_{d\mid m}\frac{\mu(d)}{d^{s-1}}$$

which is the partial sums of the Möbius inverse of the $m$-th harmonic number

$$\sum_{k=1}^{n} J(n,k)=1$$

as in the constraint in the linear programming problem, and

$$J(n,1)=n$$

as in the linear programming problem (but there it is in the constraint and not in the goal function, because of a Mathematica technicality).

The last property, for all $ n$ :

$$-2(k-1) \leq J(n,k) \leq 2(k-1)$$

is conjectural and differs from the linear programming problem. This last conjectural property should not be too hard to prove.

(*Numerators of the partial sums of the Möbius inverse of the Harmonic numbers*)
(*start*)
Clear[T, n, k, a];
nn = 7;
a[n_] := If[n < 1, 0, Sum[d MoebiusMu@d, {d, Divisors[n]}]]
TableForm[
 M = Table[
   Table[Sum[If[n >= k, a[GCD[n, k]], 0], {n, 1, m}], {k, 1, nn}],
   {m, 1, nn}]]
(*end*)

I previously asked this yesterday at Mathematics Stack Exchange, where I was not understood. I also asked about the notation at Mathematica Stack Exchange.

Decidability of linear equation about Sine and Cosine

Given integers $n, d$ and rational numbers $a_i, b_i, l_{i,j}, s_{i,j}$ for $1\leq i\leq d$, $1\leq j\leq n$, we consider the following equation: $$\sum_{i} \left[a_i \sin\Big(\sum_j l_{i,j}\theta_j\Big)+b_i \cos\Big(\sum_j s_{i,j}\theta_j\Big)\right]=0.$$ The question is whether there exist $\theta_j\in\mathbb{R}$ that satisfy the above equation.

Is this problem decidable? If so, what does the algorithm look like?
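For what it's worth, one standard first step (a sketch of the usual reduction, not a full decision procedure): after clearing the denominators of the rational coefficients, every argument becomes an integer multiple of a common rescaled angle per $\theta_j$, and $\sin(nt)$, $\cos(nt)$ are polynomials in $\sin t$ and $\cos t$ (multiple-angle / Chebyshev-style identities). Together with the circle constraints $s_j^2 + c_j^2 = 1$, solvability then becomes an existential question about a polynomial system over the reals, which is decidable by Tarski's theorem on real closed fields. A numeric spot-check of two such identities:

```python
import math

# sin(3t) = 3 sin t - 4 sin^3 t, cos(2t) = 2 cos^2 t - 1
t = 0.7
sin3 = 3 * math.sin(t) - 4 * math.sin(t) ** 3  # polynomial form of sin(3t)
cos2 = 2 * math.cos(t) ** 2 - 1                # polynomial form of cos(2t)
print(abs(math.sin(3 * t) - sin3), abs(math.cos(2 * t) - cos2))
```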