## Fastest Algorithm for Finding All-Pairs Shortest Paths on a Sparse Graph with Non-Negative Weights

As discussed here, Johnson's Algorithm can be used to solve the APSP problem in $$O(V^2\log V + VE)$$ instead of the $$O(V^3)$$ of Floyd-Warshall. However, Johnson's Algorithm does quite a bit of work to handle the case where some weights are negative. Is there a faster way to compute this given that all weights are positive?
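When every weight is non-negative, the Bellman-Ford reweighting phase of Johnson's algorithm can simply be dropped: run Dijkstra from each vertex directly. With a Fibonacci heap this is $$O(V^2\log V + VE)$$, the same bound, minus the extra $$O(VE)$$ preprocessing pass; with a binary heap it is $$O(V(E+V)\log V)$$. A minimal sketch in Python (the adjacency-list format is my assumption):

```python
import heapq

def all_pairs_dijkstra(adj):
    """All-pairs shortest paths for non-negative edge weights:
    run Dijkstra once per source, skipping Johnson's reweighting phase.
    adj: {u: [(v, w), ...]} with all w >= 0."""
    def dijkstra(src):
        dist = {src: 0}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    return {u: dijkstra(u) for u in adj}
```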

## Maximizing a nonnegative linear function over adjacency matrices with node degree constraints

Suppose $$A$$ is an $$n$$-by-$$n$$ symmetric matrix whose entries are all nonnegative, with $$A_{ii} = 0$$ for all $$i$$. We want to find an $$n$$-by-$$n$$ binary ($$0/1$$ valued) matrix $$X$$ that maximizes

$$\sum_{ij} A_{ij} X_{ij}$$

under the constraints that

1. $$X$$ is symmetric ($$X^\top = X$$);
2. Each row of $$X$$ can have at most $$k$$ ones (the rest being zero);
3. The total number of $$1$$s in $$X$$ is at most $$m$$.

Here $$k \le n$$ and $$m \le n^2$$. I can think of a dynamic programming solution if 2 and 3 were the only conditions, but the symmetry in condition 1 makes the problem much harder. Does there exist a polynomial-time algorithm that achieves a constant-factor approximation (under conditions 1, 2, and 3)? Ideally the constant is universal, not dependent on $$n$$, $$k$$, or $$m$$.

If not, is there any hope for the combination of conditions 1 and 2? The combination of 1 and 3 is trivial to handle.
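Conditions 1-3 make this a maximum-weight $$b$$-matching with a cardinality cap: choose edges $$\{i,j\}$$, each vertex in at most $$k$$ of them, and at most $$\lfloor m/2 \rfloor$$ edges in total (each chosen edge contributes two ones). A natural baseline is greedy selection in decreasing weight order; for plain maximum-weight matching greedy is the classical $$1/2$$-approximation, but whether a comparable constant carries over under all three constraints is exactly the open part of the question. A sketch (function name and interface are mine):

```python
import numpy as np

def greedy_X(A, k, m):
    """Greedy heuristic sketch (not a proven constant-factor algorithm):
    take entries (i, j), i < j, in decreasing A[i, j] order; set
    X[i, j] = X[j, i] = 1 whenever both rows still have degree budget
    (< k ones) and the total-ones budget allows two more."""
    n = A.shape[0]
    X = np.zeros((n, n), dtype=int)
    deg = np.zeros(n, dtype=int)
    ones = 0
    pairs = sorted(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda p: -A[p])
    for i, j in pairs:
        if deg[i] < k and deg[j] < k and ones + 2 <= m:
            X[i, j] = X[j, i] = 1
            deg[i] += 1
            deg[j] += 1
            ones += 2
    return X
```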

Thank you.

## Given an array of non-negative integers, find the number of subarrays of all sizes (1, 2, …, n) with sum less than k

My approach: arr is the input array and ans is the final array, where ans[i] denotes the number of subarrays of size i+1 (k is the sum threshold).

```cpp
int s = 0, e = 0, count = 0, sum = arr[0];
int ans[n];
memset(ans, 0, sizeof(ans));
while (s < n && e < n) {
    if (sum <= k) {
        e++;
        if (e >= s)
            ans[e - s - 1] += 1;
        if (e < n)
            sum += arr[e];
    } else {
        sum -= arr[s];
        s++;
    }
}
for (int i = n - 2; i >= 0; i--)
    ans[i] += ans[i + 1];
for (int i = 0; i < n; i++)
    cout << ans[i] << " ";
```

I don’t understand which cases I may be missing here.
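For comparison, here is a sketch of the same two-pointer plus suffix-sum idea in Python (my rewrite, counting sums strictly below the threshold, with the threshold renamed K to avoid clashing with the array index):

```python
def count_subarrays_by_size(arr, K):
    """ans[i] = number of subarrays of length i+1 with sum < K,
    assuming all entries of arr are non-negative."""
    n = len(arr)
    ans = [0] * n
    s = 0        # left end of the current window
    total = 0    # sum of arr[s..e]
    for e in range(n):
        total += arr[e]
        while total >= K and s <= e:
            total -= arr[s]
            s += 1
        L = e - s + 1          # longest valid window ending at e
        if L > 0:
            ans[L - 1] += 1    # end e contributes one window of each length 1..L
    for i in range(n - 2, -1, -1):
        ans[i] += ans[i + 1]   # suffix sums turn "max length" marks into counts
    return ans
```

Because the entries are non-negative, every sub-window of a valid window is also valid, which is what justifies marking only the maximal length per right end.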

## How to show that every quadratic, asymptotically nonnegative function is in $\Theta(n^2)$

In the book CLRS the authors say that every quadratic, asymptotically nonnegative function $$f(n) = an^2 + bn + c$$ is an element of $$\Theta(n^2)$$. Using the following definition

\begin{align*} \Theta(n^2) = \{h(n) \,|\, \exists c_1 > 0, c_2 > 0, n_0 > 0 \,\forall n \geq n_0: 0 \leq c_1n^2 \leq h(n) \leq c_2n^2\} \end{align*}

the authors write that $$n_0 = 2\max(|b|/a, \sqrt{|c|/a})$$.

I have difficulties proving that the value of $$n_0$$ is indeed that value.

We know that $$a > 0$$: with $$a = 0$$ the function would not be quadratic, and with $$a < 0$$ it would not be asymptotically nonnegative. Calculating the roots of $$f$$ gives us:

\begin{align*} n_{1,2} &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \\ &\leq \frac{|b| + \sqrt{b^2 - 4ac}}{a} \end{align*}

The case $$c \ge 0$$ gives us:

\begin{align*} \frac{|b| + \sqrt{b^2 - 4ac}}{a} \leq \frac{|b| + \sqrt{b^2}}{a} = 2\frac{|b|}{a} \end{align*}

which is two times the first argument of the $$\max$$ function.

But what about the case $$c < 0$$? How can we find an upper bound for that? Where does the value $$\sqrt{|c|/a}$$ actually come from?
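For the case $$c < 0$$, one bound that produces the $$\sqrt{|c|/a}$$ term (a sketch of where it can come from, not necessarily the argument CLRS intends) uses $$\sqrt{x + y} \leq \sqrt{x} + \sqrt{y}$$ for $$x, y \geq 0$$:

\begin{align*} \sqrt{b^2 - 4ac} &= \sqrt{b^2 + 4a|c|} \leq |b| + 2\sqrt{a|c|}, \\ \frac{|b| + \sqrt{b^2 - 4ac}}{a} &\leq \frac{2|b| + 2\sqrt{a|c|}}{a} = 2\frac{|b|}{a} + 2\sqrt{\frac{|c|}{a}} \leq 2n_0. \end{align*}

So the roots are at most $$2n_0$$ rather than $$n_0$$ itself; recovering the exact constant seems to require bounding $$|b|n$$ and $$|c|$$ separately against fractions of $$an^2$$ instead of going through the roots.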

## How to do arbitrary non-negative powered polynomial division in Mathematica?

Suppose I have a simple polynomial in $$a$$ and $$b$$, defined as $$a^k-b^k$$ for $$k\in \mathbb{Z}^{\geq0}$$. If I know one of the factors is $$(a-b)$$, is there a way to get a representation of its remaining factors in Wolfram Language?

I know one of its representations is $$\sum_{j=0}^{k-1} a^{(k-1)-j}\, b^j$$, and Mathematica recognizes that

```mathematica
Sum[a^((k - 1) - j) b^j, {j, 0, k - 1}]
```

gives

```mathematica
(a^k - b^k)/(a - b)
```

However, with

```mathematica
FullSimplify[(a^k - b^k)/(a - b), Assumptions -> k \[Element] NonNegativeIntegers]
```

it is unable to do anything.

Also, what is the proper way to give assumptions?

Is the above expression interpreted differently if I give it as

```mathematica
Assuming[k \[Element] NonNegativeIntegers, FullSimplify[(a^k - b^k)/(a - b)]]
```

I also tried factoring directly, without supplying a factor, with no success:

```mathematica
Assuming[k \[Element] NonNegativeIntegers, Factor[a^k - b^k]]
```
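As an aside, the identity itself is easy to sanity-check numerically, e.g. in Python (this says nothing about the Mathematica syntax question; it just confirms the claimed representation at sample points):

```python
def cofactor(a, b, k):
    # the claimed quotient: sum_{j=0}^{k-1} a^(k-1-j) * b^j
    return sum(a**(k - 1 - j) * b**j for j in range(k))

# (a - b) * cofactor(a, b, k) should equal a^k - b^k at any integer points
for k in range(0, 8):
    for a, b in [(2, 1), (5, 3), (7, -2)]:
        assert (a - b) * cofactor(a, b, k) == a**k - b**k
```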

## How many are the non-negative integer solutions of $x + y + w + z = 16$ where $x = y$?

I did the following: the number of solutions desired is equal to the number of non-negative integer solutions of $$2x + y + z = 16$$. Since $$x$$ is limited between 0 and 8, just look at each case. For $$x = 1$$: $$C(15,1) = 15$$. For $$x = 2$$: $$C(13,1) = 13$$. For $$x = 3$$: $$C(11,1) = 11$$. For $$x = 4$$: $$C(9,1) = 9$$. For $$x = 5$$: $$C(7,1) = 7$$. For $$x = 6$$: $$C(5,1) = 5$$. For $$x = 7$$: $$C(3,1) = 3$$. For $$x = 0$$: 1. The answer then will be $$15 + 13 + 11 + 9 + 7 + 5 + 3 + 1 = 66$$. But the right answer is 81. What am I doing wrong?
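A brute-force check in Python confirms the target value 81 and shows where the computation above goes off: the $$x = 0$$ case has $$y + z = 16$$, hence $$C(17,1) = 17$$ solutions rather than 1, and the $$x = 8$$ case (1 solution) is missing:

```python
# Count non-negative integer solutions of x + y + w + z = 16 with x = y
# (y is forced equal to x, so count solutions of 2x + w + z = 16).
terms = [sum(1 for w in range(17) for z in range(17) if w + z == 16 - 2 * x)
         for x in range(9)]
count = sum(terms)
print(terms)   # [17, 15, 13, 11, 9, 7, 5, 3, 1]
print(count)   # 81
```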

## Rank of a random sparse matrix with nonnegative reals

I believe this should be a standard result in random matrix theory, but my initial search failed to find a definitive answer.

The question: given a random sparse matrix $$M\in\mathbb{R}^{n\times m}$$ (in general $$m\geq n$$), each of whose entries is 0 with probability $$1/2$$ and a positive number with probability $$1/2$$, what is the expected rank?

Also, is the rank of this type of matrix equal to that of the corresponding binary matrix (all of the nonzero entries replaced with $$1$$)?

My intuition is that these matrices are full rank with very high probability, especially in the case of $$m\gg n$$ and large $$n$$, which is also confirmed by my numerical experiments.
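An experiment of the kind described might look like this (the sizes and the distribution of the positive entries are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 20, 40, 50
full_rank = 0
for _ in range(trials):
    mask = rng.random((n, m)) < 0.5                  # nonzero with probability 1/2
    M = mask * rng.uniform(0.1, 1.0, size=(n, m))    # positive values where nonzero
    if np.linalg.matrix_rank(M) == n:
        full_rank += 1
print(full_rank, "of", trials, "samples had full rank", n)
```

Note that over $$\mathbb{R}$$ the binary matrix can have strictly smaller rank than a positive-entry matrix with the same pattern: the all-ones $$2\times 2$$ matrix has rank 1, while a generic positive $$2\times 2$$ matrix has rank 2. So the two ranks are not equivalent in general, even if they typically agree for random patterns.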

Best, Jacek

## Determine the generating function for the number $f_n$ of non-negative integer solutions of an inequality

Determine the generating function for the number $$f_n$$ of non-negative integer solutions of the inequality $$3x_1+4x_2+x_3+7x_4\leq n$$. What is the approximate value of $$f_{10000}$$?

So I don’t have any idea how to begin with this. The only thing that came to my mind is that the solutions of the inequality are the same as those of the equation $$3x_1+4x_2+x_3+7x_4+x_5 = n$$ (with a slack variable $$x_5$$), or equivalently the sum of the numbers of solutions of the original form with right-hand sides $$n, n-1, n-2, \dots$$

Anyway, can I get a hint?

I believe I have to multiply some series and arrive at fractions looking like $$\frac{1}{1-x}$$, but I don’t know how to get there.
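Following that idea in code: with a slack variable $$x_5$$ the inequality becomes the equation $$3x_1+4x_2+x_3+7x_4+x_5=n$$, and each variable contributes a geometric-series factor, giving the candidate $$f(x)=\frac{1}{(1-x)^2(1-x^3)(1-x^4)(1-x^7)}$$. A quick Python check of that candidate against brute force (the recurrence below multiplies in one truncated geometric series at a time):

```python
def brute(n):
    # direct count of non-negative solutions of 3*x1 + 4*x2 + x3 + 7*x4 <= n
    return sum(1
               for x1 in range(n // 3 + 1)
               for x2 in range(n // 4 + 1)
               for x4 in range(n // 7 + 1)
               for x3 in range(n + 1)
               if 3 * x1 + 4 * x2 + x3 + 7 * x4 <= n)

N = 30
coef = [0] * (N + 1)
coef[0] = 1
for d in (1, 1, 3, 4, 7):          # one factor 1/(1 - x^d) per variable (x5 is the slack)
    for i in range(d, N + 1):
        coef[i] += coef[i - d]     # multiply the truncated series by 1/(1 - x^d)

assert all(coef[n] == brute(n) for n in range(N + 1))
```

For the approximation: the pole at $$x=1$$ has order 5, so $$f_n \sim \frac{n^4}{4!\cdot(1\cdot 1\cdot 3\cdot 4\cdot 7)} = \frac{n^4}{2016}$$, giving $$f_{10000}\approx \frac{10^{16}}{2016}\approx 5.0\times 10^{12}$$.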

## Solution of a manipulated equation vs the maximum eigenvalue and eigenvector of a non-negative matrix

Let’s assume we have the following equation:

$$AU=\lambda U \Rightarrow\left[ \begin{array}{c|c|c} 0 & A_{12} & A_{13}\\ \hline A_{21} & 0 & A_{23}\\ \hline A_{31} & A_{32} & 0 \end{array} \right]\left[ \begin{array}{c} u_1\\ \hline u_2\\ \hline u_3 \end{array} \right]=\lambda \left[ \begin{array}{c} u_1\\ \hline u_2\\ \hline u_3 \end{array} \right]$$

where the left matrix, denoted $$A$$, is a non-negative irreducible block matrix and $$\left[ \begin{array}{c} u_1\\ \hline u_2\\ \hline u_3 \end{array} \right]$$ is a vector partitioned into the subvectors $$u_1$$, $$u_2$$ and $$u_3$$. Clearly, this vector is the right eigenvector of $$A$$ with eigenvalue $$\lambda$$.

I want to find a solution for $$V$$ and $$\mu$$ in the following equation in terms of $$\lambda$$ and $$U$$. In particular, I am looking at the case where $$\lambda$$ is the dominant eigenvalue of $$A$$, which, according to the Perron-Frobenius Theorem, is real and positive with a positive eigenvector $$U$$.

$$\left[ \begin{array}{c|c|c} 0 & A_{12} & A_{13}\\ \hline \mu A_{21} & 0 & A_{23}\\ \hline \mu A_{31} & \mu A_{32} & 0 \end{array} \right]\left[ \begin{array}{c} v_1\\ \hline v_2\\ \hline v_3 \end{array} \right]=\mu \left[ \begin{array}{c} v_1\\ \hline v_2\\ \hline v_3 \end{array} \right]$$

This problem is related to my PhD research and any suggestion is highly appreciated.
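Numerically, the second equation is a one-parameter nonlinear eigenvalue problem: writing $$B(\mu)$$ for the matrix whose lower blocks are scaled by $$\mu$$, one looks for $$\mu > 0$$ with $$\rho(B(\mu)) = \mu$$. A bisection sketch in Python (block sizes and entries are invented; the uniqueness of the crossing is an assumption based on experiments, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random nonnegative matrix with three 2x2 zero diagonal blocks,
# mimicking the block structure in the question (sizes/entries invented).
A = rng.uniform(0.1, 1.0, size=(6, 6))
for i in range(3):
    A[2 * i:2 * i + 2, 2 * i:2 * i + 2] = 0.0

def B(mu):
    """Scale the strictly lower block-triangular part of A by mu."""
    M = A.copy()
    M[2:4, 0:2] *= mu
    M[4:6, 0:4] *= mu
    return M

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

# Bisection for rho(B(mu)) = mu: near 0 the spectral radius dominates mu
# (short cycles use few scaled blocks), while for large mu it grows
# sublinearly, so the curves cross.
lo, hi = 1e-6, 1.0
while rho(B(hi)) > hi:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rho(B(mid)) > mid:
        lo = mid
    else:
        hi = mid
mu = 0.5 * (lo + hi)
print("mu =", mu, "residual =", abs(rho(B(mu)) - mu))
```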

## Non-Negative irreducible matrices with random (correlated or independent) non-zero entries

Let $$M$$ be a non-negative irreducible matrix. According to the Perron-Frobenius Theorem, the maximum eigenvalue of $$M$$, $$\lambda$$, is positive and equal to its spectral radius $$\rho(M)$$.

Now assume the matrix $$M$$ is not deterministic and its nonzero elements are random variables $$\tanh(x_i)$$ with $$x_i\sim N(m>0, \sigma^2)$$, while the zero elements remain the same deterministic zeros as before. My question is what happens to the expected value of the maximum eigenvalue when the $$x_i$$ are correlated, compared to the case where they are independent.

My observation is that positive correlation among the non-zero entries increases the expected maximum eigenvalue compared to the case where the entries are independent, but I am not able to justify this observation.
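One way to make the comparison concrete is a toy experiment like the following (the zero pattern, the extreme "one shared Gaussian" notion of correlation, and all parameters are my own choices; which mean comes out larger may depend on them):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_mean, sigma, trials = 8, 1.0, 1.0, 500   # all parameters are made up
off = ~np.eye(n, dtype=bool)   # nonzero pattern: every off-diagonal entry
k = int(off.sum())

def lam_max(x):
    """Spectral radius of the matrix whose nonzero entries are tanh(x)."""
    M = np.zeros((n, n))
    M[off] = np.tanh(x)
    return max(abs(np.linalg.eigvals(M)))

# independent x_i versus fully correlated x_i (a single shared Gaussian)
indep = np.mean([lam_max(rng.normal(m_mean, sigma, k)) for _ in range(trials)])
corr = np.mean([lam_max(np.full(k, rng.normal(m_mean, sigma))) for _ in range(trials)])
print(f"independent: {indep:.3f}   fully correlated: {corr:.3f}")
```

In the fully correlated case the matrix is $$\tanh(x)(J - I)$$, so its maximum eigenvalue is exactly $$(n-1)\tanh(x)$$, which gives one of the two means in closed form and isolates what the averaging over independent entries changes.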