How to efficiently find all positive linear dependencies between some vectors

I’ve got these vectors

vecs= {{0,1,0,0,0,0,0,-1,0},    {1,-1,1,0,0,0,-1,1,-1},  {1,0,-1,1,0,-1,1,0,-1},  {1,0,-1,1,0,0,-1,0,1},   {1,0,0,-1,0,1,0,0,-1},   {1,0,0,-1,1,-1,1,-1,1},  {1,0,0,0,-1,0,0,1,0},    {-1,0,1,0,0,-1,1,0,-1},  {-1,0,1,0,0,0,-1,0,1},  {-1,1,-1,1,-1,1,0,0,-1}, {-1,1,-1,1,0,-1,1,-1,1}, {-1,1,0,-1,0,1,0,-1,1},   {-1,1,0,-1,1,-1,0,1,0},  {0,-1,0,0,1,0,0,0,-1},   {0,-1,0,1,-1,1,0,-1,1},  {0,-1,0,1,0,-1,0,1,0},   {0,-1,1,-1,0,1,-1,1,0},  {0,0,-1,0,0,0,1,0,0}} 

And I would like to find all linear dependencies with positive coefficients between them. I started with

ns = NullSpace[Transpose[vecs]]  

which gave me

{{2,2,-1,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,3},  {2,-1,2,0,-1,0,0,0,0,0,0,0,0,0,0,0,3,0},   {2,-1,-1,0,2,0,0,0,0,0,0,0,0,0,0,3,0,0},  {1,1,1,0,1,0,0,0,0,0,0,0,3,0,3,0,0,0},   {2,-1,-1,0,-1,0,3,0,0,0,0,0,0,3,0,0,0,0}, {-1,2,2,0,-1,0,0,0,0,0,0,3,0,0,0,0,0,0},   {-1,2,-1,0,2,0,0,0,0,0,3,0,0,0,0,0,0,0},  {-1,2,-1,0,-1,3,0,0,0,3,0,0,0,0,0,0,0,0},   {-1,-1,2,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0},  {-1,-1,-1,3,2,0,0,3,0,0,0,0,0,0,0,0,0,0}} 

so there is one linear dependence with nonnegative coefficients (the fourth one). To check whether there are others, I made a system of inequalities with

ineqs = Simplify[Union[Map[# >= 0 &, Table[x[k], {k, Length[ns]}].ns]]] 

which returns

{x[1]>=0,x[2]>=0,x[3]>=0,x[4]>=0,x[5]>=0,x[6]>=0,x[7]>=0,x[8]>=0,x[9]>=0,x[10]>=0,  2 x[1]+2 x[2]+2 x[3]+x[4]+2 x[5] >= x[6]+x[7]+x[8]+x[9]+x[10],  2 x[1]+x[4]+2 (x[6]+x[7]+x[8])   >= x[2]+x[3]+x[5]+x[9]+x[10],  2 x[2]+x[4]+2 (x[6]+x[9])        >= x[1]+x[3]+x[5]+x[7]+x[8]+x[10],  2 x[3]+x[4]+2 (x[7]+x[9]+x[10])  >= x[1]+x[2]+x[5]+x[6]+x[8]} 

but my notebook runs out of memory on both Solve[ineqs] and Reduce[ineqs].

What is the proper way?
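Not from the original post, and in Python/SciPy rather than Mathematica, but the check that the inequalities encode can also be posed as a linear-programming feasibility problem directly over nonnegative coefficients $c$ of the original vectors (instead of the null-space coordinates $x[k]$): find $c \geq 0$, not all zero, with $\sum_i c_i\,\text{vecs}_i = 0$, where the normalization $\sum_i c_i = 1$ rules out the trivial zero solution. A rough sketch:

    import numpy as np
    from scipy.optimize import linprog

    # The 18 vectors from the question, as rows of an 18 x 9 integer array.
    vecs = np.array([
        [0,1,0,0,0,0,0,-1,0],   [1,-1,1,0,0,0,-1,1,-1],  [1,0,-1,1,0,-1,1,0,-1],
        [1,0,-1,1,0,0,-1,0,1],  [1,0,0,-1,0,1,0,0,-1],   [1,0,0,-1,1,-1,1,-1,1],
        [1,0,0,0,-1,0,0,1,0],   [-1,0,1,0,0,-1,1,0,-1],  [-1,0,1,0,0,0,-1,0,1],
        [-1,1,-1,1,-1,1,0,0,-1],[-1,1,-1,1,0,-1,1,-1,1], [-1,1,0,-1,0,1,0,-1,1],
        [-1,1,0,-1,1,-1,0,1,0], [0,-1,0,0,1,0,0,0,-1],   [0,-1,0,1,-1,1,0,-1,1],
        [0,-1,0,1,0,-1,0,1,0],  [0,-1,1,-1,0,1,-1,1,0],  [0,0,-1,0,0,0,1,0,0],
    ])
    n, d = vecs.shape

    # Feasibility LP: find c >= 0 with vecs^T c = 0, and sum(c) = 1 to
    # exclude the trivial all-zero solution.
    res = linprog(
        c=np.zeros(n),
        A_eq=np.vstack([vecs.T, np.ones((1, n))]),
        b_eq=np.concatenate([np.zeros(d), [1.0]]),
        bounds=[(0, None)] * n,
    )
    print(res.status == 0)   # True => a nonnegative linear dependence exists
    print(res.x)             # one such coefficient vector (if feasible)

A feasible point only certifies that some nonnegative dependence exists (the fourth null-space vector above already gives one); enumerating all extreme positive dependencies amounts to listing the extreme rays of the cone $\{c \geq 0 : \sum_i c_i\,\text{vecs}_i = 0\}$, which is usually done with a dedicated ray-enumeration method (e.g. double description) rather than with Reduce.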

Support Vectors SVM

I have read somewhere that the value of the slack variables of support vectors is not 0. Does that mean that points lying in the wrong region, e.g. a positive point lying in the negative region, will also be support vectors? I have attached a picture as well, which shows that points lying in the wrong region are also support vectors. I am looking for an explanation of this phenomenon. The model has 12 support vectors, and the wrong point in the green region is also counted as a support vector!
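For what it's worth, the effect is easy to reproduce numerically: in a soft-margin SVM a misclassified training point necessarily violates its margin (slack greater than 1), which forces its Lagrange multiplier to be positive, so it must appear among the support vectors. A small sketch, not from the post, using scikit-learn on a hypothetical pair of overlapping Gaussian blobs:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Two overlapping classes, so some training points fall on the wrong side.
    X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    clf = SVC(kernel="linear", C=1.0).fit(X, y)

    misclassified = np.where(clf.predict(X) != y)[0]
    print("support vector indices:", clf.support_)
    print("misclassified indices: ", misclassified)
    # Every misclassified training point is also a support vector.
    print(set(misclassified).issubset(set(clf.support_)))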

What attack vectors does arbitrary JS on a user profile allow?

Consider a site for frontend devs/designers to host their portfolio apps – pages with arbitrary JS, each hosted on a user’s separate profile.

What attack vectors would that enable against the site? Some suggestions and comments:

  1. Defacing the site (user’s own profile, not interesting)
  2. Phishing (by rewriting the UI to ask for credentials while using the safe domain)
  3. Credential theft from logged-in users, by pulling auth cookies (irrelevant if the auth cookies are HTTP-only?)
  4. Request forgery (by triggering a POST request from within the approved domain)

Minimize the maximum inner product with vectors in a given set

Given a set $S$ of non-negative unit vectors in $\mathbb{R}_+^n$, find a non-negative unit vector $x$ such that the largest inner product of $x$ and a vector $v \in S$ is minimized. That is, $$\min_{x\in \mathbb{R}_+^n,\ \|x\|_2=1}\ \max_{v\in S}\ x^T v.$$

It seems like quite a fundamental problem in computational geometry. Has this problem been considered in the literature?

It can be formulated as an infinity-norm minimization problem, which can in turn be expressed as a quadratically constrained LP. If the rows of the matrix $A$ are the vectors in $S$, we seek $$\begin{aligned} \min_x\ &\|Ax\|_\infty \\ \text{s.t.}\ & x^T x = 1, \\ & x \geq 0. \end{aligned}$$ But the quadratic constraint is non-convex, so this is not very encouraging.
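Not an answer to the literature question, but for moderate sizes the non-convex program can at least be handed to a local solver in epigraph form. A rough sketch with SciPy's SLSQP (my own function and variable names; it only finds a local optimum, precisely because the constraint $x^Tx=1$ is non-convex):

    import numpy as np
    from scipy.optimize import minimize

    def minimax_direction(S):
        """Local solve of  min_{x >= 0, ||x||_2 = 1}  max_{v in S} v.x.

        Epigraph form: variables (x, t), minimize t subject to S x <= t,
        x >= 0 and x.x = 1.  S is an (m, n) array whose rows are the vectors.
        """
        m, n = S.shape
        z0 = np.concatenate([np.ones(n) / np.sqrt(n), [1.0]])  # feasible start
        cons = [
            {"type": "ineq", "fun": lambda z: z[-1] - S @ z[:-1]},    # S x <= t
            {"type": "eq",   "fun": lambda z: z[:-1] @ z[:-1] - 1.0}, # ||x|| = 1
        ]
        bounds = [(0.0, None)] * n + [(None, None)]
        res = minimize(lambda z: z[-1], z0, method="SLSQP",
                       bounds=bounds, constraints=cons)
        x = np.clip(res.x[:-1], 0.0, None)
        x /= np.linalg.norm(x)
        return x, float(np.max(S @ x))

    # Sanity check: for S = I_3 the optimum is x = (1,1,1)/sqrt(3), value 1/sqrt(3).
    print(minimax_direction(np.eye(3)))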

Joint typicality and distance between the vectors

In the book by Cover and Thomas, the authors say that

We first review the single-user Gaussian channel studied in Chapter 9. Here $Y = X + Z$. Choose a rate $R < \frac{1}{2}\log\!\left(1 + \frac{P}{N}\right)$. Fix a good $(2^{nR}, n)$ codebook of power $P$. Choose an index $w$ in the set $\{1, 2, \ldots, 2^{nR}\}$. Send the $w$th codeword $X(w)$ from the codebook generated above. The receiver observes $Y = X(w) + Z$ and then finds the index $\hat{w}$ of the codeword closest to $Y$. If $n$ is sufficiently large, the probability of error $\Pr(w \neq \hat{w})$ will be arbitrarily small. As can be seen from the definition of joint typicality, this minimum-distance decoding scheme is essentially equivalent to finding the codeword in the codebook that is jointly typical with the received vector $Y$.

I am unable to see mathematically how $(X^n, Y^n)$ being jointly weakly typical implies that the distance between $X^n$ and $Y^n$ is smaller than the distance to any other possible $X^n$ with which $Y^n$ is not jointly typical. The author proved the capacity result using weak joint typicality.

To be more exact, can someone please explain how vectors $(x_1^n, y^n)$ drawn i.i.d. from $P_{XY}$ and satisfying the first condition also satisfy the second: $$-\frac{1}{n}\log \Pr(x_1^n, y^n) \approx H(X,Y),$$ $$\operatorname{dist}(x_1^n, y^n) < \operatorname{dist}(x_k^n, y^n) \quad \forall\, x_k^n \neq x_1^n \text{ with } x_k^n \sim P_X.$$
The second condition is the minimum-distance condition; $\operatorname{dist}(\cdot)$ can be any valid distance measure, I guess.
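A quick numerical illustration of the missing step (my own, not from the book): for the transmitted codeword, $y^n - x_1^n = z^n$, so $\frac{1}{n}\|y^n - x_1^n\|^2 \approx N$, whereas for any codeword generated independently of $y^n$, $\frac{1}{n}\|y^n - x_k^n\|^2 \approx P + N$. Joint weak typicality captures exactly this concentration, which is why the jointly typical codeword is also the closest one with high probability. A sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    n, P, N = 100_000, 1.0, 0.5

    x_sent  = rng.normal(0.0, np.sqrt(P), n)   # transmitted codeword, power ~ P
    x_other = rng.normal(0.0, np.sqrt(P), n)   # an independent codeword
    z       = rng.normal(0.0, np.sqrt(N), n)   # channel noise
    y = x_sent + z

    # Per-dimension squared distances concentrate around N and P + N.
    print(np.mean((y - x_sent) ** 2))    # ~ N
    print(np.mean((y - x_other) ** 2))   # ~ P + N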

Algorithm for intersection point between two vectors

I’m trying to learn Computational Geometry and this formula isn’t obvious to me.
Hint: "cross" is related to the cross product of two vectors .

    // returns the intersection of the infinite lines ab and pq
    // (undefined if they are parallel)
    point intersect(const point &a, const point &b, const point &p, const point &q)
    {
        double d1 = cross(p - a, b - a);   // twice the signed area of triangle a p b
        double d2 = cross(q - a, b - a);   // twice the signed area of triangle a q b
        return (d1 * q - d2 * p) / (d1 - d2);
    }
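An added note on where the formula comes from (assuming the usual 2-D scalar cross product $\operatorname{cross}(u,v) = u_x v_y - u_y v_x$): $d_1 = \operatorname{cross}(p-a,\,b-a)$ and $d_2 = \operatorname{cross}(q-a,\,b-a)$ are twice the signed areas of the triangles $a p b$ and $a q b$, i.e. the signed distances of $p$ and $q$ from the line $ab$, scaled by the common factor $|b-a|$. Along the parametrized line $p + t(q-p)$ this signed quantity varies linearly as $(1-t)\,d_1 + t\,d_2$, and it vanishes at $t = d_1/(d_1 - d_2)$, giving
$$X = p + \frac{d_1}{d_1 - d_2}(q - p) = \frac{d_1\,q - d_2\,p}{d_1 - d_2},$$
which is exactly what the last line of the function computes (and why it breaks down when $d_1 = d_2$, i.e. when the lines are parallel).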

Find combination of vectors from array that sum up to s

I have an array of $n$ $m$-dimensional vectors (in my case, they're 27-dimensional). I also have an $m$-dimensional vector $s$. I want to find all combinations of $k$ vectors from my array whose vector sum is equal to $s$. How can I do this efficiently?

The best I could come up with is brute force, which is $O(n^k)$ and impossibly slow.

Any help is appreciated.
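One standard way to beat plain enumeration is meet in the middle: any $k$-subset, listed in increasing index order, splits uniquely into its first $\lceil k/2\rceil$ and last $\lfloor k/2\rfloor$ indices, so the two halves can be enumerated separately (roughly $O(n^{k/2})$ partial sums instead of $O(n^k)$ subsets) and joined through a hash table keyed by the partial sum. A rough sketch, with my own function and variable names, assuming the vectors are given as tuples or lists of numbers:

    from itertools import combinations
    from collections import defaultdict

    def vsum(vectors, idx, m):
        """Componentwise sum of vectors[i] for i in idx, as a tuple."""
        total = [0] * m
        for i in idx:
            for j, x in enumerate(vectors[i]):
                total[j] += x
        return tuple(total)

    def k_subsets_summing_to(vectors, k, s):
        """All k-index-subsets of `vectors` whose componentwise sum equals s."""
        n, m = len(vectors), len(s)
        h1, h2 = (k + 1) // 2, k // 2

        # Left halves (the first h1 indices of a subset), grouped by partial sum.
        left = defaultdict(list)          # partial sum -> list of index tuples
        for idx in combinations(range(n), h1):
            left[vsum(vectors, idx, m)].append(idx)

        target = tuple(s)
        results = []
        for ridx in combinations(range(n), h2):
            need = tuple(t - x for t, x in zip(target, vsum(vectors, ridx, m)))
            lo = ridx[0] if ridx else n   # left indices must precede right ones
            for lidx in left.get(need, []):
                if lidx[-1] < lo:
                    results.append(lidx + ridx)
        return results

    # Example: k_subsets_summing_to([(1,0),(0,1),(1,1),(2,0)], 2, (2,1))
    #          -> [(0, 2), (1, 3)]

The join step is output-sensitive: if many partial sums collide, the filtering by index order can still be expensive, but in the typical case this is a large improvement over $O(n^k)$.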

Possible attack vectors for a web site scraper

I’ve written a little utility that, given a web site address, goes and gets some metadata from the site. My ultimate goal here is to use this inside a web site that allows users to enter a site, and then this utility goes and gets some information: title, URL, and description.

I’m looking specifically at certain tags within the HTML, and I’m encoding the return data, so I believe I’ll be safe from XSS attacks. However, I wonder if there are any other attack vectors that this leaves me open to.

Subset of $k$ vectors with shortest sum, with respect to $\ell_\infty$ norm

I have a collection of $n$ vectors $x_1, \ldots, x_n \in \mathbb{R}_{\geq 0}^{d}$. Given these vectors and an integer $k$, I want to find the subset of $k$ vectors whose sum is shortest with respect to the uniform norm. That is, find the (possibly not unique) set $W^* \subset \{x_1, \ldots, x_n\}$ such that $\left| W^* \right| = k$ and

$$W^* = \arg\min\limits_{W \subset \{x_1, \ldots, x_n\} \,\land\, \left| W \right| = k} \left\lVert \sum\limits_{v \in W} v \right\rVert_{\infty}$$

The brute-force solution to this problem takes $O(dkn^k)$ operations: there are ${n \choose k} = O(n^k)$ subsets to test, and each one takes $O(dk)$ operations to compute the sum of the vectors and then find the uniform norm (in this case, just the maximum coordinate, since all vectors are non-negative).

My questions:

  1. Is there a better algorithm than brute force? Approximation algorithms are okay.

One idea I had was to consider a convex relaxation where we assign each vector a fractional weight in $[0, 1]$ and require that the weights sum to $k$. The resulting subset of $\mathbb{R}^d$ spanned by all such weighted combinations is indeed convex (see the LP sketch after this list). However, even if we can find the optimal weight vector, I am not sure how to use this set of weights to choose a subset of $k$ vectors. In other words, what integral rounding scheme should be used?

I have also thought about dynamic programming, but I'm not sure whether it would end up being faster in the worst case.

  2. Consider a variation where we want to find the optimal subset for every $k$ in $[n]$. Again, is there a better approach than solving the problem naively for each $k$? I think there ought to be a way to reuse information from runs on subsets of size $k$ for those of size $k + 1$, and so on.

  3. Consider the variation where, instead of a subset size $k$, one is given a target norm $r \in \mathbb{R}$. The task is to find the largest subset of $\{x_1, \ldots, x_n\}$ whose sum has uniform norm $\leq r$. In principle one would have to search over $O(2^n)$ subsets of the vectors. Do the algorithms change? Further, is the decision version of the problem (for example, asking whether there exists a subset of size $\geq k$ whose sum has uniform norm $\leq r$) NP-hard?
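Regarding the relaxation mentioned under question 1, here is roughly what it looks like as a linear program (a sketch with my own names, using SciPy; it returns the fractional weights and a lower bound on the optimal norm, and deliberately leaves the rounding step open, since that is the part the question asks about):

    import numpy as np
    from scipy.optimize import linprog

    def lp_relaxation(X, k):
        """LP relaxation of the min-l_inf k-subset problem.

        Variables: weights w in [0,1]^n with sum(w) = k, plus an epigraph
        variable t with (X^T w)_j <= t for every coordinate j; minimize t.
        X is an (n, d) array of non-negative vectors (one vector per row).
        """
        n, d = X.shape
        c = np.zeros(n + 1)
        c[-1] = 1.0                                    # minimize t
        A_ub = np.hstack([X.T, -np.ones((d, 1))])      # X^T w - t <= 0
        b_ub = np.zeros(d)
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
        b_eq = np.array([float(k)])                    # sum(w) = k
        bounds = [(0.0, 1.0)] * n + [(0.0, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:n], res.x[-1]   # fractional weights, relaxed objective value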

Addition of vectors defined over complex numbers

I want to do the following operation: given $k$ vectors $\{|v_i\rangle\}$, each of dimension $d$ ($k < d$), where $i$ denotes the vector's index, I want to add them. Symbolically, I want to perform the following operation:

$$|v\rangle = \sum_{i=1}^k c_i |v_i\rangle$$

In this case the $c_i$'s and $v_{ij}$'s are all complex numbers. I have defined a vector "Call" such that

Call = {c1, c2, ..., ck}
Vall = {v1, v2, v3, ..., vk}

Note that each vector $v_i$ is defined to be

vi = {vi1, vi2,...vid} 

So basically I want the final expression to be something like this

v = {c1 v11 + c2 v21 + ... + ck vk1, c1 v12 + c2 v22 + ... + ck vk2, ..., c1 v1d + c2 v2d + ... + ck vkd}

How do I implement this?

Additional detail: In my case, $k$ is 104 and $d$ is 256.
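For what it's worth, the whole operation is just the contraction $v_j = \sum_{i=1}^{k} c_i\, v_{ij}$, i.e. a vector-matrix product of Call with Vall (in Mathematica the analogous one-liner would be Call.Vall). A small sketch of the same computation in Python/NumPy, with hypothetical stand-ins for the data:

    import numpy as np

    k, d = 104, 256
    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for Call (length k) and Vall (k x d), both complex.
    Call = rng.normal(size=k) + 1j * rng.normal(size=k)
    Vall = rng.normal(size=(k, d)) + 1j * rng.normal(size=(k, d))

    v = Call @ Vall          # v[j] = sum_i Call[i] * Vall[i, j]
    print(v.shape)           # (256,)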