## Minimum vertex cover with minimum overlap of one side

In a bipartite graph $$(X+Y,E)$$, a minimum vertex cover can be found in polynomial time, by Kőnig's theorem.

Suppose the minimum size is $$s$$. Is there an efficient algorithm for finding a vertex cover $$S$$ of size $$s$$ such that $$S\cap X$$ has minimum cardinality?

What I could do so far is to formulate this problem as an ILP, where each vertex $$u\in X$$ has a variable $$x_u$$ and each vertex $$v\in Y$$ has a variable $$y_v$$. Its LP relaxation is:

\begin{align} \text{minimize} \quad & \sum_{u\in X} x_u && \\ \text{subject to} \quad & x_u + y_v \geq 1 && \forall (u,v)\in E \\ & \sum_{u\in X} x_u + \sum_{v\in Y} y_v = s && \\ & 0 \leq x_u \leq 1,\ 0 \leq y_v \leq 1 && \forall u\in X,\ v\in Y \end{align}

But I do not know how to check whether this LP relaxation has an integral optimal solution.

Is there a better approach to this problem?
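Not an answer to the complexity question, but for sanity-checking any candidate algorithm on tiny instances, a brute-force Python sketch (exponential, so only usable for a handful of vertices; the input format is my own assumption):

```python
from itertools import combinations

def min_cover_min_overlap(X, Y, E):
    # Exhaustive search: try subsets in increasing size; among the smallest
    # vertex covers, return one minimizing |S ∩ X|.  Exponential time --
    # intended only for verifying answers on tiny bipartite graphs.
    V = list(X) + list(Y)
    Xset = set(X)
    for r in range(len(V) + 1):
        covers = [set(c) for c in combinations(V, r)
                  if all(u in c or v in c for u, v in E)]
        if covers:
            return min(covers, key=lambda S: len(S & Xset))
    return set()
```

For example, with `X = ['x1', 'x2']`, `Y = ['y1', 'y2']` and edges `[('x1','y1'), ('x2','y1'), ('x2','y2')]`, the minimum cover size is 2, and among the size-2 covers `{'y1','y2'}` achieves overlap 0 with `X`.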

## Minimum 1-D finite pavement to fit in varying-length K bricks

Suppose you have a set of $$k$$ bricks, each of varying size. We want to fit all these bricks one by one on a straight pavement of length $$N$$. We know the sizes of the bricks, but we do not know

a) the order in which they are placed, and

b) the positions on the $$N$$-length pavement at which they will be placed.

What is the minimum length $$N$$ such that we can place any of the $$k$$ bricks in any order and at any position?

## The minimum of the reciprocals of some Poisson random variables

Let $$X_1,\dots,X_k$$ denote a collection of independent samples of a Poisson random variable whose mean also happens to be equal to $$k$$. Does the quantity $$k\boldsymbol{E}\min\left\{ \frac{1}{1+X_{1}},\dots,\frac{1}{1+X_{k}}\right\}$$ have a strictly positive limit as $$k$$ becomes large?
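Not a proof, but the limit can be probed numerically. Note that $$\min_i \frac{1}{1+X_i} = \frac{1}{1+\max_i X_i}$$, so only the maximum of the $$k$$ samples matters. A Monte Carlo sketch using a simple (slow for large mean) Poisson sampler, with all function names my own:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method -- O(lam) work per sample,
    # fine for moderate lam.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def estimate(k, trials=300, seed=0):
    # Monte Carlo estimate of k * E[min_i 1/(1+X_i)] with X_i ~ Poisson(k).
    # Uses min_i 1/(1+X_i) = 1/(1+max_i X_i).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        m = max(sample_poisson(k, rng) for _ in range(k))
        total += 1.0 / (1.0 + m)
    return k * total / trials
```

Plotting `estimate(k)` for growing `k` should suggest whether the quantity stabilizes at a positive value or decays.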

## Integer linear programming can’t find global minimum

I was working on Project Euler Problem 418 using Mathematica and ran into trouble.

I wrote a function to find the unique factorization triple $$(a, b, c)$$, with $$abc = n$$ and $$a \le b \le c$$, which minimizes $$c/a$$ for a given integer $$n$$.

```mathematica
FactorizationTriple[n_] :=
 Block[{factor, base, exponent, l, e, varmat, vars, cons1, cons2, cons3, solve, a, b, c},
  factor = FactorInteger[n];
  base = factor[[All, 1]];
  exponent = factor[[All, 2]];
  l = Length[factor];
  varmat = Table[e[i, j], {i, 1, 3}, {j, 1, l}];
  vars = Flatten[varmat];
  cons1 = Thread[0 <= vars];
  cons2 = Thread[exponent == Total[varmat, {1}]];
  cons3 = {varmat[[1]].N[Log[base]] <= varmat[[2]].N[Log[base]] <= varmat[[3]].N[Log[base]]};
  solve = FindMinimum[{varmat[[3]].N[Log[base]] - varmat[[1]].N[Log[base]],
     Join[cons1, cons2, cons3], Element[vars, Integers]}, vars];
  a = Times @@ (Power @@@ ({base, Table[e[1, j], {j, 1, l}] /. solve[[2]]}\[Transpose]));
  b = Times @@ (Power @@@ ({base, Table[e[2, j], {j, 1, l}] /. solve[[2]]}\[Transpose]));
  c = Times @@ (Power @@@ ({base, Table[e[3, j], {j, 1, l}] /. solve[[2]]}\[Transpose]));
  {a, b, c}]
```

Since both f and cons in FindMinimum are linear, I thought it would use Method -> "LinearProgramming" and I expected it to return a global minimum.

FactorizationTriple does work when n = 165, 100100, or 20!, but it can't give me the correct answer when n = 43!:

```mathematica
AbsoluteTiming[FactorizationTriple[43!]]
(* {1044.17, {392385912744443904, 392388272221065120, 392389380337500000}} *)
```

The correct answer is {a, b, c} = {392386762388275200, 392388272221065120, 392388530688000000}.

Questions:

1. Should I use NMinimize or LinearProgramming instead? (I made some attempts but failed.)
2. How should I set the options in FindMinimum?
3. How to improve the efficiency of FactorizationTriple? (It’s too slow now.)
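As a cross-check on small inputs (this is not the approach above, and it is only feasible when n has few divisors), one can brute-force all triples directly; a Python sketch:

```python
def factorization_triple(n):
    # Enumerate all (a, b, c) with a <= b <= c and a*b*c == n,
    # keeping the triple minimizing c/a (compared via cross-multiplication
    # to stay in exact integer arithmetic).
    best = None
    a = 1
    while a * a * a <= n:
        if n % a == 0:
            m = n // a
            b = a
            while b * b <= m:
                if m % b == 0:
                    c = m // b  # c >= b is automatic since b*b <= m
                    if best is None or c * best[0] < best[2] * a:
                        best = (a, b, c)
                b += 1
        a += 1
    return best
```

For n = 165 this returns (3, 5, 11), matching the case where FactorizationTriple is reported to work; it is far too slow for 43!, but useful for validating small cases.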

## Minimum number of scenarios to run to fully test a page

I run a testing application that, using Selenium, runs testing “scenarios”, each of which visits a bunch of pages on a website. Each page is modeled by a Java file that contains locators, each of which is used to locate certain elements on the page, perhaps to verify that they exist or, beyond that, that they work as intended. Some elements may always show up, and some may only show up depending on the scenario you are running. Let’s say the page we would like to test is the “login” page, and we are talking about all the scenarios that visit that page.

Bear in mind, some scenarios may visit the login page, but only a small subset of those scenarios are actually intended to test the login page.


Each scenario is intended to test a different overall functionality on the website, but as far as the login portion of each scenario is concerned, multiple scenarios might be totally redundant (only one of those redundant scenarios needs to be run to achieve the exact same level of interaction). Let’s say one set of redundant scenarios only minimally enters a username/password and then clicks the log-in button. For those redundant scenarios, our Java page always has the exact same set of locators locating the same elements and failing to locate the same elements.

Let’s say 100 of your scenarios visit the login page. Let’s also say you ran all 100 scenarios to fully interact with that page, but only 10 of them needed to be run to achieve the same thing, i.e. only 10 needed to be run to fully test the page.

Let’s say each locator in the Java file modelling the login page is marked with the scenarios where it successfully located an element on the page. Each locator could have anywhere from 1 to 100 scenarios tagged to it.

Having already run the 100 scenarios, how do you go about knowing what those 10 scenarios are for future benefit? What sort of algorithm would achieve this? Thanks!
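What you are describing is the classic set-cover problem (choose the fewest scenarios whose tagged locators cover every locator), which is NP-hard in general; the standard greedy heuristic gives a ln(n)-approximation and is usually good enough in practice. A sketch, assuming the locator-to-scenarios tagging is available as a dict:

```python
def greedy_scenario_selection(locator_tags):
    # locator_tags: locator name -> set of scenarios that located it.
    # Greedy set cover: repeatedly pick the scenario that covers the most
    # still-uncovered locators (the standard ln(n)-approximation).
    coverage = {}  # invert: scenario -> set of locators it exercises
    for loc, scens in locator_tags.items():
        for s in scens:
            coverage.setdefault(s, set()).add(loc)
    uncovered = set(locator_tags)
    chosen = []
    while uncovered:
        # sorted() just makes tie-breaking deterministic
        s = max(sorted(coverage), key=lambda sc: len(coverage[sc] & uncovered))
        gained = coverage[s] & uncovered
        if not gained:
            break  # remaining locators were never located by any scenario
        chosen.append(s)
        uncovered -= gained
    return chosen
```

For example, if three scenarios all exercise the username/password/login locators but only "B" hits the error banner and only "C"/"D" hit a "remember me" box, the greedy pass picks two scenarios rather than all of them. An exact minimum (your "10") would require an ILP or exhaustive search over scenario subsets.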

## Only allow to proceed to checkout if a minimum order total for specific products is reached

I have configurable products where users can select different colors. There are many RAL colors which the users can choose. These colors have RAL in the name, so it is easy to identify them, e.g. “White (RAL 9016)”

But a user should only be allowed to order a product with a RAL color if the total value of all products for which he selected a RAL color is >= 200€. This applies even if the other products in his cart already total more than 200€.

If he goes to his cart and presses the “go to checkout” button while the RAL products in his cart total less than 200€, a warning should be shown.

The warning should say, for example: “The total sum of your RAL products is 180€; you need at least 200€ worth of RAL products in your cart to proceed. Do you want to pay the extra 20€ and proceed?”

So the user should be asked whether he wants to pay the difference between his total RAL product sum and 200€ (in this example, 20€). If he clicks yes, 20€ should be added to the total sum and he can proceed to checkout and order.

I hope someone can help me to solve this or give me tips.
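The Magento wiring (cart validation, checkout plugin/observer) is the platform-specific part, but the rule itself is simple. A platform-agnostic sketch of the check, where the `(product_name, line_total)` cart format and the function name are my own assumptions:

```python
def ral_surcharge(cart_items, minimum=200.0):
    # cart_items: list of (product_name, line_total) pairs -- hypothetical
    # format.  RAL products are identified by "RAL" in the name, as in the
    # question ("White (RAL 9016)").
    ral_total = sum(total for name, total in cart_items if "RAL" in name)
    if ral_total == 0 or ral_total >= minimum:
        return None  # no RAL products, or minimum met: no warning needed
    return minimum - ral_total  # difference the user is offered to pay
```

If the function returns a number, show the warning with that amount; if the user accepts, add it to the order total as a surcharge line before allowing checkout.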

## What is the minimum length for a UUID?

From general observation, the standard length appears to be 36. I was looking to incorporate a UUID in my URLs but didn’t want it that long. Is there a minimum length at which I can still keep the UUID unique?
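For context: a UUID is a 128-bit value, and the 36-character form is just its canonical hex-with-dashes text representation. One common way to shorten it without losing any uniqueness is to re-encode the same 16 bytes more densely, e.g. as URL-safe base64 (a sketch; the function name is my own):

```python
import base64
import uuid

def short_uuid():
    # Same 128 bits of randomness as a normal UUIDv4, but re-encoded as
    # URL-safe base64: 22 characters instead of 36, with padding stripped.
    u = uuid.uuid4()
    return base64.urlsafe_b64encode(u.bytes).rstrip(b"=").decode("ascii")
```

The original UUID can be recovered by re-adding the `==` padding and decoding, so nothing is lost; truncating the UUID's characters instead would genuinely reduce uniqueness.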

## How to find efficiently the minimum modification to avoid close consecutive numbers?

I have an array of sorted numbers:

```python
arr = [-0.1, 0.0, 0.5, 0.8, 1.2]
```

I want the difference (dist below) between consecutive numbers for that array to be above or equal a given threshold. For example, if threshold is 0.25:

```python
dist = [0.1, 0.5, 0.3, 0.4]  # must be >= 0.25 for all elements
```

arr[0] and arr[1] are too close to each other, so one of them must be modified. In this case the desired array would be:

```python
valid_array = [-0.25, 0.0, 0.5, 0.8, 1.2]  # all consecutive distances >= threshold
```

In order to obtain valid_array, I want to modify the minimum number of elements in arr. So I subtract 0.15 from arr[0] rather than, say, subtracting 0.1 from arr[0] and adding 0.05 to arr[1]:

```python
[-0.2, 0.05, 0.5, 0.8, 1.2]
```

The previous array is also valid, but we have modified two elements rather than one. I already have a brute-force solution that obtains valid_array and works fine, but it is quite slow for large arrays. My questions are:

What is the time complexity of that brute force solution?

Does a more efficient algorithm even exist?

# Edit

First, I need to clarify what I mean by difference, which I define the same way as here: out[n] = a[n+1] - a[n]. The fact that all elements of that difference must be above (or equal to) the threshold implies that valid_array is also sorted.

Second, the number of modifications (which must be minimized) is obtained by comparing the original arr and valid_array element-wise.
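One approach I believe works (assuming modified elements may take any real value): an index set can stay unchanged exactly when every kept pair i < j satisfies arr[j] - arr[i] >= (j - i) * threshold, since the elements between them must each exceed their predecessor by at least the threshold. That condition says b[i] = arr[i] - i * threshold is non-decreasing over the kept indices, so the minimum number of modifications is n minus the longest non-decreasing subsequence of b, computable in O(n log n):

```python
from bisect import bisect_right

def min_modifications(arr, threshold):
    # Elements kept unchanged must satisfy arr[j] - arr[i] >= (j-i)*threshold
    # for kept i < j, i.e. b[i] = arr[i] - i*threshold is non-decreasing on
    # the kept indices.  Maximize kept elements = longest non-decreasing
    # subsequence of b (patience-sorting / tails technique).
    b = [x - i * threshold for i, x in enumerate(arr)]
    tails = []  # tails[k] = smallest tail of a non-decreasing subseq of length k+1
    for v in b:
        pos = bisect_right(tails, v)
        if pos == len(tails):
            tails.append(v)
        else:
            tails[pos] = v
    return len(arr) - len(tails)
```

On the example above, `min_modifications([-0.1, 0.0, 0.5, 0.8, 1.2], 0.25)` gives 1, matching the single change made to obtain valid_array.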

## Removing max number of edges while keeping minimum distances

Suppose we have an undirected graph with vertices 1 to n. Vertex 1 is the starting point, and there is a path from 1 to every other vertex. Each edge has a positive weight, and there are two types of edges, black and red. Black edges have the form (1, x) for some vertex x, while red edges can be any pair (x, y). My question is: how can I find the maximum number of black edges I can remove so that the minimal distance from 1 to every other vertex is preserved?
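A sketch of one standard approach, assuming the graph is connected as stated: run Dijkstra from vertex 1 over all edges. A black edge (1, x) of weight w is removable if w exceeds dist(x), if dist(x) is also achieved through a tight red edge, or if it duplicates another tight black edge to x; the edge-list input format here is my own assumption.

```python
import heapq

def max_removable_black_edges(n, black, red):
    # black: list of (x, w) edges from vertex 1; red: list of (x, y, w).
    adj = {v: [] for v in range(1, n + 1)}
    for x, w in black:
        adj[1].append((x, w))
        adj[x].append((1, w))
    for x, y, w in red:
        adj[x].append((y, w))
        adj[y].append((x, w))
    # Dijkstra from vertex 1 (assumes every vertex is reachable)
    dist = {v: float("inf") for v in range(1, n + 1)}
    dist[1] = 0
    pq = [(0, 1)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    # alt: vertices whose distance is also achieved via a tight red edge;
    # removing their black edge cannot change any distance (induction on
    # increasing dist, since positive weights make the red endpoint closer).
    alt = set()
    for x, y, w in red:
        if dist[x] + w == dist[y]:
            alt.add(y)
        if dist[y] + w == dist[x]:
            alt.add(x)
    removable = 0
    kept = set()  # at most one necessary black edge per vertex
    for x, w in black:
        if w > dist[x] or x in alt or x in kept:
            removable += 1
        else:
            kept.add(x)
    return removable
```

For instance, with black edges (1,2) of weight 3 and (1,3) of weight 4, plus a red edge (2,3) of weight 1, the path 1→2→3 also has length 4, so the black edge to 3 can be removed.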