Minimum spanning tree formulation

Can any expert explain the reasoning behind the constraint in the following formulation of the minimum spanning tree?

To formulate the minimum-cost spanning tree (MST) problem as an LP, we associate a variable $x_e$ with every edge $e \in E$. Each spanning tree $T$ corresponds to its incidence vector $x^T$, defined by $x^T_e = 1$ if $T$ contains $e$ and $x^T_e = 0$ otherwise. Let $\Pi$ denote the set of all partitions of the vertex set $V$, and suppose that $\pi \in \Pi$. The rank $r(\pi)$ of $\pi$ is the number of parts of $\pi$. Let $E_\pi$ denote the set of edges whose ends lie in different parts of $\pi$. Consider the following LP:
$$\begin{align*} \min\ &\sum_{e \in E} c_e x_e \\ \text{s.t. } &\sum_{e \in E_\pi} x_e \geq r(\pi) - 1 \quad \forall \pi \in \Pi, \\ & x \geq 0. \end{align*}$$
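As a tiny sanity check of the constraint (my own toy example, not from the question): on the triangle graph $K_3$, every spanning-tree incidence vector satisfies the partition constraints, because removing the edges that stay inside the parts of $\pi$ splits a tree into at least $r(\pi)$ pieces, so at least $r(\pi)-1$ tree edges must cross $\pi$.

```python
from itertools import combinations

# Check: on K3, every spanning-tree incidence vector is feasible
# for the partition LP above (sum of crossing edges >= r(pi) - 1).

V = [0, 1, 2]
E = [(0, 1), (0, 2), (1, 2)]

def partitions(elems):
    """Enumerate all set partitions of a list of vertices."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):        # put `first` into an existing block
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]            # or into a new singleton block

def spans(edges):
    """True iff `edges` is a spanning tree of (V, E)."""
    if len(edges) != len(V) - 1:
        return False
    reached, changed = {edges[0][0]}, True
    while changed:
        changed = False
        for a, b in edges:
            if (a in reached) != (b in reached):
                reached |= {a, b}
                changed = True
    return reached == set(V)

trees = [T for T in combinations(E, len(V) - 1) if spans(T)]
crosses = lambda e, pi: all(not (e[0] in B and e[1] in B) for B in pi)
ok = all(
    sum(1 for e in T if crosses(e, pi)) >= len(pi) - 1
    for T in trees
    for pi in partitions(V)
)
print(ok)  # True: every tree is feasible for the partition LP
```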

Is the formulation of the dual to the Kantorovich problem unique?

Lots of definitions follow; the question is further down.

Let $\Gamma(\mu_1,\mu_2)$ be the set
$$\Gamma(\mu_1,\mu_2) = \{ \gamma \text{ probability measure on } \mathbb{R}^d \times \mathbb{R}^d \text{ with } \mu_1 \text{ and } \mu_2 \text{ as marginals}\}.$$

The primal Kantorovich problem is to find $\gamma \in \Gamma(\mu_1, \mu_2)$ minimizing the functional
$$C_K[\gamma] = \int_{\mathbb{R}^d \times \mathbb{R}^d} c(x,y)\, \gamma(dx,dy),$$
i.e. to compute $\inf_{\gamma \in \Gamma(\mu_1,\mu_2)} C_K[\gamma]$, where the cost $c: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ has some nice properties (lower semicontinuity and other good stuff).

The way this problem is formed, it is a linear functional minimized over a convex set, so duality theory from convex analysis can be used to analyze the existence and uniqueness of its minimizer. I am reading a set of notes that uses the following procedure to obtain a formulation of the dual problem:

Define an indicator function
$$\mathbf{1}(\gamma) = \begin{cases} 0 & \gamma \in \Gamma(\mu_1,\mu_2), \\ +\infty & \gamma \notin \Gamma(\mu_1,\mu_2), \end{cases}$$
and then express this indicator as a supremum over $C_0(\mathbb{R}^d) \times C_0(\mathbb{R}^d)$. To do this, write
$$\mathbf{1}(\gamma) = \sup_{(u,v) \in C_0(\mathbb{R}^d) \times C_0(\mathbb{R}^d)} L_\gamma(u,v),$$
where
$$L_\gamma(u,v) = \int_{\mathbb{R}^d} u(x)\, d\mu_1(x) - \int_{\mathbb{R}^d \times \mathbb{R}^d} u(x)\, \gamma(dx,dy) + \int_{\mathbb{R}^d} v(y)\, d\mu_2(y) - \int_{\mathbb{R}^d \times \mathbb{R}^d} v(y)\, \gamma(dx,dy).$$
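To see why this supremum recovers the indicator (a standard observation, sketched here): $L_\gamma$ is linear in $(u,v)$ and vanishes at $(0,0)$. If $\gamma$ has the prescribed marginals, all four terms cancel pairwise and the supremum is $0$; if some marginal condition fails, there is a pair $(u,v)$ with $L_\gamma(u,v) > 0$, and replacing it by $(\lambda u, \lambda v)$ with $\lambda \to +\infty$ drives $L_\gamma$ to $+\infty$. Hence

$$\sup_{(u,v)} L_\gamma(u,v) = \begin{cases} 0 & \text{if } \gamma \in \Gamma(\mu_1,\mu_2), \\ +\infty & \text{otherwise}. \end{cases}$$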

My question:

Presumably, there are many ways to express this indicator function. Why should we choose $C_0(\mathbb{R}^d) \times C_0(\mathbb{R}^d)$ as the set over which to take the supremum? In particular, had I chosen a different set, I would obtain a different dual problem, right? Does this mean a given primal problem has many dual problems?

Linear mathematical formulation of a congestion game

I need to implement the following congestion game in AMPL:

Let $ J=\{1,2,3,4\}$ be a set of jobs and let $ M=\{1,2,3\}$ be a set of machines.

  • Job 1 can be solved by machines 1 and 2.
  • Job 2 can be solved by machines 1 and 3.
  • Job 3 can be solved by machines 2 and 3.
  • Job 4 can be solved by machines 2 and 3.

Each job uses only one machine.

The time required by each machine to solve its jobs depends on its congestion (namely, the number of distinct jobs assigned to the machine) as follows:

  • Machine 1: $ time_1=[1,3,5,7]$
  • Machine 2: $ time_2=[2,4,6,8]$
  • Machine 3: $ time_3=[3,4,5,6]$

The element $ i$ of the vector $ time_m$ is equal to the time required by machine $ m$ given a congestion equal to $ i$ .

[e.g.] if machine 1 solves jobs 1 and 2, so that the congestion of machine 1 equals 2, then solving these jobs (1 and 2) takes time $time_1(2)=3$ (indexing starts from 1).

The aim is to solve all the jobs so that the maximum time over the machines is minimized.

The implementation problem is that I would need to use the variable "congestion" as an index, and that's not allowed in AMPL.

Is there any way to implement this game as a linear problem in AMPL? What is the mathematical linear-programming formulation that avoids using a variable as an index into the "time matrix" (the matrix whose rows are the vectors $time_m$)?
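One standard way to linearize the variable index (a sketch, not tested AMPL): introduce binaries $y_{mk} = 1$ iff machine $m$ ends up with congestion exactly $k$, with $\sum_k y_{mk} = 1$, $\sum_k k\, y_{mk} = \sum_j x_{jm}$, and $T \ge \sum_k time_m(k)\, y_{mk}$ for every machine $m$, minimizing $T$. Since the data are small, the following brute force (my own verification script, not the AMPL model) confirms what the optimal makespan of this instance is:

```python
from itertools import product

# Brute-force the 4-job instance: every job picks one allowed machine,
# a machine with congestion k takes time[m][k-1], minimize the max time.
allowed = {1: (1, 2), 2: (1, 3), 3: (2, 3), 4: (2, 3)}      # job -> usable machines
time = {1: [1, 3, 5, 7], 2: [2, 4, 6, 8], 3: [3, 4, 5, 6]}  # entry k-1 = congestion k

best = min(
    max(time[m][sum(1 for a in assign if a == m) - 1] for m in set(assign))
    for assign in product(*allowed.values())
)
print(best)  # 3: e.g. jobs 1 and 2 on machine 1, job 3 on machine 2, job 4 on machine 3
```

Any solver result from the $y_{mk}$ linearization should match this enumerated optimum.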

Linear system formulation for a PDE with Neumann boundary condition

A PDE with a Dirichlet boundary condition can be written as a linear system:

$ Au(x)=f(x); \ \forall x \in \Omega$ ,

s.t. $ u(x)=g(x); \ \forall x \in \Gamma$ .

This can be solved, for instance, using the Jacobi method with a projection onto the boundary values at every iteration. In this case $A$ is independent of the domain $\Omega$ and its boundary $\Gamma$. What is the equivalent projection-based linear system for a Neumann boundary condition,

$\frac{\partial u}{\partial n}(x) = g(x); \ \forall x \in \Gamma$?
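A 1-D sketch of how the same projection idea extends to a Neumann condition (all choices here are my own assumptions, not from the question: Poisson problem $u'' = f$ on $[0,1]$ with $f = 0$, Dirichlet $u(0)=0$ on the left, Neumann $u'(1)=1$ on the right, so the exact solution is $u(x)=x$). The interior sweep is plain Jacobi; the Dirichlet end is projected onto its value, while the Neumann end is projected via a one-sided difference $u_{n-1} = u_{n-2} + h\,g$:

```python
# Jacobi with boundary projection: Dirichlet value imposed directly,
# Neumann value imposed through a first-order one-sided difference.
n, iters = 21, 20000
h = 1.0 / (n - 1)
u = [0.0] * n

for _ in range(iters):
    new = u[:]
    for i in range(1, n - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])   # Jacobi sweep, f = 0
    new[0] = 0.0                 # Dirichlet projection: u(0) = 0
    new[-1] = new[-2] + h * 1.0  # Neumann projection: u'(1) = 1
    u = new

err = max(abs(u[i] - i * h) for i in range(n))
print(err)  # converges to the exact solution u(x) = x
```

The interior matrix stays the same as in the Dirichlet case; only the projection step at the boundary changes.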

Traveling Salesman Problem with profit and time limit as ILP formulation

How to formulate the following problem?

The salesman gains a profit $p_{i}$ when visiting city $i$; a trip between city $i$ and city $j$ costs $c_{ij}$ and takes time $t_{ij}$. The trip must not exceed a time limit $T$. The difference between profit and costs must be maximized.

I’ve formulated this problem as follows:

maximize $\sum_{i=1}^{n} \sum_{j=1}^{n} p_{i}x_{ij} - \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij}x_{ij}$

Can I formulate the trip time as $\sum_{i=1}^{n} \sum_{j=1}^{n} t_{ij}x_{ij}$ and add the constraint $\sum_{i=1}^{n} \sum_{j=1}^{n} t_{ij}x_{ij} \leqslant T$?

I've also read about the Time-Dependent TSP (TDTSP), but I'm not sure it applies here…
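For what it's worth, here is one way to complete such a model into a full ILP. This is a sketch under my own assumptions (a single tour through a fixed depot, city $1$; visit indicators $y_i$; MTZ order variables $u_i$ for subtour elimination), not a canonical formulation:

$$\begin{align*} \max\ & \sum_{i=1}^{n} p_i y_i - \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij} x_{ij} \\ \text{s.t. } & \sum_{j=1}^{n} x_{ij} = \sum_{j=1}^{n} x_{ji} = y_i \quad \forall i, \qquad y_1 = 1, \\ & \sum_{i=1}^{n}\sum_{j=1}^{n} t_{ij} x_{ij} \leq T, \\ & u_i - u_j + n\, x_{ij} \leq n - 1 \quad \forall i \neq j,\ i,j \geq 2, \\ & x_{ij},\, y_i \in \{0,1\}. \end{align*}$$

Variants of this model appear in the literature under names such as TSPs with profits or the profitable tour problem, which may be closer to this setting than the TDTSP.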

Mixed integer formulation of union of polytopes?

Given $t$ different unbounded polyhedra $P_1:A^{(1)}x^{(1)}\leq b^{(1)},\dots,P_t:A^{(t)}x^{(t)}\leq b^{(t)}$, we are looking for a representation of $\bigcup_{i=1}^tP_i$ (not of their convex hull) as a mixed-integer program.

  1. What is the standard way to do this with mixed-integer programming using the fewest additional integer variables?

  2. When is it possible to do this with only $O(t^\alpha)$ additional integer variables, where $\alpha\in(0,1)$?

I found a survey which says this can be done with $O(\log t)$ integer variables if the polyhedra have a common recession cone.
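For reference, the textbook disjunctive (big-M) model uses one binary per polyhedron; this sketch assumes a valid bound $M$ exists, which is itself a restriction when the pieces are unbounded:

$$A^{(i)} x \leq b^{(i)} + M(1 - z_i)\mathbf{1} \quad \forall i, \qquad \sum_{i=1}^{t} z_i = 1, \qquad z_i \in \{0,1\}.$$

The big-M-free alternative is the extended (Balas-type) formulation with copies $x = \sum_{i} x^{(i)}$ and $A^{(i)} x^{(i)} \leq b^{(i)} z_i$; for unbounded polyhedra its validity hinges precisely on recession-cone conditions of the kind your survey mentions.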

Averaging for the Salzborn formulation of the union-closed set conjecture

I read in this thesis about Frankl’s conjecture on page 44 that the Salzborn formulation (see version ii. on that page) can be expressed as:

For every union-closed family $ \mathcal{F}$ with $ \vert\mathcal{F}\vert = n$ and $ \emptyset \in \mathcal{F}$ there exists a $ \cup$ -irreducible set $ M$ of $ \mathcal{F}$ such that $ \vert\mathcal{F}_{\not\supseteq M}\vert \ge \frac{n}{2}$ .

Thus if $ \mathcal{F}$ does not satisfy the conjecture then:

$$\sum_{M \in \mathit{J}(\mathcal{F})} \vert\mathcal{F}_{\not\supseteq M}\vert \lt \frac{n\,\vert\mathit{J}(\mathcal{F})\vert}{2} \tag{1}$$

where $ \mathit{J}(\mathcal{F})$ is the family of basis sets.

I tried some examples but could not find any that verify $(1)$. I know the averaging argument over elements fails for the usual formulation of the conjecture, but I would like a way to construct examples satisfying $(1)$.
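A small computational sketch for running such experiments (the generating sets below are an arbitrary choice of mine, not from the thesis): close a family under unions, extract the $\cup$-irreducible members $\mathit{J}(\mathcal{F})$ (those not expressible as a union of two other members), and evaluate both sides of $(1)$:

```python
from itertools import combinations

def union_closure(gens):
    """Smallest union-closed family containing the generators and the empty set."""
    fam = {frozenset()} | {frozenset(g) for g in gens}
    while True:
        new = {a | b for a in fam for b in fam} - fam
        if not new:
            return fam
        fam |= new

fam = union_closure([{1}, {2}, {1, 2, 3}])
irreducible = [
    m for m in fam if m and not any(
        a | b == m for a, b in combinations(fam - {m}, 2))
]
# lhs = sum over M in J(F) of |F_{not superset of M}|; rhs = n |J(F)| / 2
lhs = sum(sum(1 for f in fam if not f >= m) for m in irreducible)
rhs = len(fam) * len(irreducible) / 2
print(lhs, rhs, lhs < rhs)  # (1) holds only if lhs < rhs
```

On this example the family is $\{\emptyset,\{1\},\{2\},\{1,2\},\{1,2,3\}\}$ with three irreducibles, and the left side comes out at least as large as the right, consistent with the conjecture.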

Deformation gradient conservation law from Lagrangian to Eulerian formulation

In the following, I use the standard notation for (solid) mechanics and conservation laws, i.e. $F$ the deformation gradient, $H$ the cofactor, $v$ the velocity field and $J$ the Jacobian. Moreover, $X$ denotes the initial configuration, where functions and fields are written with a bar, and $x$ the mapped (current) configuration.

The rate of deformation gradient can be expressed as: $ \frac{\partial F}{\partial t} = \nabla \bar{v} $ .

Now I know how to derive the total (TLF) $\frac{d}{dt}\int_V F \; dV = \int_{\partial V} \bar{v}\otimes N \; dA$ and updated (ULF) $\int_{v(t)} J^{-1} \left.\frac{\partial F}{\partial t}\right\vert_{X} \; dv = \int_{v(t)} \nabla\cdot (v\otimes H)\, dv$ Lagrangian formulations.

I would like to get, if it exists, the Eulerian formulation of the above, as well as the ALE and Total ALE formulation of it.

To do so, I tried to start from the TLF: I switch to the current configuration $v(t)$ using $J$ and apply the Reynolds transport theorem to remove the total time derivative from the integral and obtain the convective term. I end up with:

$\int_{v(t)} \frac{\partial (J^{-1}F)}{\partial t} + \nabla\cdot(J^{-1}F\otimes v) \; dv = \int_{v(t)} \nabla\cdot (v\otimes H^{-1}) \; dv$

Is the mathematics correct? And can I reach these formulations? I should add that I prefer integral forms.
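For reference, these are the two standard identities the derivation relies on, stated per Cartesian component so that they apply to tensor fields such as $J^{-1}F$ (this is just a restatement of the tools used, not a check of the result): the pull-back of a volume integral to the current configuration, and the Reynolds transport theorem for a generic field $\psi(x,t)$,

$$\int_{V} \bar{\psi}\, dV = \int_{v(t)} J^{-1} \psi\, dv, \qquad \frac{d}{dt}\int_{v(t)} \psi\, dv = \int_{v(t)} \left[ \frac{\partial \psi}{\partial t} + \nabla\cdot(\psi \otimes v) \right] dv.$$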