Can Mathematica solve matrix-based parametric (convex or semidefinite) constrained optimization problems?

I have gone through Mathematica’s documentation and guides on ConvexOptimization, ParametricConvexOptimization and SemidefiniteOptimization. I am also running the latest version of Mathematica.

The kind of matrix-based, parametric, constrained optimization problems I want to solve is this:

\begin{equation}
\begin{aligned}
\min_{X_j,\, L_{jk}} \quad & \operatorname{Tr}(A L) \\
\text{such that} \quad & X_j \text{ and } L_{jk} \text{ are } 4\times4 \text{ Hermitian matrices}, \\
& \operatorname{Tr}(G_k X_j) = \delta_{jk}, \\
& L := \begin{bmatrix} L_{11} & L_{12} & L_{13} \\ L_{12} & L_{22} & L_{23} \\ L_{13} & L_{23} & L_{33} \end{bmatrix} \succeq \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} \begin{bmatrix} X_1 & X_2 & X_3 \end{bmatrix},
\end{aligned}
\end{equation}
where the variables to be optimized over are the $X_j$ and $L_{jk}$ ($j$ and $k$ run from 1 to 3), which are themselves matrices! The matrices $G_k$ and $A$ depend on some parameter $\alpha$ (and satisfy additional properties).

I have been able to run this kind of optimization in MATLAB, and also a much simpler version of it in Mathematica (with $j, k = 1$ and the parameter value fixed), using:

ConvexOptimization[
 Tr[\[Rho]0 . v11],
 {VectorGreaterEqual[{v11, x1}, "SemidefiniteCone"] &&
   Tr[\[Rho]0 . x1] == 0 && Tr[\[Rho]1 . x1] == 1 &&
   Element[x1, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[v11, Matrices[{4, 4}, Complexes, Hermitian]]},
 {x1, v11}]

However, I simply cannot get the full problem to run in Mathematica, using ConvexOptimization[ ] (at fixed parameter values), ParametricConvexOptimization[ ], SemidefiniteOptimization[ ], or Minimize[ ].

ConvexOptimization[ ] at fixed parameter values for $ j, k = 1, 2$ shows the warning ConvexOptimization::ctuc: The curvature (convexity or concavity) of the term X1.X2 in the constraint {{L11,L12},{L12,L22}}Underscript[\[VectorGreaterEqual], Subsuperscript[\[ScriptCapitalS], +, \[FilledSquare]]]{{X1.X1,X1.X2},{X1.X2,X2.X2}} could not be determined.

Minimize[ ] shows the error Minimize::vecin: Unable to resolve vector inequalities ...

And ParametricConvexOptimization[ ] and SemidefiniteOptimization[ ] simply return the input as output.
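For what it's worth, the quadratic constraint should be convex on paper: writing $M := \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix}$ (stacked, so $M M^\dagger$ has blocks $X_j X_k^\dagger$), a Schur complement with the positive definite identity block gives

$$ L \succeq M M^{\dagger} \quad \Longleftrightarrow \quad \begin{bmatrix} I_4 & M^{\dagger} \\ M & L \end{bmatrix} \succeq 0, $$

so in principle the whole problem is a semidefinite program with only linear matrix inequalities. I mention this in case the curvature warning just means Mathematica does not detect this reformulation automatically.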

Does anyone have experience with running such matrix-based optimizations in Mathematica? Thanks for your help.

EDIT 1: For the two-dimensional case ($ j, k=1, 2$ ) I tried (with $ A$ the identity matrix, and at fixed parameter value):

ConvexOptimization[
 Tr[Tr[ArrayFlatten[{{L11, L12}, {L12, L22}}]]],
 {VectorGreaterEqual[
    {ArrayFlatten[{{L11, L12}, {L12, L22}}],
     ArrayFlatten[{{X1 . X1, X1 . X2}, {X1 . X2, X2 . X2}}]},
    "SemidefiniteCone"] &&
   Tr[\[Rho]0 . X1] == 0 && Tr[\[Rho]0 . X2] == 0 &&
   Tr[\[Rho]1 . X1] == 1 && Tr[\[Rho]1 . X2] == 0 &&
   Tr[\[Rho]2 . X1] == 0 && Tr[\[Rho]2 . X2] == 1 &&
   Element[X1, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[X2, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[L11, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[L12, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[L22, Matrices[{4, 4}, Complexes, Hermitian]]},
 {X1, X2, L11, L12, L22}]

and for the three-dimensional case ($ j, k = 1, 2, 3$ ) with variable parameter value and $ A$ the identity matrix, I tried

ParametricConvexOptimization[
 Tr[Tr[ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}]]],
 {VectorGreaterEqual[
    {ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}],
     ArrayFlatten[{{X1}, {X2}, {X3}}] .
      Transpose[ArrayFlatten[{{X1}, {X2}, {X3}}]]},
    "SemidefiniteCone"],
  Tr[\[Rho]0 . X1] == 0, Tr[\[Rho]0 . X2] == 0, Tr[\[Rho]0 . X3] == 0,
  Tr[\[Rho]1 . X1] == 1, Tr[\[Rho]1 . X2] == 0, Tr[\[Rho]1 . X3] == 0,
  Tr[\[Rho]2 . X1] == 0, Tr[\[Rho]2 . X2] == 1, Tr[\[Rho]2 . X3] == 0,
  Tr[\[Rho]3 . X1] == 0, Tr[\[Rho]3 . X2] == 0, Tr[\[Rho]3 . X3] == 1},
 {Element[X1, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[X2, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[X3, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L11, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L12, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L13, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L22, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L23, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L33, Matrices[{4, 4}, Complexes, Hermitian]]},
 {\[Alpha]}]

Here, the $ \rho_{k}$ matrices are the $ G_k$ matrices.

How does heavily constrained Delaunay triangulation work?

In short: I’m trying to find an algorithm for performing a Delaunay triangulation of a heavily constrained polygon, with the understanding that most of the resulting triangles will be illegal (non-Delaunay) due to the constraints. Making the effort should still help reduce generation of long/thin triangles where possible.

I’m trying to understand how constrained Delaunay triangulation works, within the context of game maps where the available polygon to be triangulated is concave and extremely constrained.

I first looked into regular Delaunay triangulation, and there are multiple methods available, but they all assume a point cloud where the goal is to create ideal triangles without constraints.

Then I looked into constrained Delaunay triangulation, but beyond academic papers there is very little available that I could find, and many of those papers simply perform a regular Delaunay triangulation, then cut the triangles that intersect constraints and re-triangulate around them. This seems backwards and inefficient to me when literally ALL of the triangles I want to end up with will be constrained.

I saw a GDC video where the StarCraft 2 devs talked about using constrained Delaunay triangulation, and you can kind of see it when you look at the pathing mesh in the StarCraft 2 editor. However, very few of the generated triangles are legal, due to the number of constraints in the map layout. Still, I am assuming the Delaunay algorithm influenced the selection of points to triangulate, in order to get the best triangle coverage.

However, I can’t find any further info on how this might work. How does one start with constraints, points connected by fixed lines, and then "fill in" the remaining lines to generate triangles that are a best-effort attempt to not be overly stretched or narrow?
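To make the question concrete, here is my mental model of the core step as a Python sketch (my own illustration, not taken from the GDC talk): the usual Lawson flip test decides whether to flip the shared edge of two triangles using the empty-circumcircle predicate, and the constrained variant simply refuses to flip edges that are constraints.

```python
def incircle(a, b, c, d):
    """> 0 iff point d lies strictly inside the circumcircle of triangle
    (a, b, c), with (a, b, c) in counter-clockwise order.  This is the
    standard 3x3 determinant predicate used by Delaunay algorithms."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return (
        (ax * ax + ay * ay) * (bx * cy - cx * by)
        - (bx * bx + by * by) * (ax * cy - cx * ay)
        + (cx * cx + cy * cy) * (ax * by - bx * ay)
    )

def should_flip(p, q, r, s, constrained_edges):
    """Lawson-style flip test for the shared edge (p, q) of triangles
    (p, q, r) and (q, p, s): flip to edge (r, s) when s lies inside the
    circumcircle of (p, q, r) -- unless (p, q) is a constraint, which a
    constrained Delaunay triangulation must never remove."""
    if (p, q) in constrained_edges or (q, p) in constrained_edges:
        return False
    return incircle(p, q, r, s) > 0
```

Is the real algorithm essentially this, plus a smarter way of choosing the points and initial edges so that the unavoidable illegal triangles end up as well-shaped as possible?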

Setting up constrained maximization

I am trying to enter the maximization problem with constraints below:

$$
\begin{aligned}
& \underset{x_0,\, x_1}{\text{max}} && \Big[a(C-x) + (1-a)(z-y)\Big] \\
& \text{s.t.} && a\, u(x_1) + (1-a)\, u(y) - h \geq 0 \\
&&& (a-b)\bigl(u(x) - u(y)\bigr) \geq 0
\end{aligned}
$$


Maximize[{a(C-x) + (1-a)(z-y),
  au(x) + (1-a)u(y) - h >= 0,
  (a-b)(u(x) - u(z)) >= 0}, {x, y}]

Is this the correct way to enter the problem, especially the u(x) and u(y) terms?

What happens if you cast Animal Growth in constrained environments?

Let's say a wizard casts Animal Growth (PHB, p. 198) on a lion, changing the lion's size from Large to Huge. What happens if there isn't enough space for the lion to grow into?

In the spell description it says:

If insufficient room is available for the desired growth, the creature attains the maximum possible size and may make a Strength check (using its increased Strength) to burst any enclosures in the process. If it fails, it is constrained without harm by the materials enclosing it—the spell cannot be used to crush a creature by increasing its size.

Now I have a couple of questions:

  1. Do I understand correctly that the spell takes effect whether or not there is enough space for the enlarged lion to fit?
  2. If there is some enclosure (e.g. a wall) which the lion fails to break, and it is then constrained without harm, what exactly does this mean as a consequence? Is the lion squeezed? (Follow-up question: if it is squeezed, will it have to move to a legal space immediately when it is its turn to act? And what if there is no such space for a Huge lion anywhere on a packed battlefield?)
  3. What if the enclosure isn't an object but a creature? Maybe there is an orc standing in the corner of the 15-foot square the Huge lion is supposed to fill up. Do the lion and the orc contest the space? Do they make Strength checks?

(The same wording appears in the descriptions of the spells Enlarge Person and Righteous Might (PHB pp. 226/273), so my question also applies to those spells.)

Find vertices of a convex polytope, constrained by hyperplanes

I am looking for an algorithm that returns the vertices of a polytope when provided with the set of hyperplanes that confine it.

In my special case the polytope is constructed by the following constraints on $\mathbb{R}^d$:

  • $\left\| x \right\| = 1$, $x \in \mathbb{R}^d$
  • $0 < a_i \leq x_i \leq b_i \leq 1$, where $x_i$ represents the $i$-th component of $x$

This always generates a convex polytope confined by hyperplanes, if I am not mistaken. I would like an algorithm that returns the set of points $P$ that are the vertices of the polytope, given the $a_i$'s and $b_i$'s.

There is a special case where no $x$ satisfies these conditions. Ideally this would be picked up on.
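To make the setup concrete: assuming the norm above is the 1-norm (so that, with $x_i > 0$, the first constraint reads $\sum_i x_i = 1$), the brute-force enumeration below is the kind of thing I could write myself, but it is exponential in $d$, so I am hoping for something better. (Python sketch for illustration only; the 1-norm reading is my assumption.)

```python
from itertools import product

def simplex_box_vertices(a, b, tol=1e-9):
    """Enumerate vertices of {x : sum(x) = 1, a[i] <= x[i] <= b[i]}.

    A vertex of this (d-1)-dimensional polytope has d-1 box inequalities
    tight, so fix every coordinate except one at a lower or upper bound,
    solve the equality constraint for the free coordinate, and keep the
    point if it is feasible.  Exponential in d -- small dimensions only.
    """
    d = len(a)
    verts = set()
    for free in range(d):                       # index of the non-fixed coordinate
        others = [i for i in range(d) if i != free]
        for choice in product((0, 1), repeat=d - 1):
            x = [0.0] * d
            for i, c in zip(others, choice):
                x[i] = a[i] if c == 0 else b[i]
            x[free] = 1.0 - sum(x[i] for i in others)
            if a[free] - tol <= x[free] <= b[free] + tol:
                verts.add(tuple(round(v, 9) for v in x))
    return sorted(verts)
```

This also returns an empty list in the infeasible case (e.g. when $\sum_i a_i > 1$), which is the detection I mentioned above.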

Thank you!

Algorithm for grouping elements (defined by time range, weight, location) based on time range overlap (groups are constrained by params)?

Let each element be an individual, where each individual is defined by a time range, a weight, and a location.

The goal is to group together individuals whose time ranges overlap while ensuring that, within each group, the sum of the weights of the individuals does not exceed a certain threshold. At the same time, it is desirable to minimize the total distance between the individuals in a group. As many individuals as necessary can be placed into a group as long as the weight constraint is met.

The goal is to have as many individuals grouped (that is, at minimum paired) as possible while minimizing the total distance between individuals within groups.

For example, consider the discrete-time case with ten time intervals [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. The weight threshold is 4 and the locations of the individuals are points on the 1-D line of integers. Say we have the following individuals:

A: time range: [1, 2, 3] | weight: 1 | location: 1
B: time range: [2, 3, 4] | weight: 2 | location: 2
C: time range: [4, 5, 6] | weight: 2 | location: -3
D: time range: [4, 5, 6] | weight: 3 | location: -3


  • A and C cannot be grouped because they do not have overlapping time ranges.
  • Grouping A and B is preferable to grouping B and C because A and B are closer together.
  • C and D cannot be grouped because the sum of their weights exceeds 4.

Does anyone have a recommended algorithm for solving a problem like this?

I’ve looked at the examples in Grouping Genetic Algorithms: Advances and Applications by Michael Mutingi and Charles Mbohwa (Studies in Computational Intelligence 666, Springer International Publishing, 2017). However, none of the grouping algorithms seem very fitting.
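For reference, the simplest baseline I can think of is a greedy pairing heuristic like the Python sketch below (my own illustration, restricted to pairs; the names are made up). It reproduces the behaviour in my example, but I suspect a proper grouping or matching formulation would do better, which is why I am asking.

```python
from dataclasses import dataclass

@dataclass
class Individual:
    name: str
    times: frozenset      # set of discrete time slots the individual covers
    weight: int
    location: int

def greedy_pairs(people, max_weight):
    """Greedy heuristic: repeatedly pair the two ungrouped individuals with
    overlapping time ranges and feasible combined weight whose distance is
    smallest.  Not optimal -- just a baseline."""
    ungrouped = set(range(len(people)))
    groups = []
    while True:
        best = None
        for i in ungrouped:
            for j in ungrouped:
                if j <= i:
                    continue
                a, b = people[i], people[j]
                if not (a.times & b.times):          # time ranges must overlap
                    continue
                if a.weight + b.weight > max_weight:  # weight threshold
                    continue
                d = abs(a.location - b.location)
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best is None:
            break
        _, i, j = best
        groups.append({people[i].name, people[j].name})
        ungrouped -= {i, j}
    return groups

people = [Individual("A", frozenset({1, 2, 3}), 1, 1),
          Individual("B", frozenset({2, 3, 4}), 2, 2),
          Individual("C", frozenset({4, 5, 6}), 2, -3),
          Individual("D", frozenset({4, 5, 6}), 3, -3)]
groups = greedy_pairs(people, 4)   # pairs A with B; C+D exceeds the threshold
```
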

Showing the Lagrange multiplier gives the unique solution of a constrained, convex, quadratic loss function


I would like to specify the conditions under which the Lagrange multiplier method uniquely identifies the global minimum. (The main motivation is the derivation of the Ridge estimator in statistics.)

It may be a very elementary problem in convex optimization, but I feel I have not yet reached a sufficient understanding.


Say we would like to find the parameter $\beta^\dagger \in \mathbb{R}^p$ that globally minimizes the constrained loss function $L_1(\beta)$ and the unconstrained loss function $L_2(\beta)$.

Show that the following two have the same, unique solution.

  1. $\mathrm{argmin}_\beta L_1 (\beta) = \mathrm{argmin}_\beta \left( \beta^T A \beta - 2b^T \beta \right)$ subject to $\beta^T \beta \leq \lambda$.

  2. $\mathrm{argmin}_\beta L_2 (\beta) = \mathrm{argmin}_\beta \left( \beta^T A \beta - 2 b^T \beta + \lambda \beta^T \beta \right)$

where $A_{p \times p}$ is symmetric positive definite, $b \in \mathbb{R}^p$, and $\lambda > 0$.


Step 1. The uniqueness of $\mathrm{argmin}_\beta L_2 (\beta)$.

Write
$$L_2(\beta) = \beta^T (A + \lambda I) \beta - 2 b^T \beta.$$
Here $A + \lambda I$ is symmetric positive definite, since $\lambda I$ is symmetric positive definite. Thus $\beta^T (A + \lambda I) \beta$ is strictly convex, and hence $L_2(\beta)$ is a strictly convex function of $\beta$.

By strict convexity, and since $\frac{\partial L_2}{\partial \beta} = 2(A + \lambda I)\beta - 2b = 0$ has the unique solution $\beta = (A + \lambda I)^{-1} b$, we conclude that $\mathrm{argmin}_\beta L_2 (\beta) = (A + \lambda I)^{-1} b$ is the global minimizer of $L_2$.
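As a sanity check of Step 1, I also verified it numerically on a small made-up example (Python, plain floats, $2 \times 2$ only):

```python
import random

# Hypothetical 2x2 example: A symmetric positive definite, lambda > 0.
A = [[2.0, 0.5], [0.5, 1.0]]
b = [1.0, -1.0]
lam = 0.3

# M = A + lambda*I, and beta_star = M^{-1} b via the 2x2 inverse formula.
M = [[A[0][0] + lam, A[0][1]], [A[1][0], A[1][1] + lam]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
beta_star = [(M[1][1] * b[0] - M[0][1] * b[1]) / det,
             (-M[1][0] * b[0] + M[0][0] * b[1]) / det]

def L2(x):
    """L2(beta) = beta^T (A + lam I) beta - 2 b^T beta."""
    quad = sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))
    return quad - 2 * (b[0] * x[0] + b[1] * x[1])

# beta_star should beat random perturbations, as strict convexity predicts.
random.seed(0)
assert all(L2([beta_star[0] + random.uniform(-1, 1),
               beta_star[1] + random.uniform(-1, 1)]) >= L2(beta_star)
           for _ in range(1000))
```
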

Step 2. The equivalence of 1 and 2.

As far as I know, the Lagrange multiplier method is a strategy for finding the local maxima and minima of a function subject to equality constraints.


But I have the following two problems:

  • The problem has inequality constraints, so I'm not sure the method is applicable.
  • I have noticed that the quadratic function $L_1(\beta) = \beta^T A \beta - 2b^T \beta$ also has a global minimum, since $A$ is symmetric positive definite. But I'm not sure how to use this to show that $\mathrm{argmin}_\beta \left( \beta^T A \beta - 2b^T \beta \right)$ subject to $\beta^T \beta \leq \lambda$ has a unique solution, and that it coincides with the solution of 2.
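My current understanding of how the inequality case should go, via the KKT conditions (writing the constraint radius as $t$ to keep it separate from the penalty weight $\lambda$ of problem 2, since I suspect they need not be the same number):

$$
\begin{aligned}
&\mathcal{L}(\beta, \mu) = \beta^T A \beta - 2 b^T \beta + \mu (\beta^T \beta - t), \qquad \mu \geq 0, \\
&\text{stationarity: } 2(A + \mu I)\beta - 2b = 0 \;\Rightarrow\; \beta = (A + \mu I)^{-1} b, \\
&\text{complementary slackness: } \mu (\beta^T \beta - t) = 0,
\end{aligned}
$$

so either the unconstrained minimizer $A^{-1} b$ already satisfies $\beta^T \beta \leq t$ (and then $\mu = 0$), or the constraint is active and $\mu > 0$ is determined by $\beta^T \beta = t$; matching $\mu$ with the $\lambda$ of problem 2 would then give the correspondence. Is this the right way to argue it?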

Any help will be appreciated.