## Linear order minimizing weighted distance from special element

Let’s say I have a set of beads, $$b_0,\dots,b_n$$, with $$b_0$$ the ‘special bead’. I want to lay the beads out on a string to minimize the total cost, defined as $$\sum_{i=1}^n w_i \cdot d(b_0, b_i)$$, where the weights $$w_i$$ are given and the distance $$d(b_0, b_i)$$ is the number of beads strung between $$b_0$$ and $$b_i$$.

The minimum cost for every position of $$b_0$$ can be calculated in linear total time (plus $$n \log n$$ pre-processing time to sort the weights), by noting that the optimal solution pairs up the positions at equal distance on either side of $$b_0$$, assigning heavier beads to smaller distances, with a residual ‘tail’ on the longer side, and by working out how the cost changes as you shift $$b_0$$ along.

What if some of $$b_1,\dots,b_n$$ are fixed in given positions?

Obviously quadratic time is possible by solving each sub-problem independently, but is there an incremental algorithm akin to the case without any fixed beads?
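The quadratic baseline can be made concrete as a per-position cost function: sort the weights once, then greedily assign the heaviest beads to the smallest available distances. This is a sketch with my own naming (`minCost`), not code from any source:

```javascript
// Quadratic baseline for the unconstrained problem. `left` is the number of
// beads placed to the left of b_0. The available distances are 0,0,1,1,...
// while both sides have room, then an increasing tail on the longer side;
// heavier beads go to smaller distances.
function minCost(weights, left) {
  const right = weights.length - left;
  const w = [...weights].sort((a, b) => b - a); // heaviest first
  const dists = [];
  const near = Math.min(left, right);
  const far = Math.max(left, right);
  for (let d = 0; d < near; d++) dists.push(d, d); // one slot on each side
  for (let d = near; d < far; d++) dists.push(d);  // tail on the longer side
  return w.reduce((sum, wi, i) => sum + wi * dists[i], 0);
}
```

Calling `minCost` for every `left` in `0..n` reproduces the quadratic algorithm; the incremental question is whether the change between consecutive positions can be maintained cheaply once some beads are pinned.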

## Do the minimum spanning trees of a weighted graph have the same number of edges with a given weight?

If a weighted graph $G$ has two different minimum spanning trees $T_1 = (V_1, E_1)$ and $T_2 = (V_2, E_2)$, then is it true that for any edge $e$ in $E_1$, the number of edges in $E_1$ with the same weight as $e$ (including $e$ itself) equals the number of edges in $E_2$ with the same weight as $e$? If the statement is true, how can we prove it?

## Ratio of exponentially weighted Selberg integral

I’m interested in bounding the following ratio of integrals: $$\frac{\int_{0 where $$\frac{k+1}{2} and $$w_i>0,\ i=1,\dots,k$$ are any positive numbers. This is related to the Selberg integral. Does there exist an upper bound on the above ratio depending only on $$m,n,k$$ and independent of the $$w_i$$?

## Edge-Weight Conditions for “Statistically Self-Similar” Complete Weighted Graphs

A complete symmetric weighted graph with $$n$$ vertices always has a minimum spanning tree (MST) and, under the assumption that this tree is unique, its vertex degrees will have a specific (discrete) distribution.

Question: for which statistical properties of non-trivial distributions of edge lengths, per vertex and per the entire graph, do the MSTs of random complete symmetric graphs with those edge lengths have the following property:

the distribution of vertex degrees in “secondary” MSTs induced by the vertices of degree $$k$$ in the “primary” MST is the same as in the primary MST?
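As a hypothetical starting point for experiments, one can estimate the primary degree distribution by simulation. The sketch below (my own naming, not from any library) builds a complete graph with i.i.d. uniform edge lengths and computes the MST vertex degrees with Prim's algorithm; swapping in other edge-length distributions via the `rand` parameter lets you compare primary and secondary degree statistics:

```javascript
// Build a complete graph on n vertices with random edge lengths, compute its
// MST with Prim's algorithm, and return the array of vertex degrees.
function mstDegrees(n, rand = Math.random) {
  const w = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let i = 0; i < n; i++)
    for (let j = i + 1; j < n; j++)
      w[i][j] = w[j][i] = rand(); // symmetric random edge length

  const inTree = new Set([0]);
  const deg = new Array(n).fill(0);
  while (inTree.size < n) {
    // cheapest edge leaving the current tree
    let best = null;
    for (const u of inTree)
      for (let v = 0; v < n; v++)
        if (!inTree.has(v) && (best === null || w[u][v] < w[best[0]][best[1]]))
          best = [u, v];
    inTree.add(best[1]);
    deg[best[0]]++;
    deg[best[1]]++;
  }
  return deg;
}
```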

## Embedding of weighted Sobolev spaces with exponential weights

In the book by Bensoussan and Lions, weighted spaces with exponentially decaying weights are introduced to study elliptic equations with bounded coefficients on the whole space $$\mathbb{R}^d$$. They mention that there are classical regularity results based on these spaces.

For example, for each $$p\in [1,\infty)$$, the weighted space $$L^p_\mu(\mathbb{R}^d)$$ is defined as the set of Lebesgue measurable functions $$f$$ such that $$f\omega_\mu\in L^p(\mathbb{R}^d)$$, i.e., $$\|f\|^p_{L^p_\mu}=\int_{\mathbb{R}^d}|f|^p\omega^p_\mu(x)\,dx< \infty,$$ where $$\omega_\mu(x)=\exp(-\mu\sqrt{1+|x|^2})$$ for some $$\mu>0$$. The weighted Sobolev space $$W^{1,p}_\mu(\mathbb{R}^d)$$ is defined as the space of functions $$u$$ such that $$u\omega_\mu\in L^p(\mathbb{R}^d)$$ and $$\partial_{x_i} u\,\omega_\mu\in L^p(\mathbb{R}^d)$$ for all $$i$$, where $$\partial_{x_i}$$ denotes the weak derivative in the distribution sense. Similarly, the higher-order Sobolev space $$W^{2,p}_\mu(\mathbb{R}^d)$$ requires in addition that $$\partial_{x_ix_j}u\,\omega_\mu\in L^p(\mathbb{R}^d)$$ for all $$i,j$$.

I am interested in a reference on the embedding properties between spaces of different orders. For example, it is pointed out in the book that the injection $$W^{2,p}_\mu\hookrightarrow W^{1,p}_\nu \tag{1}$$ with $$\nu>\mu$$ is compact. Does it follow from the results for the classical Sobolev spaces? Moreover, does the Gagliardo–Nirenberg–Sobolev inequality for the classical Sobolev space still hold? In particular, do we have, for $$1\le p<d$$, $$\|u\|_{L^{p^*}_\mu(\mathbb{R}^d)}\leq C \|Du\|_{L^{p}_\mu(\mathbb{R}^d)}$$ for some $$p^*>p$$? What if we allow $$\nu$$ to be larger than $$\mu$$, as in (1)?
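A plausible route to the compactness of (1), sketched under the assumption that local Rellich–Kondrachov compactness applies, is via the decay of the weight ratio:

$$\frac{\omega_\nu(x)}{\omega_\mu(x)}=\exp\!\big(-(\nu-\mu)\sqrt{1+|x|^2}\big)\longrightarrow 0 \quad\text{as } |x|\to\infty,$$

so a sequence bounded in $$W^{2,p}_\mu$$ has uniformly small $$W^{1,p}_\nu$$-mass outside a large ball; combining this tail control with the compactness of $$W^{2,p}\hookrightarrow W^{1,p}$$ on bounded domains and a diagonal argument should give compactness of (1). A reference that carries this out carefully would still be welcome.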

## Weighted random selection with removal

Imagine that I have a set of planets, and these planets have a random number of random resources. Some resources are more likely to belong to a planet than others (i.e., there are different levels of rarity).

Example of resources:

• Normal rock – 40% probability (0.4)
• Coal – 30% probability (0.3)
• Iron – 20% probability (0.2)
• Gold – 9% probability (0.09)
• Diamond – 1% probability (0.01)

I have already implemented the weighted random selection (using Node.js):

```javascript
var rand = function (min, max) {
  return Math.floor(Math.random() * (max - min)) + min;
};

var generateWeightedList = function (list, weight) {
  var weighted_list = [];

  // Loop over weights
  for (var i = 0; i < weight.length; i++) {
    // round to avoid floating-point surprises like 0.07 * 100 = 7.000000000000001
    var multiples = Math.round(weight[i] * 100);

    // Loop over the list of items
    for (var j = 0; j < multiples; j++) {
      weighted_list.push(list[i]);
    }
  }

  return weighted_list;
};

const resource_generation_weighted = [0.4, 0.3, 0.2, 0.09, 0.01];
const resource_generation_weighted_list = generateWeightedList(
  Object.keys(Resources),
  resource_generation_weighted
);

let generatedResource =
  resource_generation_weighted_list[rand(0, resource_generation_weighted_list.length)];
```

The “Resources” can be the ones mentioned above.

Imagine that for one planet we have to randomly choose 2 of the resources mentioned. On a first iteration, the functions that I have work. After choosing the first resource randomly, I want to remove it from the possible choices so it can’t be chosen again. But now, on the second iteration, I have to recalculate the probabilities of the remaining resources. Something similar to what is explained in this case here.

Is there some library that can handle this? Or some pseudo-code? Or a good explanation of how this should be done? I tried to find something, but haven’t had luck yet.
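One way to get the recalculation for free is to sample by cumulative weight over the remaining pool and remove each pick: dividing by the current total weight renormalizes the survivors implicitly. A sketch (the function name is my own, not a library API):

```javascript
// Draw `count` distinct items, weighted, without replacement. Each draw
// walks the remaining pool by cumulative weight, then removes the pick,
// so the remaining probabilities are renormalized automatically.
function weightedSampleWithoutReplacement(items, weights, count) {
  const pool = items.map((item, i) => ({ item, weight: weights[i] }));
  const picks = [];
  for (let k = 0; k < count; k++) {
    const total = pool.reduce((s, e) => s + e.weight, 0);
    let r = Math.random() * total;
    let idx = pool.findIndex(e => (r -= e.weight) <= 0);
    if (idx === -1) idx = pool.length - 1; // guard against float rounding
    picks.push(pool[idx].item);
    pool.splice(idx, 1); // removed: cannot be chosen again
  }
  return picks;
}
```

For the example above: `weightedSampleWithoutReplacement(['rock', 'coal', 'iron', 'gold', 'diamond'], [0.4, 0.3, 0.2, 0.09, 0.01], 2)`. If rock is drawn first, the second draw sees coal with probability 0.3/0.6, iron with 0.2/0.6, and so on.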

## Weighted Graph problem: edge deletion to form a tree and working with constraints

Say we have a connected graph with two individuals: a fast robber and a slow police officer. The robber is trying to avoid the officer, and since the graph has cycles, the fast robber can evade the officer perpetually. We can add roadblocks to restrict the robber's movement; installing a roadblock on an edge costs that edge's weight. What algorithm can I implement to guarantee the officer can eventually catch the robber?

The first thing that comes to mind is to delete edges so that our graph $$G$$ becomes a tree: compute a maximum spanning tree $$T$$ (like a minimum spanning tree, but with the largest cumulative weight), and then delete the edges in $$G\backslash T$$, which minimizes the total cost of the roadblocks.

My issue with this is that we might not need to delete all those edges. Say we have two graphs $$G_1$$ and $$G_2$$, joined by a single edge $$(n_1, n_2)$$ between a node $$n_1$$ of $$G_1$$ and a node $$n_2$$ of $$G_2$$. If the officer is on $$(n_1, n_2)$$ and the robber is in $$G_1$$, we simply wait for the officer to get past $$(n_1, n_2)$$, block that edge, and find a maximum spanning tree of $$G_1$$, never bothering with $$G_2$$. I am not sure how to formalize such reasoning.
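The maximum-spanning-tree step can be sketched with Kruskal's algorithm: sort edges by descending cost and block exactly the edges that would close a cycle. This assumes the goal is to minimize total roadblock cost; function names are my own:

```javascript
// Return the edges NOT in a maximum spanning tree, i.e. the cheapest set of
// roadblocks that leaves a tree. edges: array of [cost, u, v] on n vertices.
function edgesToBlock(n, edges) {
  const parent = Array.from({ length: n }, (_, i) => i);
  // union-find with path compression
  const find = x => (parent[x] === x ? x : (parent[x] = find(parent[x])));
  const block = [];
  // Kruskal on descending cost: keep expensive edges, block cycle-closers
  for (const [cost, u, v] of [...edges].sort((a, b) => b[0] - a[0])) {
    const ru = find(u), rv = find(v);
    if (ru !== rv) parent[ru] = rv;     // edge joins two components: keep it
    else block.push([cost, u, v]);       // edge would close a cycle: block it
  }
  return block;
}
```

On a triangle with edge costs 3, 2, 1, this blocks only the cost-1 edge, which is indeed the cheapest way to break the single cycle.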

Thank you for any hints and insights.

## Proof that multiple minimum spanning trees (MSTs) for a given edge-weighted graph have the same edge-weight multisets

How do we prove this?

I thought we could use the safe-edge property: if all edge weights were distinct, the safe-edge property would force choosing the unique minimum-weight edge at each step, but when several edges share the minimum weight, there is more than one valid choice, which is how multiple minimum spanning trees can arise.

Is there any better way to prove or is this good enough?
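Short of a proof, a brute-force check is useful for building confidence: enumerate all spanning trees of a small graph and confirm that every minimum-weight tree has the same sorted weight list. A sketch (bitmask enumeration, my own naming, so small graphs only):

```javascript
// Return the set of distinct sorted weight lists among all minimum spanning
// trees. edges: array of [weight, u, v] on n vertices. The claim predicts
// the returned set always has exactly one element.
function mstWeightMultisets(n, edges) {
  const multisets = new Set();
  let best = Infinity;
  const m = edges.length;
  for (let mask = 0; mask < 1 << m; mask++) {
    const chosen = edges.filter((_, i) => mask & (1 << i));
    if (chosen.length !== n - 1) continue; // a spanning tree has n-1 edges
    // acyclicity check via union-find (n-1 acyclic edges => spanning tree)
    const parent = Array.from({ length: n }, (_, i) => i);
    const find = x => (parent[x] === x ? x : (parent[x] = find(parent[x])));
    let acyclic = true;
    for (const [, u, v] of chosen) {
      const ru = find(u), rv = find(v);
      if (ru === rv) { acyclic = false; break; }
      parent[ru] = rv;
    }
    if (!acyclic) continue;
    const total = chosen.reduce((s, [w]) => s + w, 0);
    const key = chosen.map(([w]) => w).sort((a, b) => a - b).join(',');
    if (total < best) { best = total; multisets.clear(); }
    if (total === best) multisets.add(key);
  }
  return multisets;
}
```

On a 4-cycle with weights 1, 2, 1, 2 there are two distinct MSTs, and both have the weight multiset {1, 1, 2}.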

## Maximizing the weighted average value of a set of items by doubling the weight of a subset of items

Given a set of $n$ items, each represented by $t_i=(w_i,v_i)$ for $1 \le i \le n$, the weighted average value of those items is defined as: $$\frac{\sum_{i=1}^{n}v_iw_i}{\sum_{i=1}^nw_i}$$

The goal is to find a subset of items to double their weight, such that the weighted average value is maximized.

For example, suppose you have the following set of items, where $t_i = (w_i,v_i)$ :

$$t_1 = (12, 1100000)\\ t_2 = (12, 1000000)\\ t_3 = (12, 850000)\\ t_4 = (10, 800000)\\ t_5 = (8, 1200000)$$

The weighted average value is 981,481. The best solution is to double the weight of items 1 and 5, which leads to a new weighted average of 1,024,324.

I am trying to come up with an algorithm to find the best subset of items to double, and so far I’ve tried brute force: for each item, you can choose either to double it or to leave it as it is, giving $2^n$ possibilities to explore, i.e., exponential complexity. I’ve also tried a greedy algorithm that picks the items with the highest value-to-weight ratio and determines the best number of items to pick, but this solution is not always optimal.

I am wondering, what is the most efficient algorithm to find the best subset of items to double their weight?
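For reference, the brute force described above, made concrete (names are my own; exponential, so only viable for small $n$):

```javascript
// Try all 2^n subsets; bit i of `mask` set means item i's weight is doubled.
// items: array of [weight, value]. Returns the best average and subset.
function bestDoubling(items) {
  const n = items.length;
  let best = { avg: -Infinity, subset: [] };
  for (let mask = 0; mask < 1 << n; mask++) {
    let num = 0, den = 0;
    for (let i = 0; i < n; i++) {
      const w = (mask & (1 << i)) ? 2 * items[i][0] : items[i][0];
      num += w * items[i][1];
      den += w;
    }
    if (num / den > best.avg) {
      best = { avg: num / den,
               subset: [...Array(n).keys()].filter(i => mask & (1 << i)) };
    }
  }
  return best;
}
```

On the example above it recovers the stated answer: doubling items 1 and 5 (indices 0 and 4) for an average of about 1,024,324.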

## Perfect matching in complete, weighted graph

I’m trying to find pairs in a complete, weighted graph, similar to the one below (weights not shown).

For each possible pair there is a weight, and I would like to find a set of pairs covering all vertices (a perfect matching) that maximizes the total weight of the chosen pairs.

Many of the algorithms for finding maximum matchings are only concerned with bipartite graphs (e.g. the Hungarian algorithm). In this case the graph is not bipartite. The only algorithm I found that seems to fit the problem here is Edmonds’ blossom algorithm and its revised Blossom V variant.

As far as I understand, the complexity of blossom is polynomial, $$O(n^3)$$, so complexity increases quickly with bigger graphs. I currently don’t expect more than 100 vertices, which should be manageable, unless the $$n$$ in $$n^3$$ refers to the number of edges in the graph. In that case I might need to find a way to make the graph less dense.

My questions are:

1. Is Blossom/Blossom V the only algorithm that should be considered for this problem or did I miss any other contenders while digging into this?
2. Do you know if in these kinds of algorithms $$n$$ usually refers to the number of vertices or edges?
3. Since even with blossom this might be computationally intensive, I was wondering if someone can think of some smart steps to simplify the problem, maybe turning it into another problem that is easier to solve.
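Regarding question 3: whatever faster approach you end up with, a brute-force matcher is handy for validating it on small instances. A sketch (my own naming; factorial time, and it assumes an even number of vertices):

```javascript
// Brute-force maximum-weight perfect matching: fix the first vertex, try
// every partner for it, and recurse on the remaining vertices.
// `weight(u, v)` is a user-supplied function returning the edge weight.
function maxWeightPerfectMatching(vertices, weight) {
  if (vertices.length === 0) return { total: 0, pairs: [] };
  const [u, ...rest] = vertices;
  let best = { total: -Infinity, pairs: [] };
  for (let i = 0; i < rest.length; i++) {
    const v = rest[i];
    const remaining = rest.filter((_, j) => j !== i);
    const sub = maxWeightPerfectMatching(remaining, weight);
    const total = sub.total + weight(u, v);
    if (total > best.total) best = { total, pairs: [[u, v], ...sub.pairs] };
  }
  return best;
}
```

Fixing one vertex per level avoids counting the same matching twice, so the search explores $(n-1)(n-3)\cdots 1$ matchings rather than all orderings.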