What is an algorithm for minimizing the standard deviation of m sums formed from n summands? [with attempt]

I have m bins (sums) and n summands. Each summand goes into a bin, and all m bins must have a sum greater than zero at the end of the algorithm. To minimize the standard deviation of the bin sums, I have a greedy algorithm that appears to accomplish this. I am not sure of its name, but would like to know more about it.

It seems simple:

sort the summands from highest to lowest.

for each summand: find the first bin with the minimum sum and place the summand in that bin

I haven’t proved anything about it, but I’ve come up with a few test data sets and it appears to work.
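
For concreteness, here is a minimal Python sketch of that greedy; the function name and the min-heap bookkeeping are my own, and it assumes m <= n so that every bin receives at least one summand:

    import heapq

    def spread_into_bins(summands, m):
        """Greedy: sort descending, then always place the next summand
        into the bin whose sum is currently smallest."""
        heap = [(0, i) for i in range(m)]   # (bin sum, bin index)
        heapq.heapify(heap)
        bins = [[] for _ in range(m)]
        for s in sorted(summands, reverse=True):
            total, i = heapq.heappop(heap)  # bin with the minimum sum
            bins[i].append(s)
            heapq.heappush(heap, (total + s, i))
        return bins

Each placement costs O(log m), so the whole pass is O(n log n) for the sort plus O(n log m) for the placements. As a pointer for the naming question: this appears to be the same “sort descending, assign to the least-loaded bin” greedy known as the LPT (longest processing time) heuristic, studied for multiprocessor scheduling and multiway number partitioning.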

Minimizing the cost of a given sequence by partitioning

Given a sequence of positive integers of size N, divide it into at most K (K > N/C) disjoint contiguous parts in order to minimize the "cost" of the entire sequence.

Parts cannot overlap or interleave; for example, [1,2,3,4,5] can be divided into [1,2], [3,4] and [5], but not into [1,3] and [2,4,5].

The cost of a part is the number of repeated integers in it, counted with multiplicity (see the examples below). The cost of the entire sequence is the sum of the costs of all the parts, plus a fixed positive integer cost C times the number of parts.

How should I go about determining the position and number of partitions to minimize the total cost?

Some more examples:

Given the list [1,2,3,1]: without any cuts, its cost will be 2 + C, as 1 occurs two times and the original sequence is counted as one part.

[1,1,2,1,2]: without any cuts, its cost will be 5 + C, as 1 occurs three times and 2 occurs two times. If we divide it as [1,1,2], [1,2], then the cost becomes 2 + 2*C, where C is the cost per part.

I have actually solved the problem for the case of C = 1, but am having problems generalizing it to higher values of C.

For C = 1 it makes sense to cut the sequence, while traversing it from one end, as soon as a repetition occurs, since the cost of a single repetition is 2 whereas the cost of a cut is only 1.

Ideally I’m trying to solve it in O(n log n) time, or at worst a fast O(n^2).
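
For reference, here is a quadratic DP sketch in Python. The names are mine, and it deliberately ignores the upper bound K on the number of parts, on the assumption that the per-part charge C already discourages excessive cutting. dp[i] is the minimum cost of the first i elements; growing the candidate last part leftwards lets its repetition cost be updated incrementally:

    from collections import defaultdict

    def min_total_cost(seq, C):
        n = len(seq)
        dp = [0] + [float("inf")] * n
        for i in range(1, n + 1):
            count = defaultdict(int)
            rep = 0                    # repetition cost of the part seq[j:i]
            for j in range(i - 1, -1, -1):
                count[seq[j]] += 1
                if count[seq[j]] == 2:
                    rep += 2           # both occurrences now count as repeated
                elif count[seq[j]] > 2:
                    rep += 1
                dp[i] = min(dp[i], dp[j] + rep + C)
        return dp[n]

On [1,2,3,1] with C >= 2 this returns 2 + C (no cut), and on [1,1,2,1,2] with C = 1 it returns 3, via the cuts [1], [1,2], [1,2], which matches the cut-at-first-repetition strategy for C = 1. This is the “fast n^2” baseline; getting to O(n log n) would require speeding up the inner minimization.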

Minimizing the sum of differences in a pair

You have two arrays, a and b.

Both contain n elements, all positive and distinct.

You have to form n pairs by taking one number from each array, such that the sum of the absolute differences of the pairs is minimized.

For example, a = [1, 2, 3], b = [4, 5, 6].

One possible set of pairs: (1, 4), (2, 5), (3, 6).

Sum of differences = |1 - 4| + |2 - 5| + |3 - 6| = 9.

The sum of the differences has to be as small as possible.

Design an algorithm that does this in O(n log n) time.

So far, I have sorted both arrays, but I am unsure how to proceed from there.
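
Sorting is the right start: the standard approach is to sort both arrays and pair them index by index, since an exchange argument shows that un-crossing any two “crossed” pairs never increases the total. A minimal sketch (the function name is mine):

    def min_sum_of_differences(a, b):
        """Pair the i-th smallest of a with the i-th smallest of b.
        Sorting dominates, so the whole routine is O(n log n)."""
        return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b)))

For a = [1, 2, 3] and b = [4, 5, 6] this returns 9, matching the example above.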

Minimizing computation time

I’m having issues reducing the computation time of an NMinimize call. I’ve tried setting WorkingPrecision, applying N, and adding constraints to the variables in NMinimize, but the computation time is still far too long.

 ClearAll["`*"] $  Assumptions = {HeavisideTheta[0] == 1, HeavisideTheta[0.] == 1,    0 < j1 <= 5, 0 < j2 <= 5, 0 < j3 <= 5, 0 < j4 <= 5, 0 < t1,    t1 <= t2, t2 <= t3, t3 <= t4, t4 <= t5, t5 <= t6, t6 <= t7} eps = 0.05 wn = 2 Pi 5  func[q_?NumericQ] :=   Module[{vmax, sol, j, a, v, x, constraint1, constraint2, constraint3,     t1, t2, t3, t4, t5, t6, t7, yofs, yoft, j1, j2, j3, j4, temp500},    j[t_] =     j1 (UnitStep[t] - UnitStep[t - t1]) -      j2 (UnitStep[t - t2] - UnitStep[t - t3]) -      j3 (UnitStep[t - t4] - UnitStep[t - t5]) +      j4 (UnitStep[t - t6] - UnitStep[t - t7]);   a[t_] = Integrate[j[t], t];   v[t_] = Integrate[a[t], t];   x[t_] = Integrate[v[t], t];   constraint1 =     FullSimplify[     a[t7], {0 < j1 <= 5, 0 < j2 <= 5, 0 < j3 <= 5, 0 < j4 <= 5,       0 < t1, t1 <= t2, t2 <= t3, t3 <= t4, t4 <= t5, t5 <= t6,       t6 <= t7}];   constraint2 =     FullSimplify[     v[t7], {0 < j1 <= 5, 0 < j2 <= 5, 0 < j3 <= 5, 0 < j4 <= 5,       0 < t1, t1 <= t2, t2 <= t3, t3 <= t4, t4 <= t5, t5 <= t6,       t6 <= t7}];   constraint3 =     FullSimplify[     x[t7], {0 < j1 <= 5, 0 < j2 <= 5, 0 < j3 <= 5, 0 < j4 <= 5,       0 < t1, t1 <= t2, t2 <= t3, t3 <= t4, t4 <= t5, t5 <= t6,       t6 <= t7}];   vmax = (j1 t1 t1 + (t3 - t2) j1 t1)/2 + (t2 - t1) j1 t1;    sol = NMinimize[{t7, {t1 j1 - j2 (t3 - t2) == 0,        j1 t1 <= 0.7, -j3 (t5 - t4) >= -0.7,        j1 t1 (t2 + t3 - t1)/2 <= 0.14, constraint1 == 0,        constraint2 == 0, constraint3 == 0.05, 0 < j1 <= 5, 0 < j2 <= 5,        0 < j3 <= 5, 0 < j4 <= 5, 0 < t1, t1 <= t2, t2 <= t3, t3 <= t4,        t4 <= t5, t5 <= t6, t6 <= t7, t7 < 2}}, {j1, j2, j3, j4, t1,       t2, t3, t4, t5, t6, t7}];   {j1, j2, j3, j4, t1, t2, t3, t4, t5, t6,      t7} = {j1, j2, j3, j4, t1, t2, t3, t4, t5, t6, t7} /. sol[[2]];   Clear[j3, j4, t5, t6, t7, t4];   j3 = q;   j4 = q;   {t4, t5, t6, t7} = {t4, t5, t6, t7} /.      NMinimize[{t7, {constraint1 == 0, constraint2 == 0,          constraint3 == 0.05, -j3 (t5 - t4) >= -0.7, t5 > t4, t6 >= t5,          t7 > t6, t4 >= t3}}, {t4, t5, t6, t7}][[2]];    yofs = -LaplaceTransform[a[t], t, s]/(s^2 + 2 eps wn s + wn^2);   yoft = InverseLaplaceTransform[yofs, s, t];   NMinimize[{N[Re[FullSimplify[yoft, t > t7]]], t > t7, t < 1.5 t7},      t, WorkingPrecision -> 9][[1]] (*Column[{NMinimize[{Re[FullSimplify[yoft, t > t7]], t > t7}, t][[1]],    Plot[Re[yoft], {t, t7, 1.5 t7}]}*)]    ] (*func[3]   func[4] *) DiscretePlot[func[q], {q, 3, 5, 0.01}]   

When I switch the function’s output to the commented-out version and make 2 individual calls, the results are what I want (around 3 minutes to compute the 2 outputs), but DiscretePlot makes roughly 200 calls, which would take about 5 hours; that is too much. Is there a way to reduce this? Using N, adding constraints, or setting WorkingPrecision isn’t changing the computation time either.

N numbers, N/2 pairs. Minimizing the maximum sum of a pairing. Proving greedy algorithm

Say I have n numbers, where n is even. I want to pair the numbers such that the maximum sum over the pairs is minimized. For example, take -2, 3, 4, 5. The ideal pairing is (-2, 5), (3, 4), since its maximum pair sum is 7, and no pairing has a smaller maximum sum. The key to the algorithm is to sort the values from least to greatest, then pair the least with the greatest, and so on, until you reach the center of the ordering.

Example: 3, -2, 4, 5

Algorithm sorts the values: -2 , 3, 4, 5

Then pairs first with last: (-2, 5)

Then pairs the next available first and last: (3, 4)

Terminates, since no values are left to pair.
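
For reference, a direct Python sketch of this pairing (the function name is mine):

    def min_max_pair_sum(nums):
        """Sort ascending, then pair the least with the greatest,
        working inward toward the center of the ordering."""
        s = sorted(nums)
        pairs = [(s[i], s[-1 - i]) for i in range(len(s) // 2)]
        return pairs, max(lo + hi for lo, hi in pairs)

On [3, -2, 4, 5] it returns ([(-2, 5), (3, 4)], 7), as in the example.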

This is a greedy algorithm and I am trying to prove that it is always correct using a “greedy stays ahead” approach. My issue is that I am struggling to show that the algorithm’s maximum sum is always $\leq$ the optimal maximum sum. My intention was to suppose, for contradiction, that the optimal maximum sum is $<$ the algorithm’s maximum sum, but I’m not sure how to derive a contradiction. How would this proof go?

How to maximize f while minimizing g at the same time?

Lately I have been dealing with a problem that I don’t know the name of, which has made it hard to search for a proper solution.

The problem is as follows: let us assume that we have a set of elements A, and two functions f and g, where for any subset $B \subseteq A$ with $|B| < k$ (k is a constraint):

  1. f(B): estimates the gain obtained by the set B.
  2. g(B): estimates the loss obtained by the set B.

In our problem we have two strategies, S1 and S2, whose effectiveness depends on the circumstances of the environment:

  1. S1: selects a set B that maximizes the gain.
  2. S2: selects a set B that minimizes the loss.

My strategy is a hybrid strategy, which selects two sets B1 and B2 where $|B_1| + |B_2| < k$.

Note: given that there are several possible circumstances, sometimes S1 works more efficiently, and in some cases S2 works better.

Does anyone know what type of problem this is? Is there any documentation about it? Since it is an NP-hard problem, is there a way to find an approximation that is provably close to the optimal solution?

Algorithm to position samples minimizing error

We have a 1D function f(x). We would like to approximate this function with a polyline of n points. How can we find the x positions for these points so that the polyline approximates f(x) with low error (not necessarily optimally, but if it were optimal, even better)?

What algorithms are available to solve this problem? What are their trade-offs?

Example: the following two polylines have the same number of points, but the second one approximates the curve better.

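One simple option, sketched below in Python, is greedy knot insertion: start with the two endpoints and repeatedly add the sample point where the current polyline deviates most from f. The sampling grid, the vertical-deviation error measure, and the names are my own assumptions (f must accept a NumPy array); the method is a point-budget cousin of Ramer-Douglas-Peucker simplification, not optimal, but easy to implement and usually a good fit:

    import numpy as np

    def place_polyline_points(f, x0, x1, n, samples=1000):
        """Greedily choose n knot positions for a polyline approximating
        f on [x0, x1], always inserting the sample of maximum deviation."""
        xs = np.linspace(x0, x1, samples)
        ys = f(xs)
        knots = [0, samples - 1]        # start with just the endpoints
        while len(knots) < n:
            approx = np.interp(xs, xs[knots], ys[knots])
            knots.append(int(np.argmax(np.abs(ys - approx))))
            knots.sort()
        return xs[knots], ys[knots]

The trade-off: this runs in O(n * samples) and adapts to curvature, but like most greedy refinements it can be fooled; dynamic programming over the sample grid can get closer to optimal at substantially higher cost.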

Minimizing the distance using a subtree problem

We have a binary tree with n nodes and a number k, which is the number of nodes the chosen subtree must contain. What is the optimal algorithm to select a subtree of k nodes that minimizes the maximum distance of any node (of our tree) from its nearest ancestor belonging to the chosen subtree? By distance we mean the number of steps taken to reach that ancestor, where every edge has a cost of 1. The root of the binary tree must always belong to the chosen subtree.
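
Before hunting for the recurrence, it may help to pin the objective down in code. Here is a small Python checker; the representation and the names are my own assumptions (the tree as a child-adjacency dict, the chosen subtree as a set that contains the root). It computes the quantity to be minimized for a candidate choice, which is also handy for validating a DP against brute force on small trees:

    def max_distance_to_chosen_ancestor(children, chosen, root):
        """For every node, take the distance to its nearest chosen
        ancestor (0 if the node itself is chosen); return the maximum
        over all nodes.  Assumes root is in `chosen`, as required."""
        worst = 0
        stack = [(root, 0)]             # (node, distance if node not chosen)
        while stack:
            node, d = stack.pop()
            d = 0 if node in chosen else d
            worst = max(worst, d)
            for child in children.get(node, []):
                stack.append((child, d + 1))
        return worst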

I have figured out that I have to use dynamic programming to solve this problem, but I can’t find the recursive function. I also think that, to optimize the complexity, the recursion should start from the bottom of the tree and work its way up to the top.

Possible options for minimizing duplicate comments?

Are there any solutions that combine or group similar comments? Any plugins, third-party services, or anything else that takes community comments and compresses what has already been said?

Barring that, what would it take to create something that consolidates comments with the same message (under a YouTube video or a Reddit post, for example)?

Minimizing the iterative sum of pairs of numbers in a list

Given the tuple (list, value):

$$\left(\left[x_1, x_2, \cdots, x_n\right], y\right)$$

You may choose two adjacent values in the list to modify the tuple as:

$$\left(\left[x_1, x_2, \cdots, x_{i-1}, (x_i + x_{i+1}), x_{i+2}, \cdots, x_n\right], y + x_i + x_{i+1}\right)$$

Iterate until:

$$\left(\left[\sum_i x_i\right], y + z\right)$$

What is the optimal set of choices that minimizes $z$?

Intuitively, you never want to operate on the largest number in the list. But the largest number in the list changes as you add values together. In other words, the optimal solution is not necessarily built from optimal solutions to sub-problems.

A greedy solution would start by taking the smallest number in the list and adding it to the smaller of its two neighbors. This solution, while close, is not always equal to the value returned by a brute-force search. This shows that some locally optimal steps are not globally optimal, which may be connected to the fact that the largest element of the list changes as values are added together.
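
Since every element ends up merged into a single value, the order of merges forms a binary tree over contiguous intervals, so the minimum $z$ can be found with a standard interval DP, the same recurrence as the classic stone-merging problem. A brute-force sketch in Python (names mine):

    from functools import lru_cache

    def min_merge_total(xs):
        """cost(i, j) = minimum total added to y while collapsing
        xs[i..j] into a single value.  The last merge of any split
        always costs sum(xs[i..j]), hence:
            cost(i, j) = min over k of cost(i, k) + cost(k+1, j)
                         + sum(xs[i..j])
        O(n^3) time, O(n^2) states."""
        prefix = [0]
        for x in xs:
            prefix.append(prefix[-1] + x)

        @lru_cache(maxsize=None)
        def cost(i, j):
            if i == j:
                return 0
            span = prefix[j + 1] - prefix[i]
            return span + min(cost(i, k) + cost(k + 1, j)
                              for k in range(i, j))

        return cost(0, len(xs) - 1)

On [1, 2, 3] this returns 9 (merge 1 + 2 first, then 3 + 3), whereas merging 2 + 3 first would give 11. For large lists, I believe the Garsia-Wachs algorithm solves this adjacent-merge problem in O(n log n), though it is considerably trickier to implement.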