Proof of a greedy algorithm used for a variation of the bin-packing problem

We are given an array of weights W (all the weights are positive integers) and we need to put the weights into bins. Each bin can hold a maximum of Max_val (Wi <= Max_val for 0 <= i < W.size()). The variation is that the ordering of the weights must not be changed (i.e., Wi must be placed in a bin before Wj for all i < j).

For this problem, intuitively we can see that a greedy approach of filling each bin until its maximum value is reached, and only then creating a new bin for further weights, will produce the minimum number of bins. However, I am unable to come up with a formal proof that the greedy solution is optimal. Any hints or guidelines would be great!

PS: Wi represents the ith element of the array W.
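For reference, the greedy in question can be sketched as follows (a minimal sketch, with `max_val` as the bin capacity):

```python
def greedy_bins(weights, max_val):
    """Fill each bin as far as possible before opening a new one,
    keeping the original order of the weights."""
    bins = []
    current, current_sum = [], 0
    for w in weights:
        if current_sum + w > max_val:   # w no longer fits: open a new bin
            bins.append(current)
            current, current_sum = [], 0
        current.append(w)
        current_sum += w
    if current:
        bins.append(current)
    return bins

# greedy_bins([2, 3, 4, 1], 5) -> [[2, 3], [4, 1]]
```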

Greedy algorithm to divide objects into the lowest number of groups of a maximum size

I have n objects of independent sizes, and need to group them so that the sum of the sizes in each group is smaller than a given maximum size, and the number of groups is as small as possible.

I already tried a brute-force algorithm that orders the objects in all possible permutations and then, based on their sizes and the maximum group size, divides every permutation into groups in order. This algorithm is very slow, so I want to find a greedy one, but being new to the topic I can’t think of a good way to start.

My idea would be to start building the first group from the largest object, and then keep adding objects so that every time I add a new one, the difference between the maximum size and the total size of the objects in the group is minimized. Is this a good starting point?
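A sketch of that idea, for concreteness. Note this is only the proposed heuristic, with no optimality guarantee: without an ordering constraint this is the classical bin-packing problem, which is NP-hard, so no simple greedy is optimal on all inputs.

```python
def largest_first_groups(sizes, max_size):
    """Sketch of the proposed heuristic: start each group with the largest
    remaining object, then greedily add the object that leaves the
    smallest slack (i.e., the largest object that still fits)."""
    remaining = sorted(sizes, reverse=True)
    groups = []
    while remaining:
        group = [remaining.pop(0)]          # largest remaining object
        room = max_size - group[0]
        while True:
            # remaining objects that still fit in this group
            candidates = [s for s in remaining if s <= room]
            if not candidates:
                break
            best = max(candidates)          # largest fitting = smallest slack
            remaining.remove(best)
            group.append(best)
            room -= best
        groups.append(group)
    return groups

# largest_first_groups([5, 4, 3, 2, 1], 7) -> [[5, 2], [4, 3], [1]]
```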

N numbers, N/2 pairs. Minimizing the maximum sum of a pairing. Proving greedy algorithm

So say I have n numbers, where n is even. I want to pair the numbers so that the maximum sum over all pairs is minimized. For example, given -2, 3, 4, 5, the ideal pairing is (-2, 5), (3, 4): its maximum sum is 7, and that is the smallest possible maximum sum over any pairing. The key to the algorithm is to sort the values from least to greatest, then pair the least with the greatest, and so on, until you reach the center of the ordering.

Example: 3, -2, 4, 5

Algorithm sorts the values: -2 , 3, 4, 5

Then pairs first with last: (-2, 5)

Then pairs the next available first and last: (3, 4)

Terminates since no pairs left.
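The steps above can be sketched as:

```python
def min_max_pairing(nums):
    """Sort, then pair smallest with largest, working inward.
    Returns the pairs and the maximum pair sum."""
    s = sorted(nums)
    pairs = [(s[i], s[-1 - i]) for i in range(len(s) // 2)]
    max_sum = max(a + b for a, b in pairs)
    return pairs, max_sum

# min_max_pairing([3, -2, 4, 5]) -> ([(-2, 5), (3, 4)], 7)
```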

This is a greedy algorithm and I am trying to prove that it is always correct using a “greedy stays ahead” approach. My issue is that I am struggling to show that the algorithm’s maximum sum is always $\leq$ the optimal maximum sum. My intention was to suppose, for contradiction, that the optimal maximum sum is $<$ the algorithm’s maximum sum, but I’m not sure how to derive a contradiction. How would this proof go?

Selling Stocks to Maximize Profit Algorithm – Greedy

I am trying to find an efficient algorithm that takes the prices p1, …, pn and the function values f(1), …, f(x), and determines the best way to sell x shares by day n.

Problem set-up.

I’m given the market prices for this stock over the next n days; call these p1, p2, …, pn. I also know a function f(y) which predicts the effect of selling y shares on a single day: selling y shares permanently decreases the price of the stock by f(y) from that day onwards.

So, if I sell y1 shares on day 1, the price per share for the transaction is p1 − f(y1) and the corresponding profit for that day is y1 · (p1 − f(y1)). If I then sell y2 shares on day 2, the price per share for that transaction is p2 − f(y1) − f(y2). Note that the decreases in price accumulate.

Assume that p1, …, pn decreases monotonically and f(1), …, f(x) increases monotonically (prices are generally going down, and they go down even more if I sell more shares). I can also assume that no matter how I sell my stocks, it is impossible to reduce the price of the stock to 0 or below (that is, pi is larger than the cumulative decrease in price from any possible sequence of sales in the first i days). In other words, the algorithm finds integers y1, …, yn, where yi is the number of shares sold on day i, such that:

  • All x shares are sold: y1 + y2 + … + yn = x.
  • The total profit is maximized.
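A small helper to evaluate the total profit of any candidate schedule under this model (a sketch; days with zero sales are skipped so f(0) never needs to be defined):

```python
def total_profit(prices, f, sales):
    """Total profit of a sale schedule under the price-impact model above:
    selling y shares on a day permanently lowers the price by f(y)."""
    drop = 0       # cumulative, permanent price decrease so far
    profit = 0
    for p, y in zip(prices, sales):
        if y > 0:
            drop += f(y)              # today's sale also lowers today's price
            profit += y * (p - drop)
    return profit

# With prices [10, 9], f(y) = y, selling one share each day:
# day 1 earns 1*(10-1) = 9, day 2 earns 1*(9-2) = 7, total 16.
```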

I know a bit about the stereotypical “buy-sell stocks” algorithm, but this seems like a different beast. I think this might be a greedy algorithm over a graph. Anything will help 🙂

What do you call a greedy algorithm that solves a combinatorial problem by optimizing the best k>1 choices altogether?

Suppose you have a problem whose goal is to find the permutation of some set $S$ given as input that minimizes an objective function $f$ (for example, the Traveling Salesman Problem).

A trivial algorithm $E(S)$ that finds the exact solution enumerates all the permutations and outputs the one that minimizes $f$. Its time complexity is $O(n!)$, where $n$ is the size of $S$.

A trivial greedy algorithm $ G(S)$ that finds an approximation of the solution is:

    out[0] = select a good starting item from S according to some heuristic h_1
    S = S - {out[0]}
    for i = 1 to n-1 do:
        out[i] = select the next best element using some heuristic h_2
        S = S - {out[i]}
    return out

Where $h_1$ and $h_2$ are two heuristics. Its time complexity is $O(n^2)$, assuming that $h_2$ runs in constant time.

Sometimes I mix the two techniques (enumeration and greedy) by selecting at each step the best $k$ items (instead of the best one) and enumerating all their permutations to find the one that locally minimizes $f$. Then I choose the best $k$ items among the remaining $n-k$ items and so on.

Here is the pseudocode (assuming $n$ is a multiple of $k$):

    for i = 0 to n/k - 1 do:
        X = select the best k items of S according to some heuristic h
        S = S - X
        out[i*k ... (i+1)*k - 1] = E(X)
    return out

Where $E(X)$ is the exact algorithm applied to a subset $X \subset S$ rather than to the whole $S$. This last algorithm finds an approximate solution and has a time complexity of $O(\frac{n}{k}(n \log k + k!))$, assuming that $h$ can be computed in constant time. This complexity is comparable to $O(n^2)$ if $k$ is small, although in my experience the results can be much better than with the plain greedy approach.
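A sketch of the batched technique in plain Python. The interface is my assumption: `h` scores items (lower is better), and `f` is assumed to be defined on partial output sequences as well as complete ones.

```python
from itertools import permutations

def batched_greedy(S, k, h, f):
    """Repeatedly take the k best items by heuristic h, order them by
    exhaustive enumeration to minimize objective f on the partial
    sequence, and append that ordering to the output."""
    S = list(S)
    out = []
    while S:
        batch = sorted(S, key=h)[:k]      # best k remaining items
        for item in batch:
            S.remove(item)
        # enumerate all k! orderings of the batch, keep the locally best
        best = min(permutations(batch), key=lambda p: f(out + list(p)))
        out.extend(best)
    return out
```

For example, with `h` the identity and `f` the total variation of the sequence, `batched_greedy([3, 1, 2, 4], 2, ...)` orders each batch so that the running sequence stays as smooth as possible.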

I don’t think I invented this kind of optimization technique: do you know its name? Can you please include some theoretical references?

I know for sure it is not beam search, because beam search never mixes the best $k$ solutions found at each step.

Thank you.

Finding a Non-Adaptive Greedy Algorithm

I am trying to find a non-adaptive greedy algorithm for the following problem:

The input is a set of cards and an integer C. Each card has a digit d and a cost c. A solution is an ordered subset of the cards. A solution is valid if the sum of the costs of its cards is at most C. Consider the string D formed by concatenating the digits of the solution, with zeros appended at the end so that every solution has n characters. The value of a solution, which we want to maximize, is the integer value (equivalently, the lexicographical order) of D.

For example, here is an input I = {<2,5>, <3,4>, <3,3>, …, <3,3>, <8,8>, <9,9>, <9,10>} and C = 14. The following are the strings D and total costs C’ of various valid solutions, sorted from worst to best:

D = 833000000, C’ = 8 + 3 + 3 = 14, S = {<8,8>, <3,3>, <3,3>}

D = 930000000, C’ = 9 + 3 = 12, S = {<9,9>, <3,3>}

D = 930000000, C’ = 9 + 4 = 13, S = {<9,9>, <3,4>}

D = 930000000, C’ = 10 + 4 = 14, S = {<9,10>, <3,4>}

I was thinking of an algorithm that first sorts the input cards in descending order of digit (and then by cost), then repeatedly chooses the card with the largest digit as long as adding its cost to the running total C’ doesn’t exceed C. What do you think?
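A sketch of that proposal, with cards as `(digit, cost)` pairs. Breaking digit ties by ascending cost is my assumption, and this greedy is not guaranteed optimal; it is just the idea above made concrete.

```python
def greedy_cards(cards, C):
    """Sort cards by digit (descending), breaking ties by cost
    (ascending), then take each card whose cost still fits in budget C."""
    ordered = sorted(cards, key=lambda dc: (-dc[0], dc[1]))
    solution, total = [], 0
    for digit, cost in ordered:
        if total + cost <= C:
            solution.append((digit, cost))
            total += cost
    return solution, total
```

On the example input above (with the `…` cards omitted), this picks <9,9> and then <3,3> for D = 930000000 at cost 12, which matches one of the best listed solutions, though that alone does not prove correctness in general.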

Proving a Greedy Algorithm is Incorrect by Providing a Counterexample and Coming up with Another Correct Algorithm

I want to come up with a counterexample that proves the following greedy algorithm doesn’t work, and then give an alternative, correct algorithm. The problem: I have an array of numbers and I want to reach the last element of the array in the minimum number of steps. At each step, I can move to any element with the same value, move forward one position, or move backward one position. The greedy criterion is to always move as far to the right as possible. For example, with the array {1,2,3,4,1,5}, the algorithm starts at the first 1, moves to the 1 before the 5, then moves to 5, for a total of 2 steps.

An example of an input instance that proves the given greedy algorithm wrong is {1,2,1,3,2}: the greedy crosses the array in 3 steps, whereas the optimal solution moves from the first 1 to the adjacent 2 and then jumps to the last 2, in two steps. Now, what is a correct algorithm for solving this problem?
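Since the allowed moves form the edges of an unweighted graph on the indices, one correct approach is breadth-first search. A sketch (popping each value's index list after its first expansion keeps the search linear-time, since every value group is scanned at most once):

```python
from collections import defaultdict, deque

def min_steps(arr):
    """Minimum number of steps from index 0 to the last index, where one
    step moves to i-1, i+1, or any index holding the same value."""
    n = len(arr)
    same = defaultdict(list)          # value -> all indices with that value
    for i, v in enumerate(arr):
        same[v].append(i)
    dist = [None] * n
    dist[0] = 0
    q = deque([0])
    while q:
        i = q.popleft()
        if i == n - 1:
            return dist[i]
        # expand each value group only once, then discard it
        neighbors = same.pop(arr[i], []) + [i - 1, i + 1]
        for j in neighbors:
            if 0 <= j < n and dist[j] is None:
                dist[j] = dist[i] + 1
                q.append(j)
    return dist[n - 1]
```

On the counterexample {1,2,1,3,2} this returns 2, confirming that the greedy's 3 steps are suboptimal.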

Proof by contradiction for greedy algorithms

I’m having some difficulty understanding, and being convinced by, the technique used to prove that a greedy algorithm is optimal for the fractional knapsack problem. A proof by contradiction is used. I’ve never been great at proofs, and maybe this will help me get on track to becoming more comfortable with them.

Consider section 3.2, Analysis. The algorithm is very intuitive: choose items in order of highest value/weight ratio, and only the last item in the knapsack may need to be fractional. I don’t have any issues with that. Formally proving that this algorithm is the best thing out there is what I have an issue with.
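For concreteness, here is a sketch of the greedy being analyzed, with items as `(value, weight)` pairs:

```python
def fractional_knapsack(items, W):
    """Take items in decreasing value/weight ratio; only the last item
    taken may be fractional. items: list of (value, weight) pairs."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value, remaining = 0.0, W
    for value, weight in items:
        take = min(weight, remaining)       # weight of this item we take
        total_value += value * take / weight
        remaining -= take
        if remaining == 0:
            break
    return total_value

# Classic instance: items (60,10), (100,20), (120,30) with W = 50
# gives 60 + 100 + 120*(20/30) = 240.
```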

(Note: the proof is available in more detail.)

The professor shows that our algorithm chooses items in amounts p_1, …, p_n: ALG = (p_1, p_2, …, p_n). So we choose p_1 much of item 1, p_2 much of item 2, all the way to p_n. Only the last item in the knapsack can be fractional. All good.

Now we assume that there exists a better algorithm, OPT = (q_1, q_2, …, q_n). We say OPT makes the same choices as ALG up to a certain point. At some point q_i, OPT makes a different choice than ALG. Why do the notes say q_i cannot be p_1? Surely OPT could make a different choice immediately. Anyway, that aside, I agree that OPT, like ALG, will put as much of each item as it can into the knapsack up to i. After i, OPT starts doing something else. The professor doesn’t talk about this, but maybe OPT changes the order of the items it puts into the knapsack; that wouldn’t be a big deal as long as it chose all the same items and put in the same amount of the last item ALG added.

However, changing the order of the items chosen wouldn’t make OPT better, just the same, right? (The professor doesn’t discuss this, at least not explicitly.) For OPT to be better, there must be some item i it takes less of than ALG did, and similarly some item j it takes more of than ALG did, to keep the capacity the same. Another aside: haven’t we made the implicit assumption that we must use all of the capacity W to be optimal? Is this self-evident or something? Intuitively I can see why, but formally I cannot.

Anyway, we remove a small amount from q_j and give it to q_i, hence those equations involving epsilon: removing e units of weight from item j frees room for e units of weight of item i, which gains value because item i has the higher value/weight ratio. Fine.

Now why on earth is this called OPT’? Isn’t this still OPT? I’m confused here. Why did we bring OPT’ in? Why do we need yet another modified version of ALG? I thought OPT (not OPT’) is where we modified q_i and q_j by changing how much of each we picked.

Thanks all!

Proving correctness and optimality of a greedy algorithm

Here is a (slightly abridged) problem from Kleinberg and Tardos:

Consider a complete balanced binary tree with $n$ leaves, where $n$ is a power of two. Each edge $e$ of the tree has an associated length $\ell_e$, which is a positive number. The distance from the root to a given leaf is the sum of the lengths of all the edges on the path from the root to the leaf.

Now, if all leaves do not have the same distance from the root, then the signals we send starting from the root will not reach the leaves at the same time, and this is a big problem. We want the leaves to be completely synchronized, and all to receive the signal at the same time. To make this happen, we will have to increase the lengths of certain edges, so that all root-to-leaf paths have the same length (we’re not able to shrink edge lengths). If we achieve this, then we say the tree has zero skew.

Give an algorithm that increases the lengths of certain edges so that the resulting tree has zero skew and the total edge length is as small as possible.

This problem was in the “Greedy Algorithms” chapter.

I know the solution to this problem is as follows:

Let the subtrees below the root be $L$ and $R$. If the height of $L$ is greater than that of $R$, increase the length of the edge from the root to $R$ by height(L) − height(R). Conversely, if the height of $R$ is greater than that of $L$, increase the length of the edge from the root to $L$ by height(R) − height(L). We repeat this procedure recursively on $L$ and $R$.
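A sketch of this recursion, using a minimal tuple encoding of my own (not from the book): a tree is either `None` (a leaf) or a tuple `(l_len, l_sub, r_len, r_sub)` giving the lengths of the two child edges and the subtrees below them.

```python
def zero_skew(tree):
    """Return (new_tree, height), where new_tree has had edges lengthened
    so that every root-to-leaf path equals the common height."""
    if tree is None:                    # a leaf: height 0
        return None, 0
    l_len, l_sub, r_len, r_sub = tree
    new_l, hl = zero_skew(l_sub)        # fix each subtree recursively
    new_r, hr = zero_skew(r_sub)
    hl += l_len                         # total height through each child
    hr += r_len
    # lengthen the edge into the shorter side so both sides match
    if hl < hr:
        l_len += hr - hl
    elif hr < hl:
        r_len += hl - hr
    return (l_len, new_l, r_len, new_r), max(hl, hr)

# zero_skew((2, None, 1, None)) -> ((2, None, 2, None), 2)
```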

I am having trouble proving the following:

  • 1) The resulting tree upon termination of this algorithm is a zero-skew tree (the distance from the root to every leaf is the same).

  • 2) The sum of the edge lengths is minimized by this algorithm.

I have tried to prove these facts in many ways, such as induction and direct proof, but I am having a lot of difficulty doing so.

I would greatly appreciate some help in solving this problem.