## Greedy algorithm for vertex cover

Given a graph $$G(V, E)$$, consider the following algorithm:

1. Let $$d$$ be the minimum vertex degree of the graph (ignore vertices with degree 0, so that $$d\geq 1$$)
2. Let $$v$$ be one of the vertices with degree equal to $$d$$
3. Remove all vertices adjacent to $$v$$ and add them to the proposed vertex cover
4. Repeat from step 1 until every remaining vertex of $$G$$ has degree $$0$$ (no edges left in the graph)
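Assuming the graph is given as a list of edges, the steps above can be sketched in Python (the function name and representation are mine):

```python
def min_degree_greedy_cover(edge_list):
    """Greedy cover: repeatedly take all neighbours of a minimum-degree vertex."""
    edges = {frozenset(e) for e in edge_list}
    cover = set()
    while edges:  # step 4: loop until no edges remain
        # degrees over the remaining edges (degree-0 vertices never appear here)
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        v = min(deg, key=deg.get)                            # steps 1-2
        neighbours = {u for e in edges if v in e for u in e} - {v}
        cover |= neighbours                                  # step 3
        edges = {e for e in edges if not (e & neighbours)}   # drop covered edges
    return cover
```

For example, on a star the minimum-degree vertex is a leaf, so the algorithm covers with the centre alone.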

At the end the removed vertices form a vertex cover of the given $$G(V, E)$$, but is it a minimum vertex cover? Is there an example where the algorithm does not find a minimum vertex cover?

## Why is the Greedy Algorithm for Online Scheduling on uniform machines NOT competitive?

Consider the Online Scheduling Problem with $$m$$ machines of different speeds.

Each instance $$\sigma$$ consists of a sequence of jobs with different sizes $$j_i\in\mathbb{R}^+$$. At each step in time the algorithm learns one new job of $$\sigma$$, which must immediately be assigned to one machine. The goal is to minimize the makespan (by the end of the online sequence $$\sigma$$).

I want to find an instance such that the Greedy Algorithm, which assigns each new job to the machine which finishes it first, is only $$\Theta(\log m)$$-competitive.
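For concreteness, here is a minimal sketch of that greedy rule (names mine): each machine $$i$$ has speed $$s_i$$, and a job of size $$j$$ placed on machine $$i$$ would finish at time $$(\mathrm{load}_i + j)/s_i$$.

```python
def greedy_makespan(speeds, jobs):
    """Assign each arriving job to the machine on which it would finish first."""
    loads = [0.0] * len(speeds)  # total job size assigned to each machine
    for j in jobs:
        # machine on which this job would complete earliest
        i = min(range(len(speeds)),
                key=lambda i: (loads[i] + j) / speeds[i])
        loads[i] += j
    return max(l / s for l, s in zip(loads, speeds))
```

Running this on candidate adversarial sequences is an easy way to measure the greedy makespan against the offline optimum.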

Any ideas? I can’t find any articles regarding my problem.

## Greedy Probabilistic Algorithm for Exact Three Cover

I have a probabilistic greedy algorithm for Exact Three Cover. I doubt it will work on all inputs in polynomial time, since the algorithm does not take $$2^n$$ time; I will assume that it works for some but not all inputs.

Our inputs are $$S$$ and $$B$$:

$$S$$ is just a set of integers

$$B$$ is a list of 3-element sets

## Algorithm

1. Input validation functions are used to ensure that the 3-element sets contain only elements of $$S$$.

2. A simple `if` statement makes sure that $$|S| \bmod 3 = 0$$

3. I treat the sets like lists in my algorithm, so I sort each set from smallest to largest magnitude (eg {3,2,1} is sorted to {1,2,3})

4. I will also sort my list of sets $$B$$ into an ordering where all the {1,2,x}s sit next to the other {1,2,x}s. (eg, sorted list {1,2,3},{1,2,4},{4,5,6},{4,5,9},{7,6,5})

5. I will also generate a new list of sets containing the elements whose {1,2,x} occurs only one time in $$B$$.

6. Use brute force on small inputs and on both sides of the list $$B$$, up to $$2|S|/3$$ sets. (eg, use brute force to check for exact covers on the left and right sides of the list: B[0 : len(S)//3 * 2] and reversed(B)[0 : len(S)//3 * 2])
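Steps 1 through 4 can be condensed into one small validation-and-sorting helper (a sketch, representing the triples as lists):

```python
def normalize(S, B):
    """Validate S and B, sort each triple, and sort B lexicographically."""
    assert len(S) % 3 == 0                                   # step 2
    assert all(len(b) == 3 and set(b) <= set(S) for b in B)  # step 1
    # step 3: sort each triple; step 4: lexicographic order on B,
    # which places all {1,2,x} triples next to each other
    return sorted(sorted(b) for b in B)
```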

## Seed the PRNG with a Quantum Random Number Generator

```python
# Seed the PRNG with a quantum random number generator
for a in range(0, len(B)):
    o = quantumrandom.randint(0, len(B))
    random.seed(int(o))

# I will create a function to shuffle B later
def shuff(B, n):
    for i in range(n - 1, 0, -1):
        random.seed()
        j = random.randint(0, i)  # j in [0, i] keeps the swap in range
        B[i], B[j] = B[j], B[i]
```

## Define the number of times while loop will run

```python
n = len(s)

# This is a large constant, so that
# no instance is impractical to solve.
while_loop_steps = n * 241 * ((n * 241) - 1) * ((n * 241) - 2) // 6
```

## While loop

```python
stop = 0
Potential_solution = []
opps = 0
failed_lists = 0
ss = s

while stop <= while_loop_steps:

    opps = opps + 1
    stop = stop + 1

    shuff(B, len(B))

    if len(Potential_solution) == len(ss) // 3:
        # Break if an exact three cover is found.
        print("YES")
        failed_lists = failed_lists + 1
        break

    # opps helps me see if I missed a correct list
    if opps > len(B):
        if failed_lists < 1:
            s = set()
            opps = 0

    # Keep B[0] and append it to the end of the
    # list, then del B[0] to push the list along.
    B.append(B[0])
    del B[0]

    Potential_solution = []
    s = set()

    for l in B:
        if not any(v in s for v in l):
            Potential_solution.append(l)
            s.update(l)
```

Run a second while loop for the new list if step 5 meets the condition of there being only ONE {1,2,x} (eg, the {7,6,5} shown in step 4).

## Two Questions

How expensive would my algorithm be as an approximation for Three Cover?

And, what is the probability that my algorithm fails to find an Exact Three Cover when one exists?

## Greedy sequential/parallel task scheduling

We have N tasks that need to be scheduled for processing. Each task consists of two parts that need to be executed in order. The first one is guarded by a mutex, and therefore only one task can be executing this part at a time. The second part has no such constraint, and any number of tasks can be executing it at the same time. For task $$i$$ we know how much time it needs to spend in each part, namely $$m_i$$ for the guarded part and $$a_i$$ for the part that can be executed in parallel.

The problem is to find a permutation of the tasks such that the time needed to execute all of them is minimized.

My intuition says that this can be solved with a greedy algorithm, by scheduling the tasks in descending $$a_i$$ order.

For example given the tasks with:
$$m_1 = 3, a_1 = 9$$
$$m_2 = 2, a_2 = 7$$
$$m_3 = 6, a_3 = 10$$

The optimal solution is the permutation 3, 1, 2.

However, I have trouble proving that the greedy solution is optimal. Any ideas on how to do that?
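One way to probe the conjecture before proving it is to compute the makespan of a permutation directly and compare the greedy order against brute force (a sketch; the function names are mine). The guarded parts execute back to back, and task $$i$$'s parallel part finishes $$a_i$$ after its guarded part ends:

```python
from itertools import permutations

def makespan(tasks):
    """tasks: list of (m_i, a_i) pairs, in execution order."""
    mutex_free = 0  # time at which the mutex becomes available
    finish = 0
    for m, a in tasks:
        mutex_free += m                       # guarded parts run one at a time
        finish = max(finish, mutex_free + a)  # parallel parts overlap freely
    return finish

def brute_force_best(tasks):
    """Optimal makespan over all permutations (exponential, for checking only)."""
    return min(makespan(list(p)) for p in permutations(tasks))
```

On the example above, sorting by descending $$a_i$$ gives the order 3, 1, 2 with makespan 18, which matches the brute-force optimum.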

## Proof of a greedy algorithm used for a variation of bin-packing problem

We are given an array of weights $$W$$ (all the weights are positive integers) and we need to put the weights inside bins. Each bin can hold a maximum of Max_val, and $$W_i \le$$ Max_val for all $$0 \le i <$$ W.size(). The variation is that the ordering of the weights must not be changed (i.e., $$W_i$$ should be placed in a bin before $$W_j$$ is inserted, for all $$i < j$$).

For this problem statement, intuitively we can see that a greedy approach of filling a bin until its maximum value is reached, and creating a new bin for further weights, will produce the minimum number of bins. I am unable to come up with a formal proof that the greedy solution is optimal. Any hints or guidelines would be great!

PS: $$W_i$$ represents the $$i$$th element of the array $$W$$.
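Since the order is fixed, the greedy described above is essentially the next-fit rule; a minimal sketch (names mine):

```python
def min_bins_ordered(weights, max_val):
    """Fill the current bin until the next weight would overflow, then open a new one."""
    bins, current = 0, max_val  # sentinel load so the first weight opens a bin
    for w in weights:
        if current + w > max_val:
            bins += 1           # w does not fit: start a new bin holding w
            current = w
        else:
            current += w        # w fits in the current bin
    return bins
```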

## Greedy algorithm to divide objects into the lowest number of groups of a maximum size

I have $$n$$ objects of independent sizes, and need to group them so that the sum of the sizes in each group is smaller than a given maximum size, and the number of groups is the smallest possible.

I already tried a brute-force algorithm that orders the objects in all possible permutations and then, based on their sizes and the maximum group size, divides every permutation into groups following that order. This algorithm is very slow, and therefore I want to find a greedy one, but being new to the topic I can't think of a good way to start.

My idea would be to start building the first group from the largest object, and then adding more so that every time I add a new one, the difference between the max size and the size of all the objects in the group is minimized. Is this a good starting point?
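That idea resembles the classic best-fit decreasing heuristic; here is one way to make it concrete (a sketch, not guaranteed to be optimal):

```python
def best_fit_decreasing(sizes, max_size):
    """Place items largest-first, each into the group whose gap it fills most tightly."""
    capacities = []  # remaining capacity of each open group
    for s in sorted(sizes, reverse=True):
        fitting = [i for i, c in enumerate(capacities) if c >= s]
        if fitting:
            # group whose leftover space after adding s is smallest
            i = min(fitting, key=lambda i: capacities[i] - s)
            capacities[i] -= s
        else:
            capacities.append(max_size - s)  # no group fits: open a new one
    return len(capacities)
```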

## Variant of greedy algorithm for vertex cover

Does the following approximation algorithm for vertex cover also have an approximation ratio of 2? Why? Why not?

Input: $$G = (V,E)$$

1. Set $$C \gets \emptyset$$ and $$E' \gets E$$.
2. while $$E' \neq \emptyset$$ do:
• Pick any edge $$(u,v) \in E'$$.
• $$C \gets C \cup \{u\}$$.
• Remove from $$E'$$ all edges incident to $$u$$.
3. return $$C$$.
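A direct transcription of the variant (a sketch), handy for experimenting with small graphs such as stars, where the chosen endpoint matters a lot:

```python
def one_endpoint_cover(edges):
    """Variant: repeatedly pick an edge and add only its first endpoint."""
    cover = set()
    remaining = [tuple(e) for e in edges]
    while remaining:
        u, v = remaining[0]  # pick any remaining edge
        cover.add(u)         # add just one endpoint
        remaining = [e for e in remaining if u not in e]
    return cover
```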

## N numbers, N/2 pairs. Minimizing the maximum sum of a pairing. Proving greedy algorithm

So say I have $$n$$ numbers, where $$n$$ is even. I want to pair the numbers such that the maximum sum over the pairs is minimized. For example, take -2, 3, 4, 5. The ideal pairing is (-2, 5), (3, 4), since its maximum sum is 7, and that is the smallest possible maximum sum over all pairings. The key to the algorithm is to sort the values from least to greatest, then pair the least with the greatest, and so on, until you reach the center of the ordering.

Example: 3, -2, 4, 5

Algorithm sorts the values: -2 , 3, 4, 5

Then pairs first with last: (-2, 5)

Then pairs the next available first and last: (3, 4)

Terminates since no pairs left.

This is a greedy algorithm and I am trying to prove that it is always correct using a “greedy stays ahead” approach. My issue is that I am struggling to show that the algorithm’s maximum sum is always $$\leq$$ the optimal maximum sum. My intention was to suppose, for contradiction, that the optimal maximum sum is $$<$$ the algorithm’s maximum sum, but I’m not sure how to derive a contradiction. How would this proof go?
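For reference, the pairing rule itself is only a few lines (a sketch):

```python
def min_max_pairing(nums):
    """Sort, then pair the smallest remaining with the largest remaining."""
    xs = sorted(nums)
    pairs = [(xs[i], xs[-1 - i]) for i in range(len(xs) // 2)]
    return pairs, max(a + b for a, b in pairs)
```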

## I am trying to find an efficient algorithm that takes the prices $$p_1, \dots, p_n$$ and the function values $$f(1), \dots, f(x)$$ and determines the best way to sell $$x$$ shares by day $$n$$.

Problem set-up.

I’m given the market prices for this stock over the next $$n$$ days; let these be $$p_1, p_2, \dots, p_n$$. I also know a function $$f(y)$$ which predicts the effect of selling $$y$$ shares on a single day: if you sell $$y$$ shares, the price of the stock permanently decreases by $$f(y)$$ from that day onwards.

So, if I sell $$y_1$$ shares on day 1, the price per share for the transaction is $$p_1 - f(y_1)$$ and the corresponding profit for that day is $$y_1 \cdot (p_1 - f(y_1))$$. If I then sell $$y_2$$ shares on day 2, the price per share for that transaction is $$p_2 - f(y_1) - f(y_2)$$. Note that the decreases in price accumulate.

Assume that $$p_1, \dots, p_n$$ decreases monotonically and $$f(1), \dots, f(x)$$ increases monotonically (prices are generally going down, and they go down even more if I sell more shares). I can also assume that no matter how I sell my stocks, it is impossible to reduce the price of the stock to 0 or to negative numbers (meaning $$p_i$$ is larger than the cumulative decrease in price from any possible sequence of sales in the first $$i$$ days). In other words, the algorithm finds integers $$y_1, \dots, y_n$$, where $$y_i$$ corresponds to the number of shares sold on day $$i$$, such that:

• All $$x$$ shares are sold: $$\sum_{i=1}^{n} y_i = x$$.
• The total profit is maximized.
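As a starting point, the profit of any candidate schedule $$y_1, \dots, y_n$$ can be evaluated directly from the accumulation rule above (a sketch; here $$f$$ is given as a list with f[0] = 0):

```python
def total_profit(p, f, y):
    """p[i]: base price on day i; f[k]: permanent drop from selling k shares;
    y[i]: shares sold on day i (0 means no sale that day)."""
    drop = 0
    profit = 0
    for i in range(len(p)):
        if y[i]:
            drop += f[y[i]]                  # the drop applies from this day on
            profit += y[i] * (p[i] - drop)   # includes this day's own drop
    return profit
```

This makes it easy to compare schedules, e.g. selling one share on each of two days versus both on day one.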

## What do you call a greedy algorithm that solves a combinatorial problem by optimizing the best k>1 choices altogether?

Suppose you have a problem whose goal is to find the permutation of some set $$S$$ given in input that minimizes an objective function $$f$$ (for example the Traveling Salesman problem).

A trivial algorithm $$E(S)$$ that finds the exact solution enumerates all the permutations and outputs the one that minimizes $$f$$. Its time complexity is $$O(n!)$$ where $$n$$ is the size of $$S$$.

A trivial greedy algorithm $$G(S)$$ that finds an approximation of the solution is:

```
out[0] = select a good starting item from S according to some heuristic h_1
S = S - {out[0]}
for i = 1 to n-1 do:
    out[i] = select the next best element using some heuristic h_2
    S = S - {out[i]}
return out
```

Where $$h_1$$ and $$h_2$$ are two heuristics. Its complexity in time is $$O(n^2)$$ assuming that $$h_2$$ runs in constant time.

Sometimes I mix the two techniques (enumeration and greedy) by selecting at each step the best $$k$$ items (instead of the best one) and enumerating all their permutations to find the one that locally minimizes $$f$$. Then I choose the best $$k$$ items among the remaining $$n-k$$ items and so on.

Here is the pseudocode (assuming $$n$$ is a multiple of $$k$$):

```
for i = 0 to n/k - 1 do:
    X = select the best k items of S according to some heuristic h
    S = S - X
    out[i*k ... (i+1)*k - 1] = E(X)
return out
```

Where $$E(X)$$ is the exact algorithm applied to a subset $$X \subset S$$ rather than to the whole $$S$$. This last algorithm finds an approximate solution and has a time complexity of $$O(\frac{n}{k}(n \log k + k!))$$, assuming that $$h$$ can be computed in constant time. This complexity can be comparable to $$O(n^2)$$ if $$k$$ is small, although according to my experience the results can be way better than with the plain greedy approach.
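As a concrete toy instance of the scheme (the problem and heuristics here are my own assumptions: the objective is the sum of adjacent differences in the output order, $$h_1$$ picks the smallest item, and $$h$$ picks the $$k$$ items nearest the last placed one):

```python
from itertools import permutations

def adjacent_cost(seq):
    """Objective f: sum of absolute differences between neighbours."""
    return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

def block_greedy(items, k):
    remaining = list(items)
    out = [min(remaining)]  # h_1: start from the smallest item
    remaining.remove(out[0])
    while remaining:
        # h: the k remaining items closest to the last placed item
        block = sorted(remaining, key=lambda x: abs(x - out[-1]))[:k]
        for x in block:
            remaining.remove(x)
        # E(X): enumerate all orderings of the block, keep the cheapest
        best = min(permutations(block),
                   key=lambda p: adjacent_cost(out + list(p)))
        out.extend(best)
    return out
```

The slicing `[:k]` also handles the final block when $$n$$ is not a multiple of $$k$$.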

I don’t think I invented this kind of optimization technique: do you know its name? Can you please include some theoretical references?

I know for sure it is not beam search, because beam search never mixes the best $$k$$ solutions found at each step.

Thank you.