A problem with the greedy approach to finding a maximal matching

Suppose I have an undirected graph with four vertices $a,b,c,d$ connected in a simple path from $a$ to $d$, i.e. the edge set $\{(a,b), (b,c), (c,d)\}$. Then I have seen the following proposed as a greedy algorithm to find a maximal matching (page 2, middle of the page):

    Maximal-Matching(G, V, E):
        M = []
        while (more edges can be added)
            select an edge e which does not have any vertex in common with the edges in M
            M.append(e)
        end while
        return M

It seems that the result of this algorithm depends entirely on the order in which edges are chosen. For instance, in my example, if you choose edge $(b,c)$ first, then you end up with a matching that consists only of $(b,c)$.

Whereas if you choose $ (a,b)$ as your starting edge, then the next edge chosen will be $ (c,d)$ and you have a matching of cardinality 2.
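
For concreteness, here is a small Python sketch of the greedy procedure above (my own illustration; the function name and edge representation are not from the linked notes). Run on the path graph, it produces matchings of different sizes depending on the edge order, exactly as described.

    def greedy_maximal_matching(edges):
        # Greedily add any edge that shares no endpoint with the matching so far.
        matching = []
        matched = set()   # vertices already covered by chosen edges
        for u, v in edges:
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    # Path a - b - c - d with two different edge orders:
    print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))  # [('b', 'c')]
    print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))  # [('a', 'b'), ('c', 'd')]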

Am I missing something, or is this wrong? I have also seen this described as an algorithm for finding a maximal matching in the context of the vertex cover approximation algorithm, which builds a vertex cover from the edges of a maximal matching. Any insights appreciated.

How does the ordering paradigm differ from the subset paradigm in the greedy method?

What is the difference between the subset paradigm and the ordering paradigm of the greedy method?

More precisely, I do not understand how the ordering paradigm differs from the subset paradigm. Please differentiate the two approaches using their control abstractions, and elaborate with examples.

I have searched on the internet but did not find anything that states how the ordering paradigm differs from the subset paradigm. I would be very thankful for any help.

Hitting Set Problem with non-minimal Greedy Algorithm

The Hitting Set Problem is defined as follows: given a universe set $\mathfrak{U}$ and nonempty sets $S_i \subseteq \mathfrak{U}$ for $1 \leq i \leq n$, find a set $\mathcal{H} \subseteq \mathfrak{U}$ such that $|\mathcal{H} \cap S_i| \geq 1$ for all $1 \leq i \leq n$.

We may ask for the minimum cardinality of $\mathcal{H}$, that is: what is the least number of elements needed to hit every $S_i$?

Further, we may use a greedy algorithm to ensure we find a hitting set. In this greedy algorithm, we set $\mathcal{H} = \emptyset$, and while there are still sets $S_i$ that have not been hit, we add to $\mathcal{H}$ an element that appears in the most not-yet-hit sets $S_i$.
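
For reference, here is a small Python sketch of that greedy rule (my own rendering; the function name and data representation are not from the lecture notes linked at the end of this question). Ties between equally good elements are broken arbitrarily.

    def greedy_hitting_set(universe, subsets):
        # Repeatedly pick an element that hits the largest number of not-yet-hit sets.
        hitting = set()
        remaining = [set(s) for s in subsets]   # the S_i that have not been hit yet
        while remaining:
            best = max(universe, key=lambda e: sum(1 for s in remaining if e in s))
            hitting.add(best)
            remaining = [s for s in remaining if best not in s]
        return hitting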

My question is: what is an example of a universe set $\mathfrak{U}$ and subsets $S_i$, $1 \leq i \leq n$ for some $n \in \mathbb{N}$, such that the greedy algorithm above does not find a minimum-cardinality hitting set $\mathcal{H} \subseteq \mathfrak{U}$?

For a longer (and probably clearer) description, and more info on the Hitting Set problem, see http://theory.stanford.edu/~virgi/cs267/lecture5.pdf, or Prove that Hitting Set is NP-Complete.

Greedy Heuristics with an Altered Subset Sum/Partition Problem

Say we have a constant-time function that accepts a set of integers. The function outputs True if the integers can be split into two subsets of equal sum, and False if no such partition exists.
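
To make the oracle's interface concrete, here is a brute-force Python sketch of such a function, mirroring the canPartition() mentioned below (my own illustration; it runs in exponential time, whereas the question assumes a constant-time black box).

    from itertools import combinations

    def can_partition(nums):
        # True iff nums can be split into two subsets with equal sums.
        total = sum(nums)
        if total % 2 != 0:
            return False
        target = total // 2
        # Any subset summing to total/2 yields an equal-sum partition.
        return any(sum(c) == target
                   for r in range(len(nums) + 1)
                   for c in combinations(nums, r))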

Suppose we can call this function at any time, with any set of integers. Can we build a greedy algorithm around this function that outputs one of the specific subsets summing to half the total of a given set of positive integers?

The heart of the question: If we know at any time the boolean result of canPartition() on any set of integers, can we greedily use that to output the specific values of a partition?

Design a greedy algorithm for merging sequences two at a time

I am solving practice problems for a test, and there is one I still cannot solve. There are $n$ ordered sequences, $S_1$ to $S_n$. The task is to merge them into a single sequence while doing the minimum possible work. The sequences are merged two at a time: merging two sequences $A$ and $B$ produces a sequence of length $|A| + |B|$, and the work done is proportional to the length of the resulting sequence.
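
To illustrate the cost model (my own sketch, with a made-up function name): merging two sorted sequences $A$ and $B$ costs $|A| + |B|$, and the total work is the sum of the costs of all the pairwise merges performed.

    import heapq

    def merge_two(a, b):
        # Merging two sorted sequences costs len(a) + len(b) units of work.
        return list(heapq.merge(a, b))

    ab = merge_two([1, 4], [2, 3])   # cost 4
    abc = merge_two(ab, [5])         # cost 5, so total work so far is 9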

1 – Design an algorithm to perform this task, choosing which pair of sequences to merge at each step.

2 – What is the total cost of your algorithm, in terms of the lengths of the original sequences?

3 – Show that your algorithm produces an optimal (minimum-work) sequence of merges.

I would really appreciate your help. Thanks in advance.

Prove that the greedy algorithm to remove k digits from an n-digit positive integer is optimal

Given a positive n-digit integer, such as 1214532 (n=7), remove k digits (for example k=4) such that the resulting integer is the smallest one.

A greedy algorithm for this would repeatedly remove the single digit whose removal makes the resulting integer smallest. For the above example:

    Step 1: Remove 2 => 114532
    Step 2: Remove 5 => 11432
    Step 3: Remove 4 => 1132
    Step 4: Remove 3 => 112
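
Here is a short Python sketch of that greedy rule (my own illustration of the step shown above, not a claim about its optimality): at each step it deletes whichever single digit leaves the smallest number.

    def greedy_remove_digits(number, k):
        # Repeatedly delete the one digit whose removal leaves the smallest number.
        s = str(number)
        for _ in range(k):
            s = min(s[:i] + s[i + 1:] for i in range(len(s)))
        return int(s)

    print(greedy_remove_digits(1214532, 4))   # 112, matching the steps above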

Can you prove that this algorithm is optimal (i.e. the final integer is the smallest possible)? Or if it is not, show a counterexample?

Thanks!

Prove Canonical Coin Greedy Algorithm

I am trying to prove the following:

Show that for the coin system S = {1,2,5,10,20,50,100,200}, the following greedy algorithm gives an optimal solution for any given amount of money: at each step, select the largest coin that does not exceed the remaining amount, until the full amount has been reached.
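
For concreteness, here is a small Python sketch of that greedy algorithm (my own rendering of the rule quoted above).

    COINS = [200, 100, 50, 20, 10, 5, 2, 1]   # the system S, largest first

    def greedy_change(amount):
        # At each step take the largest coin not exceeding the remaining amount.
        used = []
        for coin in COINS:
            while amount >= coin:
                used.append(coin)
                amount -= coin
        return used

    print(greedy_change(58))   # [50, 5, 2, 1]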

I do not know where to begin with this proof. I am thinking that mathematical induction could be used, but I’m not sure how I would perform that proof. This problem is very easy to prove for particular amounts of money, but I don’t have a clue how to prove it for any amount of money.

Proof of correctness for greedy algorithm

I have the greedy code below for which I am trying to prove correctness. There can be multiple optimal answers for this problem.

$ D$ is the dictionary.

    import queue as Q   # assuming Q is Python's standard queue module

    # D (the dictionary), R, M, and the scoring function fn are defined elsewhere.
    pq = Q.PriorityQueue()
    X = {}
    for x in D:
        X[x] = 1
        pq.put([fn(X, D, R, x), x])

    for x in range(0, M):
        tp = pq.get()                  # entry with the smallest fn value
        k = tp[1]
        X[k] += 1                      # allocate one more unit to key k
        pq.put([fn(X, D, R, k), k])

    for x in X:
        X[x] -= 1

Because it is hard to argue about the final result directly, i.e. $X = \{k_1: v_1, k_2: v_2, k_3: v_3\}$, my idea is to "open up" $X$. The reason for opening up $X$ is to eliminate the multiple solutions: say at the last iteration two keys $k_1, k_2$ have the same value; then there are two possible solutions, since either $k_1$ or $k_2$ could get the allocation. But since the goal is only to minimize the total value, I consider $X$ as its values alone, without the keys, so that there is no longer more than one solution.

Now $X = (x_1, x_2, \dots, x_M)$, where $M$ is the total number of iterations and $x_i$ is the value returned by the function $fn$ at iteration $i$. At each iteration the algorithm picks one value, and because of the priority queue it is guaranteed that $x_i < x_j$ whenever $i < j$. Now suppose there is an optimal solution $O = (o_1, o_2, \dots, o_M)$, and let $k$ be the first index at which $O$ and $X$ differ. By the greedy choice it is guaranteed that $x_k < o_k$, so by replacing $o_k$ with $x_k$ we obtain an even smaller solution, which contradicts the assumption that $O$ is optimal. Hence we can claim that $O = X$.

For feasibility: because at each iteration we choose exactly one value, it is guaranteed that after $M$ iterations we will have a set of $M$ elements.

I just want to confirm whether this proof is right or whether it needs some correction.