## Efficient algorithm for this combinatorial problem [closed]

$$\newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits}$$

I am working on a combinatorial optimization problem and need to solve the following optimization problem. It arose naturally in a method I chose for an assignment I was working on.

Given a fixed set $$\Theta$$ of $$N$$ elements, each in $$(0,1)$$ ($$N$$ is about 25), I need to find a permutation of the elements of $$\Theta$$ such that $$\vec K = \argmin_{\vec k = \mathrm{perm}(\Theta)} \sum_{i=1}^N t_i D(\mu_i\|k_i),$$ where $$\vec t, \vec \mu$$ are given vectors of length $$N$$ and $$D(p\|q)$$ is the KL divergence between the Bernoulli distributions with parameters $$p$$ and $$q$$ respectively. Further, the $$N$$ elements of $$\vec t$$ sum to $$1$$, and all elements of $$\vec \mu$$ lie in $$[0,1]$$.

Going through all $$N!$$ permutations is out of the question. A greedy algorithm that does not give the exact $$\vec K$$ would also be acceptable to me if there is no better method. Please let me know how to proceed!
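Since the objective is a sum of independent per-position terms, this is exactly a linear assignment problem on the $$N \times N$$ cost matrix $$C_{ij} = t_i D(\mu_i\|\theta_j)$$, solvable in $$O(N^3)$$ with the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`), so $$N = 25$$ is well within reach. A pure-Python sketch of the objective, using brute force that is only feasible for tiny $$N$$:

```python
import itertools
import math

def bernoulli_kl(p, q):
    """D(p||q) for Bernoulli distributions, with the 0*log(0) = 0 convention."""
    left = 0.0 if p == 0 else p * math.log(p / q)
    right = 0.0 if p == 1 else (1 - p) * math.log((1 - p) / (1 - q))
    return left + right

def best_permutation(theta, t, mu):
    """Exhaustive search over all permutations -- only feasible for small N.
    For N ~ 25, build C[i][j] = t[i] * bernoulli_kl(mu[i], theta[j]) and run
    an assignment solver instead."""
    return min(
        itertools.permutations(theta),
        key=lambda perm: sum(ti * bernoulli_kl(mi, ki)
                             for ti, mi, ki in zip(t, mu, perm)),
    )
```

The reduction works because each position $$i$$ contributes a cost that depends only on which element of $$\Theta$$ it receives, never on the rest of the permutation.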

## Comments on “Which combinatorial problem is similar to this problem?”

Regarding the post “Which combinatorial problem is similar to this problem?”: it is hard to believe that, in the absence of information about the cost function, there is no algorithm better than exhaustive search for this apparently easy-to-state problem. Searching the Internet, I found something called the Weighted Constraint Satisfaction Problem, and I wonder whether it can be of any use here.

Any advice would be much appreciated.

## Combinatorial Problem similar in nature to a special version of max weighted matching problem

I have a problem and want to know whether any combinatorial optimization problem is similar in nature to it, or how to solve this special version of the maximum weight matching problem.

I have a general graph $$G(\mathcal{V},\mathcal{E},\mathcal{W})$$. I want to find a maximum weight matching of $$G$$ that must cover a certain subset of vertices and have a specific size. For example, given a graph with eight vertices, I want to find a maximum weight matching that covers the subset of vertices $$\mathcal{V}'=\{1,2,3\}$$ and has size $$\lceil |\mathcal{V}'|/2 \rceil$$. So one more vertex must be chosen so that the weight of the matching is maximized. How can the optimal solution be found in polynomial time, if that is possible?
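For instances as small as the eight-vertex example, a brute-force baseline is easy to state: enumerate all edge subsets of the required size and keep the heaviest one that is a matching covering $$\mathcal{V}'$$. This is exponential in the number of edges (ordinary maximum weight matching is polynomial via Edmonds' blossom algorithm, but the size and coverage constraints need extra care). A sketch, with edges given as `(u, v, weight)` triples:

```python
import itertools

def max_weight_matching_covering(edges, required, size):
    """Brute force: try every size-`size` edge subset, keep those that are
    matchings (no shared endpoint) covering all of `required`, return the
    heaviest. Only feasible for small graphs."""
    best, best_w = None, float("-inf")
    for subset in itertools.combinations(edges, size):
        endpoints = [x for (u, v, _) in subset for x in (u, v)]
        if len(set(endpoints)) < 2 * size:   # a repeated endpoint: not a matching
            continue
        if not required <= set(endpoints):   # must cover V'
            continue
        w = sum(w for (_, _, w) in subset)
        if w > best_w:
            best, best_w = subset, w
    return best, best_w
```
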

## Finding the combinatorial solutions of series and parallel nodes

I have n nodes, and I want to find the number of distinct (non-duplicate) ways in which these nodes can be combined in series and parallel, and also enumerate all the solutions. For example, for n=3 there are 19 possible combinations.

```
 0 (0, 1, 2)      1 (0, 2, 1)      2 (1, 2, 0)      3 (1, 0, 2)
 4 (2, 0, 1)      5 (2, 1, 0)      6 [0, 1, 2]      7 [0, (1, 2)]
 8 [0, (2, 1)]    9 (0, [1, 2])   10 ([1, 2], 0)   11 [1, (0, 2)]
12 [1, (2, 0)]   13 (1, [0, 2])   14 ([0, 2], 1)   15 [2, (0, 1)]
16 [2, (1, 0)]   17 (2, [0, 1])   18 ([0, 1], 2)
```

In the notation above, a series combination is denoted by (..) and a parallel combination by [..]. Duplicates are removed; for example, [0, 1, 2] is the same as [1, 2, 0], since everything happens in parallel and the order does not matter.

Can you give me an algorithm for this, or, if such an algorithm already exists, point me to it?

(I tried googling for a solution but did not find any relevant answer; maybe I was entering the wrong keywords.)

Note: for a series-only solution the answer is easy: it is n!, and enumerating the solutions is also easy. But when parallelism (especially with duplicates removed) is added, the problem gets much harder.
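One recursive scheme that reproduces the count of 19 for n = 3: an expression over a node set is either a leaf, a series node (children ordered, none itself a series node), or a parallel node (children come from a set partition, so unordered, none itself a parallel node); banning same-type nesting is what removes the duplicates. A sketch (the ('S', ...)/('P', ...) tuple representation is my own):

```python
import itertools

def ordered_splits(s):
    """All ways to split set `s` into an ordered sequence of non-empty blocks."""
    if not s:
        yield ()
        return
    items = sorted(s)
    for k in range(1, len(items) + 1):
        for block in itertools.combinations(items, k):
            for rest in ordered_splits(s - set(block)):
                yield (frozenset(block),) + rest

def unordered_splits(s):
    """All partitions of set `s` into unordered non-empty blocks
    (the first block always contains the smallest element)."""
    items = sorted(s)
    if not items:
        yield ()
        return
    head, rest = items[0], items[1:]
    for k in range(len(rest) + 1):
        for mates in itertools.combinations(rest, k):
            block = frozenset((head,) + mates)
            for others in unordered_splits(set(rest) - set(mates)):
                yield (block,) + others

def exprs(s, banned=None):
    """Enumerate series-parallel expressions over node set `s`.
    `banned` forbids a series child of a series node (and parallel of
    parallel), which is what removes the duplicates."""
    s = set(s)
    if len(s) == 1:
        yield ("leaf", next(iter(s)))
        return
    if banned != "S":
        for blocks in ordered_splits(s):
            if len(blocks) < 2:
                continue
            for kids in itertools.product(*(list(exprs(b, "S")) for b in blocks)):
                yield ("S", kids)
    if banned != "P":
        for blocks in unordered_splits(s):
            if len(blocks) < 2:
                continue
            for kids in itertools.product(*(list(exprs(b, "P")) for b in blocks)):
                yield ("P", kids)
```
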

## How to consider combinatorial optimization problem with multiple objectives?

I am considering a combinatorial optimization problem with two objectives. The two objectives trade off against each other: minimizing the first alone gives the worst solution for the second, and vice versa. How should I start tackling such problems? If anyone can recommend a well-known combinatorial problem with the same nature, I would appreciate it.
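Two standard starting points are scalarization (optimize a weighted sum $$\lambda f_1 + (1-\lambda) f_2$$, or minimize one objective subject to a bound on the other, the ε-constraint method) and computing the Pareto front of non-dominated solutions; classic examples with this structure are the bi-objective knapsack and bi-objective shortest path problems. A minimal Pareto-filter sketch for two minimization objectives:

```python
def pareto_front(points):
    """Keep the non-dominated (objective1, objective2) pairs: a point is
    dropped if some different point is <= in both coordinates."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]
```
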

## help with proving combinatorial identity [migrated]

It seems hard to prove the following identity. I think the right direction is a combinatorial argument. Thanks.

## What do you call a greedy algorithm that solves a combinatorial problem by optimizing the best k>1 choices altogether?

Suppose you have a problem whose goal is to find the permutation of some set $$S$$, given as input, that minimizes an objective function $$f$$ (for example, the Travelling Salesman Problem).

A trivial algorithm $$E(S)$$ that finds the exact solution enumerates all the permutations and outputs the one that minimizes $$f$$. Its time complexity is $$O(n!)$$, where $$n$$ is the size of $$S$$.

A trivial greedy algorithm $$G(S)$$ that finds an approximation of the solution is:

```
out[0] = select a good starting item from S according to some heuristic h_1
S = S - {out[0]}
for i = 1 to n-1 do:
    out[i] = select the next best element using some heuristic h_2
    S = S - {out[i]}
return out
```

where $$h_1$$ and $$h_2$$ are two heuristics. Its time complexity is $$O(n^2)$$, assuming that $$h_2$$ runs in constant time.

Sometimes I mix the two techniques (enumeration and greedy) by selecting, at each step, the best $$k$$ items (instead of the best one) and enumerating all their permutations to find the one that locally minimizes $$f$$. Then I choose the best $$k$$ of the remaining $$n-k$$ items, and so on.

Here is the pseudocode (assuming $$n$$ is a multiple of $$k$$):

```
for i = 0 to n/k - 1 do:
    X = select the best k items of S according to some heuristic h
    S = S - X
    out[i*k ... (i+1)*k - 1] = E(X)
return out
```

where $$E(X)$$ is the exact algorithm above applied to a subset $$X \subset S$$ rather than to the whole $$S$$. This last algorithm finds an approximate solution and has a time complexity of $$O(\frac{n}{k}(n \log k + k!))$$, assuming that $$h$$ can be computed in constant time. This is comparable to $$O(n^2)$$ when $$k$$ is small, although in my experience the solution quality can be far better than with the plain greedy approach.
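As a concrete (hypothetical) instance of the technique: take the objective to be path length over a distance matrix, let $$h$$ be distance to the current path endpoint, and let $$E$$ enumerate the $$m!$$ orderings of each selected block:

```python
import itertools

def path_cost(order, dist):
    """Total length of the path visiting `order` in sequence."""
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def block_greedy_tour(dist, k, start=0):
    """At each step, h picks the k unvisited cities nearest the current
    endpoint, and E(X) enumerates all orderings of that block, keeping
    the one that adds the least path length."""
    n = len(dist)
    out = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        m = min(k, len(unvisited))                    # last block may be smaller
        X = sorted(unvisited, key=lambda c: dist[out[-1]][c])[:m]
        best = min(itertools.permutations(X),
                   key=lambda p: path_cost((out[-1],) + p, dist))
        out.extend(best)
        unvisited -= set(X)
    return out
```
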

I don’t think I invented this kind of optimization technique: do you know its name? Can you please include some theoretical references?

I know for sure it is not beam search, because beam search never mixes the best $$k$$ solutions found at each step.

Thank you.

## What is a good method for modelling combinatorial tree (sport competition results)?

Probably a newbie question here; please point me to the relevant theories/tutorials if needed:

- Let's say I want to evaluate the probabilities of the final rankings of a sport competition.
- The competition involves 8 teams, but I can simplify the problem to 4 contestants (say A, B, C, D).
- The competition is split into n journeys, during which every team faces exactly one other team. So with 4 contestants, I have 2 matches per journey.
- At the end of a match, 5 different point attributions are possible (depending on the score).

After one journey, there are 30 different possible combinations of the teams' points. So the model looks like a tree with one journey per level.

Even if I simplify the situation to 2 remaining journeys, I can't think of a more elegant way to implement this (in Python, for example) than "manually" creating the tree with the 30 combinations at each level and counting the leaves.

I’m not familiar with this kind of problem so I’m not sure about the path forward.
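One alternative to building the tree explicitly is to enumerate its leaves directly with `itertools.product` over the match outcomes and tally the final rankings in a `Counter`. The sketch below assumes a fixed schedule of (home, away) pairings and invents 5 hypothetical point splits per match (the question doesn't specify them):

```python
import itertools
from collections import Counter

# Hypothetical point attributions: 5 possible (points_home, points_away)
# splits per match -- the actual values depend on the sport's rules.
OUTCOMES = [(4, 0), (4, 1), (2, 2), (1, 4), (0, 4)]

def ranking_distribution(schedule, teams):
    """Enumerate every leaf of the outcome tree instead of building it.
    `schedule` is a list of journeys, each a list of (home, away) pairs."""
    matches = [m for journey in schedule for m in journey]
    counts = Counter()
    for leaf in itertools.product(OUTCOMES, repeat=len(matches)):
        points = {t: 0 for t in teams}
        for (home, away), (hp, ap) in zip(matches, leaf):
            points[home] += hp
            points[away] += ap
        # ties broken by the input order of `teams` (sorted is stable)
        ranking = tuple(sorted(teams, key=lambda t: -points[t]))
        counts[ranking] += 1
    total = sum(counts.values())
    return {r: c / total for r, c in counts.items()}
```

This assumes all outcomes are equally likely; per-outcome probabilities could be multiplied in instead of counting leaves uniformly.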

## Which combinatorial problem is similar to this problem?

I have $$n$$ clients, and for each client I have several options to choose from, for example $$C= \{C_1,C_2 \}$$. Each combination of the $$n$$ chosen options has a cost. I want to find the combination that minimizes the cost without evaluating the cost of every combination one by one, and I have no constraints. Which combinatorial problem is similar to this one?

For example, if I have $$n=3$$ and $$C =\{1,2\}$$, I have the set of triples $$(x_1,x_2,x_3)$$, $$\mathcal{M}= \{(1,2,1),(1,2,2),(1,1,1),(1,1,2),(2,1,1),(2,1,2),(2,2,1),(2,2,2)\}$$, and the cost function $$cost=x_1+x_2+2x_3$$. I want to find the triple that minimizes the cost without evaluating them one by one. The cost function can be any function, not necessarily linear.
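If the cost function is a black box, nothing beats enumerating all $$|C|^n$$ combinations. But for the linear example the cost is separable, a sum of one term per coordinate, so each $$x_i$$ can be minimized independently with no enumeration at all. A sketch contrasting the two:

```python
import itertools

def brute_force_min(options, n, cost):
    """Baseline: evaluate `cost` on all |options|**n combinations."""
    return min(itertools.product(options, repeat=n), key=cost)

# Example from the question: cost = x1 + x2 + 2*x3 over C = {1, 2}, n = 3.
cost = lambda x: x[0] + x[1] + 2 * x[2]
best = brute_force_min([1, 2], 3, cost)

# The cost is separable: one term per coordinate, so minimize each
# coordinate's own term independently.
terms = [lambda v: v, lambda v: v, lambda v: 2 * v]
separable_best = tuple(min([1, 2], key=t) for t in terms)
```
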

## (Leetcode) Combinatorial Sum – How to generate solution set from number of solution sets?

The following question is taken from Leetcode, entitled "Combination Sum":

Given a set of candidate numbers (candidates) (without duplicates) and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.

The same repeated number may be chosen from candidates unlimited number of times.

Note:

1. All numbers (including target) will be positive integers.
2. The solution set must not contain duplicate combinations.

Example 1:

```
Input: candidates = [2,3,6,7], target = 7
A solution set is: [[7], [2,2,3]]
```

Example 2:

```
Input: candidates = [2,3,5], target = 8
A solution set is: [[2,2,2,2], [2,3,3], [3,5]]
```

To solve this problem, I applied dynamic programming, specifically a bottom-up 2D tabulation approach. The method is quite similar to the 0/1 knapsack problem: for each element of candidates, we decide whether or not to use it.

The following is my code:

```python
class Solution:
    def combinationSum(self, candidates: List[int], target: int) -> List[List[int]]:
        if not len(candidates):
            return 0
        dp = [[1] + [0] * target for _ in range(len(candidates) + 1)]
        for row in range(1, len(candidates) + 1):
            for col in range(1, target + 1):
                dp[row][col] += dp[row - 1][col]
                if col - candidates[row - 1] >= 0:
                    dp[row][col] += dp[row][col - candidates[row - 1]]
        print(dp[-1][-1])
```

However, my code above does not produce the solution set. Instead, it computes the number of combinations in the solution set.

I have attempted to generate the solution set from this table, but to no avail. Can anyone help me?
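One way to recover the combinations is to keep the same bottom-up tabulation but store, in each cell, the combinations themselves rather than their count (here as a 1D rolling version of the 2D table; memory now grows with the number of solutions). A sketch:

```python
def combination_sum(candidates, target):
    """Same tabulation as the counting DP, but dp[col] holds the list of
    combinations summing to col instead of how many there are."""
    # dp[0] starts with the single empty combination.
    dp = [[[]]] + [[] for _ in range(target)]
    for c in candidates:
        for col in range(c, target + 1):
            # Reuse of c is allowed: dp[col - c] may already contain
            # combinations built with c during this same pass.
            dp[col] += [combo + [c] for combo in dp[col - c]]
    return dp[target]
```

Processing candidates one at a time (like the rows of the 2D table) keeps each combination in non-decreasing candidate order, which is what prevents duplicate combinations.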