Speeding up the Rummikub algorithm – explanation required

Regarding this question: Rummikub algorithm.

I was reading the first part of the solution in the posted answer (specifically, the case in which there are no jokers, all tiles are distinct, and only four colours are involved). I then reached the part in which the answer says that the algorithm runs in $O(ABCD(A+B+C+D))$ time, and it is easy to see why this holds.

However, the author then goes on to say that we can speed up the algorithm to run in $O(ABCD)$ time by changing "the recurrence to ensure this occurs only once while maintaining correctness, which leads to $O(1)$ time for every ‘cell’ in the DP-table".

My problem is that I do not see how this can be done. I have played around with the recurrence, but I do not see how it can be modified, or what else we should keep track of, to achieve the speed-up.
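I cannot reproduce the linked answer's recurrence here, but the generic trick being described is this: when each cell's $O(A+B+C+D)$ inner loop recomputes an aggregate that a neighbouring cell has already computed except for one new term, the aggregation can be folded into the recurrence itself, so each cell does only $O(1)$ extra work. A toy sketch of that pattern (not the Rummikub recurrence itself):

    # Naive: each cell re-aggregates over all earlier terms, O(n) per cell:
    #     g[i] = min(h[j] for j in range(i + 1))
    # Fast: g[i]'s inner loop is g[i-1]'s loop plus the single new term
    # h[i], so the whole loop collapses to O(1) per cell:
    #     g[i] = min(g[i - 1], h[i])
    def prefix_min(h):
        g = []
        for i, x in enumerate(h):
            g.append(x if i == 0 else min(g[i - 1], x))  # reuse previous cell
        return g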

Efficient algorithm for this combinatorial problem [closed]

$ \newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits}$

I am working on a combinatorial optimization problem and need to figure out a way to solve the following equation. It popped up naturally in a method I chose to use in an assignment I was working on.

Given a fixed set $\Theta$ of $N$ elements, each in $(0,1)$ ($N$ is about 25), I need to find a permutation of the elements of $\Theta$ such that $$\vec K = \argmin_{\vec k = \mathrm{Permutation}(\Theta)} \sum_{i=1}^N t_i\, D(\mu_i \| k_i),$$ where $\vec t, \vec \mu$ are given vectors of length $N$ and $D(p \| q)$ is the KL divergence between Bernoulli distributions with parameters $p$ and $q$ respectively. Further, the $N$ elements of $\vec t$ sum to 1 and all elements of $\vec \mu$ lie in $[0,1]$.

It is just impossible to go through all $N!$ permutations. A greedy-type algorithm that does not give the exact $\vec K$ would also be acceptable if there is no other apparent method. Please let me know how to proceed!
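Because each term of the objective depends only on the pair $(i, k_i)$, this is a linear assignment problem: build the $N \times N$ cost matrix $c_{ij} = t_i\, D(\mu_i \| \theta_j)$ and solve it exactly with the Hungarian method in $O(N^3)$, which is trivial for $N \approx 25$. A minimal sketch using SciPy (the clipping constant is an arbitrary guard for $\mu_i \in \{0,1\}$):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def bernoulli_kl(p, q):
        # D(p||q) for Bernoulli parameters; clipping makes 0*log(0) -> ~0.
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def best_permutation(theta, mu, t):
        # theta, mu, t: 1-D numpy arrays of length N; theta entries in (0,1).
        cost = t[:, None] * bernoulli_kl(mu[:, None], theta[None, :])
        _, col = linear_sum_assignment(cost)   # exact assignment, O(N^3)
        return theta[col]                      # k_i = theta[col[i]]

This returns the exact minimizer, so no greedy approximation is needed.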

Why do agents always employ the same algorithm when playing a congestion game?

I’ve been conducting research into congestion games and have come across many papers that study the outcome of a game played by agents all employing a particular algorithm, e.g. how quickly a Nash equilibrium is approached when agents use a modified version of fictitious play.

Is there any particular reason why there hasn’t been research into agents using different algorithms while playing a single congestion game? For example, agents who use fictitious play playing alongside agents who use a Q-learning algorithm.

Problem with the algorithm

I am trying to trace the following algorithm, shown in the image, and to reproduce the table shown alongside it:

Backtracking Algorithm

1st Iteration:
L2: CS = A, SL = [A], NSL = [A]
L3: while NSL != []: true
L4, L6: no children: false
L17: NSL = [B C D A]
L18: CS := B
L19: SL = [B A]
L20, L21

2nd Iteration:
L3: while NSL != []: true
L4, L6: no children: false
L17: NSL = [E F B C D A]
L18: CS := E
L19: SL := [E B A]
L20, L21

3rd Iteration:
L3: while NSL != []: true
L4, L6: no children: false
L17: NSL = [H I E F B C D A]
L18: CS := H
L19: SL := [H E B A]
L20, L21

Up to this point it is fine, but when the current node has no more children the algorithm has to backtrack, and it is while executing that loop that I lose track:

L3: while NSL != []: true
L4, L6: no children: true
L7: begin
L8: while SL is not empty (true) and CS = H
L9: DE = [H]
L10: SL = [E B A]
L11: NSL = [I E F B C D A]
L12: CS = I
L14: SL = [I E B A]

Now it should keep traversing the outer while loop, but this is where I am having problems. Could somebody please correct my trace, or point me to a better backtracking algorithm that maintains the contents of such a table?
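For reference, here is a runnable sketch of the backtrack procedure being traced (the CS/SL/NSL/DE bookkeeping matches the trace above; the graph is my reconstruction from the trace, with A's children B, C, D; B's children E, F; and E's children H, I):

    # Graph reconstructed from the trace above (an assumption; nodes with
    # no entry have no children).
    GRAPH = {
        'A': ['B', 'C', 'D'],
        'B': ['E', 'F'],
        'E': ['H', 'I'],
    }

    def backtrack(start, goal):
        SL = [start]     # state list: the path currently being tried
        NSL = [start]    # new state list: nodes awaiting evaluation
        DE = []          # dead ends: nodes whose descendants all failed
        CS = start       # current state
        while NSL:
            if CS == goal:
                return SL
            children = [c for c in GRAPH.get(CS, [])
                        if c not in DE and c not in SL and c not in NSL]
            if not children:                  # dead end: backtrack
                while SL and CS == SL[0]:
                    DE.append(CS)             # record CS as a dead end
                    SL.pop(0)
                    NSL.pop(0)
                    if not NSL:
                        return None           # search space exhausted
                    CS = NSL[0]
                SL.insert(0, CS)              # resume from the new CS
            else:
                NSL = children + NSL          # push children onto NSL
                CS = NSL[0]
                SL.insert(0, CS)
        return None

    print(backtrack('A', 'F'))  # -> ['F', 'B', 'A']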

Zulfi.

What is an algorithm for minimizing the standard deviation of m sums formed from n summands? [with attempt]

I have m bins (sums) and n summands, and each summand goes into a bin. I want to minimize the standard deviation of the bin sums, and I have a greedy algorithm that appears to accomplish this. I am not sure of its name, but I would like to read more about it. All m bins must have a sum greater than zero at the end of the algorithm.

It seems simple:

  1. Sort the summands from highest to lowest.

  2. For each summand, find the first available bin with the minimum sum and place the summand in it.

I haven’t proved anything about it, but I’ve come up with a few test data sets and it appears to work.
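Unless I am mistaken, this is the classic LPT ("Longest Processing Time first") greedy heuristic for multiway number partitioning / minimum-makespan scheduling. Note that the mean of the bin sums is fixed at (total of all summands)/m, so minimizing their standard deviation is exactly this kind of balancing objective. A minimal sketch using a heap (function and variable names are mine):

    import heapq

    def greedy_partition(summands, m):
        # LPT-style greedy: largest summand first, always into the bin
        # with the smallest current sum.
        bins = [(0, i, []) for i in range(m)]    # (sum, tie-break, contents)
        heapq.heapify(bins)
        for x in sorted(summands, reverse=True):
            total, i, contents = heapq.heappop(bins)   # current minimum bin
            heapq.heappush(bins, (total + x, i, contents + [x]))
        return sorted(bins)

    # greedy_partition([7, 5, 4, 3, 1], 2) -> bins with sums 10 and 10

Assuming positive summands and n >= m, the first m placements land in m distinct empty bins, so the requirement that every bin end up with a positive sum is met automatically.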

Semi-streaming algorithm for connectivity of two vertices

Let $G=(V,E)$ be an undirected graph. Given a pair of vertices $s,t \in V$, how can we construct a semi-streaming algorithm which determines whether $s$ and $t$ are connected? Is there any way to construct such an algorithm which scans the input stream only once?

Note that a semi-streaming algorithm is presented the edges of $G$ in arbitrary order as a stream, and it can only access this input sequentially in the order it is given; it may process the input stream several times. The algorithm has a working memory of $O(n \cdot \mathrm{polylog}\, n)$.
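For plain $s$-$t$ connectivity, one pass does suffice: keep a union-find structure (equivalently, a spanning forest) over the $n$ vertices, which uses $O(n \log n)$ bits, and merge components as the edges stream by. A minimal sketch (names are mine):

    def st_connected(edge_stream, n, s, t):
        # One-pass semi-streaming s-t connectivity: union-find with path
        # halving over vertices 0..n-1; memory is O(n) words.
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        for u, v in edge_stream:                # single sequential pass
            parent[find(u)] = find(v)
            if find(s) == find(t):              # early exit once connected
                return True
        return find(s) == find(t)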

What is the name for this load balancing algorithm?

While working on a practice problem I "organically" came up with a method that does well at solving my load-balancing problem. I do not know the official name for it, but I would like to read more about it; I think it may be a greedy algorithm similar to the least-connections algorithm.

Given m resources and n consumers where m << n.

The objective is to balance the n consumers across the m resources such that the resources are equally utilized.

At each balancing step I do the following:

  1. sort resources by utilization ascending.

  2. sort consumers by consumption descending.

    While there are unprocessed consumers, place the greediest consumer on the least-utilized resource, then repeat with the next greediest consumer and the next least-utilized resource. When the most-utilized resource has been given a consumer, wrap around and continue placing in the same order, until there are no more consumers to place. (A sketch follows below.)
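A minimal sketch of one balancing step as described above, modelling utilization as a number per resource and consumption as a number per consumer (the function and variable names are my own):

    def balance_step(utilization, consumers):
        # Deal consumers (sorted by consumption, descending) round-robin
        # onto resources (sorted by utilization, ascending).
        order = sorted(range(len(utilization)), key=lambda i: utilization[i])
        assignment = [[] for _ in utilization]
        for j, c in enumerate(sorted(consumers, reverse=True)):
            i = order[j % len(order)]        # wrap around the resource order
            assignment[i].append(c)
            utilization[i] += c
        return assignment

If the least-utilized resource were re-selected before every single placement instead of dealing round-robin, this would be the classic greedy (LPT-style) heuristic for makespan scheduling; as written it is closer to a "sorted round-robin" or least-loaded dealing, which may help in searching for a name.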

Is there a name for this?

Is there an algorithm for reducing CNFs further?

I have a Boolean formula in conjunctive normal form (CNF): $(a\vee b \vee c) \wedge (a \vee b \vee \neg c) \wedge (x \vee y)$

I know that this can be simplified to: $(a\vee b)\wedge (x \vee y)$.

a) Is there an algorithm to decide if a CNF is already in the reduced form or not?

b) Is there an algorithm that can do this reduction in a manner more efficient than comparing each pair of clauses to see if any pairing can be reduced? I wish to automate this reduction for arbitrary CNFs and am looking for any algorithms that I can borrow/implement.
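The rule used in the example is resolution between two clauses that are identical except for one complementary literal (the resolvent $(a \vee b)$ then subsumes both parents). Below is a sketch that applies just this rule to a fixed point, with literals encoded as signed integers (an encoding I chose: $a=1$, $b=2$, etc., negation is a sign flip). Note that deciding whether a CNF is minimal in a stronger, semantic sense amounts to an equivalence test, which is coNP-hard in general, so pairwise rules like this one are only a syntactic simplification:

    from itertools import combinations

    def reduce_cnf(cnf):
        # Repeatedly replace two clauses differing in exactly one
        # complementary literal by their resolvent, e.g.
        # (a|b|c) & (a|b|~c)  ->  (a|b).
        clauses = set(map(frozenset, cnf))
        changed = True
        while changed:
            changed = False
            for c1, c2 in combinations(clauses, 2):
                d1, d2 = c1 - c2, c2 - c1
                if len(d1) == 1 and len(d2) == 1 \
                        and next(iter(d1)) == -next(iter(d2)):
                    clauses -= {c1, c2}
                    clauses.add(c1 & c2)        # the resolvent
                    changed = True
                    break
        return [sorted(c, key=abs) for c in clauses]

    # reduce_cnf([[1, 2, 3], [1, 2, -3], [4, 5]]) -> [[1, 2], [4, 5]]

To go faster than comparing all pairs, one can index clauses by, say, clause length or variable set so that only plausible partners are compared.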

What is the intuition behind Strassen’s Algorithm?

I came across Strassen’s algorithm for matrix multiplication, which has time complexity $O(n^{2.81})$, significantly better than the naive $O(n^3)$. Of course, there have been several other improvements in matrix multiplication since Strassen, but my question is specific to this algorithm.

If you see the algorithm, you’ll notice that 7 matrices $ M_1$ to $ M_7$ have been defined as intermediate computation steps, and the final matrix product can be expressed in terms of these. I understand how to verify this claim, and arrive at the expression for the desired time complexity, but I’m unable to grasp the intuition behind this algorithm, i.e. why are the matrices $ M_1$ through $ M_7$ defined the way they are?
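For reference, the standard definitions in $2\times 2$ block form are:

$$\begin{aligned} M_1 &= (A_{11}+A_{22})(B_{11}+B_{22}), & M_2 &= (A_{21}+A_{22})B_{11},\\ M_3 &= A_{11}(B_{12}-B_{22}), & M_4 &= A_{22}(B_{21}-B_{11}),\\ M_5 &= (A_{11}+A_{12})B_{22}, & M_6 &= (A_{21}-A_{11})(B_{11}+B_{12}),\\ M_7 &= (A_{12}-A_{22})(B_{21}+B_{22}), \end{aligned}$$

$$\begin{aligned} C_{11} &= M_1+M_4-M_5+M_7, & C_{12} &= M_3+M_5,\\ C_{21} &= M_2+M_4, & C_{22} &= M_1-M_2+M_3+M_6. \end{aligned}$$

Each $M_i$ is a single multiplication of one linear combination of $A$-blocks by one linear combination of $B$-blocks, so these identities express all four blocks of $C$ using seven multiplications instead of the naive eight.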

Thank you!