Optimal strategy for tossing three dependent coins

Suppose that I have three correlated coins. The marginal probability of heads for coin $i$ is denoted by $p_i$.

The conditional probability of heads for coin $i$ given the outcomes of coins $j$ and $k$ is denoted by $p_{i\mid x_j,x_k}$, where $x_j,x_k\in\{H,T\}$. We can similarly define the conditional probability of heads for coin $i$ given $x_j$ alone.

Each coin can be tossed at most once, and you receive \$1 for a head and −\$1 for a tail. You don’t have to toss all the coins, and your objective is to maximize the total expected reward.

What would be the optimal sequence of tossing coins in this case?

If the coins were independent of each other, the order wouldn’t matter: the optimal strategy would simply be “flip coin $i$ if $p_i>\frac{1}{2}$”. For the case of two coins, it can be shown that it is always best to first flip the coin with the higher marginal $p_i$. However, this need not be optimal in the three-coin case. I’ve been thinking about this problem for quite a long time but can’t come up with a general solution or an intuition that might help.
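For the dependent case, the optimal adaptive policy can be computed by backward induction (DP) over the set of already-tossed coins and their observed outcomes, given the full joint distribution. A minimal sketch, assuming the joint distribution is available as a table; the function name and the example numbers below are made up:

```python
def best_value(joint, observed):
    """Maximum expected additional reward, by backward induction.

    `joint` maps outcome triples in {'H','T'}^3 to probabilities;
    `observed` maps already-tossed coin indices (0, 1, 2) to outcomes.
    """
    # Probability mass consistent with what we have seen so far.
    consistent = {o: p for o, p in joint.items()
                  if all(o[i] == v for i, v in observed.items())}
    total = sum(consistent.values())
    best = 0.0  # stopping is always allowed and adds nothing
    for i in range(3):
        if i in observed:
            continue
        # conditional probability of heads for coin i given observations
        p_head = sum(p for o, p in consistent.items() if o[i] == 'H') / total
        ev = 0.0
        for outcome, prob in (('H', p_head), ('T', 1.0 - p_head)):
            if prob == 0.0:
                continue
            reward = 1 if outcome == 'H' else -1
            ev += prob * (reward + best_value(joint, {**observed, i: outcome}))
        best = max(best, ev)
    return best

# Example: a hypothetical (made-up) joint distribution over three coins.
example = {
    ('H', 'H', 'H'): 0.20, ('H', 'H', 'T'): 0.10,
    ('H', 'T', 'H'): 0.05, ('H', 'T', 'T'): 0.15,
    ('T', 'H', 'H'): 0.10, ('T', 'H', 'T'): 0.10,
    ('T', 'T', 'H'): 0.25, ('T', 'T', 'T'): 0.05,
}
value = best_value(example, {})  # optimal expected total reward
```

This brute-forces all $3! \cdot 2^3$ adaptive histories, which is fine for three coins; the optimal order falls out of which coin attains the max at the empty state. It doesn't yield a closed-form rule, but it is a correct baseline to test conjectured strategies against.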

What strategy should I use to solve this interview problem? May I apply DP to it?

The problem description is below, and it feels like a DP problem, but I am not sure. Thank you for helping!

You have a certain dose of a drug, say 200 milliliters, and some patients need this drug. The doses for each patient may vary from person to person: for example, 2.5 milliliters for A, B, and C; 5 milliliters for D and E; 7 milliliters for F; and so on. The question, in short, is: how can you allocate the drug so that you have the least amount left over? Example input: total drug dose 10 (milliliters). A needs 3, B needs 5, C needs 2, D needs 4, E needs 2. Output: A, B, C or A, B, E (a perfect allocation with no drug left). Note: there may be decimals.
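This is essentially the 0/1 subset-sum problem: pick a subset of patients whose doses sum as close to the total as possible without exceeding it. A minimal DP sketch, assuming doses have at most one decimal place so everything can be scaled to integer units of 0.1 ml (the function name and the scaling choice are mine, not from the problem statement):

```python
def best_allocation(total, needs):
    """Pick patients whose doses sum as close to `total` as possible
    without exceeding it (classic 0/1 subset-sum DP)."""
    scale = 10  # assume doses have at most one decimal place (0.1 ml units)
    cap = round(total * scale)
    items = [(name, round(dose * scale)) for name, dose in needs.items()]

    # best[s] = some list of patient names whose doses sum to exactly s units
    best = [None] * (cap + 1)
    best[0] = []
    for name, units in items:
        # iterate downward so each patient is used at most once
        for s in range(cap, units - 1, -1):
            if best[s] is None and best[s - units] is not None:
                best[s] = best[s - units] + [name]

    reached = max(s for s in range(cap + 1) if best[s] is not None)
    return best[reached], (cap - reached) / scale  # patients, leftover ml

chosen, leftover = best_allocation(10, {'A': 3, 'B': 5, 'C': 2, 'D': 4, 'E': 2})
```

On the example input this returns `['A', 'B', 'C']` with `0.0` ml left over (A, B, E would be an equally good answer; the DP just keeps the first subset it finds for each sum). Subset sum is NP-hard in general, so this pseudo-polynomial DP is only practical when the scaled total is modest.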

You are welcome to give any hints or mature solutions to the question, or you can ask for more details about it.

Proof strategy to show that an algorithm cannot be implemented using just hereditarily terminating procedures

I am taking my question here from there. Consider the following scenario:

You are given a fixed programming language with no nonlocal control flow constructs. In particular, the language does not have

  • Exceptions, first-class continuations, etc.
  • Assertions, in the sense of “runtime tests that crash the program if they fail”.

Remark: An example of such a language could be Standard ML, minus the ability to raise exceptions. Inexhaustive pattern matching implicitly raises the Match exception, so it is also ruled out.

Moreover, you are forced to program using only hereditarily terminating values. Inductively, we define hereditarily terminating values as follows:

  • Data constructors (including numeric and string literals) are hereditarily terminating.
  • Applications of data constructors to hereditarily terminating arguments are hereditarily terminating.
  • A procedure f : foo -> bar is hereditarily terminating if, for every hereditarily terminating x : foo, evaluating the expression f x always terminates and the final result is a hereditarily terminating value of type bar.

Remarks:

  • Hereditarily terminating procedures need not be pure. In particular, they may read from or write to a mutable store.

  • A procedure is more than just the function it computes. In particular, functions do not have an intrinsic asymptotic time or space complexity, but procedures do.


Hereditarily terminating procedures formalize my intuitive idea of “program that is amenable to local reasoning”. Thus, I am interested in what useful programs one can write using only hereditarily terminating procedures. At the most basic level, programs are built out of algorithms, so I want to investigate what algorithms are expressible using only hereditarily terminating procedures.

Unfortunately, I have hit an expressiveness ceiling much earlier than I expected. No matter how hard I tried, I could not implement Tarjan’s algorithm for finding the strongly connected components of a directed graph.

Recall that Tarjan’s algorithm performs a depth-first search of the graph. In addition to the usual depth-first search stack, the algorithm uses an auxiliary stack to store the nodes whose strongly connected components have not been completely explored yet. Eventually, every node in the current strongly connected component will be explored, and we will have to pop them from the auxiliary stack. This is the step I am having trouble with: the loop that pops the nodes from the stack terminates when a given reference node has been found. But, as far as the type checker can tell, the reference node might not be in the stack at all! This results in an extra control-flow path in which the stack is empty after popping everything from it without finding the reference node. At this point, the only thing the algorithm can do is fail.
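For concreteness, here is roughly the loop in question, sketched in Python for illustration (the names are hypothetical). The `raise` at the bottom stands for the control-flow path that the algorithm’s invariant makes unreachable but that the type system cannot rule out; in the exception-free language above, that line could not be written at all:

```python
def pop_scc(aux_stack, root):
    """Pop nodes off the auxiliary stack until `root` is found.

    Tarjan's invariant guarantees that `root` is on the stack, but
    nothing in the types says so: the loop below has a control-flow
    path (the empty-stack case) that the invariant rules out.
    """
    component = []
    while aux_stack:             # as far as types go, the stack may run dry
        node = aux_stack.pop()
        component.append(node)
        if node == root:
            return component     # the path the invariant guarantees
    # Reachable as far as the types know, unreachable by the invariant.
    raise AssertionError("invariant violated: root not on stack")
```

In a total, exception-free setting the only honest return type for this procedure is an option (as in the partiality-monad remark below), which is exactly what the exercise forbids.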

This leads to the following…

Conjecture: Tarjan’s algorithm cannot be implemented in Standard ML using only hereditarily terminating procedures.

My questions are:

  1. What kind of proof techniques would be necessary to prove the above conjecture?

  2. What is the bare minimum type system in which Tarjan’s algorithm can be expressed as a hereditarily terminating program? That is, what is the bare minimum type system that can “understand” that the auxiliary stack is guaranteed to contain the reference node, and thus will not add a control flow path in which the auxiliary stack is empty before the reference node has been found?


Final remark: It is possible to rewrite this program inside a partiality monad. Then every procedure would be a Kleisli arrow. Instead of

val tarjan : graph -> scc list 

we would have something like

val tarjan : graph -> scc list option 

But, obviously, this defeats the point of the exercise, which is precisely to take the procedure out of the implicit partiality monad present in most programming languages. So this does not count as a solution.

Decide which player has winning strategy in maximum matching problem

Given the following game: two players, player 1 and player 2, play a game in which player 1 starts by naming a hero $h_1$; then player 2 responds with a villain $v_1$ who has appeared in the same movie as $h_1$. Then player 1 responds with another hero $h_2$ who has appeared in the same movie as $v_1$, and so forth. Each hero and villain can only be used once. The first player who gets stuck, i.e. has no hero/villain left to name, loses the game. Note that player 1 always starts.

The two players may only pick heroes and villains from given sets of heroes $ H$ and villains $ V$ ($ |H| = |V| \geq 1$ ). They also get a set of movies $ M$ with the corresponding heroes and villains appearing in that movie.

The question is: can you, based on $ H$ , $ V$ and $ M$ , decide which player has the winning strategy?


Example:

Given the following data: the heroes are Iron Man, Captain America, Thor and Spider-Man. The villains are Whiplash, Ultron, Thanos and Vulture. The movies are Avengers: Infinity War (stars Iron Man, Captain America, Thor, Thanos and Spider-Man) and Spider-Man: Homecoming (stars Iron Man, Vulture and Spider-Man). Can you decide which player has the winning strategy?


My approach is to use maximum bipartite matching to find out which player has the winning strategy, because we can split the data into two sets, namely $H$ and $V$, with relations between them. The Hopcroft–Karp algorithm can take two such sets and find a maximum-cardinality matching. Please correct me if I’m wrong: in the cases where there is a perfect matching, player 2 wins; otherwise, player 1 wins. Whenever there is a perfect matching, player 2 always has an answer to the hero that player 1 names: respond with the villain matched to that hero.
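Assuming that claim is right (perfect matching implies a player-2 win, otherwise player 1 wins), deciding the winner reduces to a single matching computation. A minimal sketch using Kuhn’s augmenting-path algorithm instead of Hopcroft–Karp for brevity; the function name and the data encoding (a list of `(hero_set, villain_set)` casts) are my own:

```python
def has_perfect_matching(heroes, villains, movies):
    """Player 2 wins iff the hero-villain graph has a perfect matching.

    A hero and a villain are adjacent if they share a movie. Uses
    Kuhn's augmenting-path algorithm; Hopcroft-Karp is the faster
    equivalent for large inputs.
    """
    adj = {h: set() for h in heroes}
    for cast_heroes, cast_villains in movies:
        for h in cast_heroes & set(heroes):
            adj[h] |= cast_villains & set(villains)

    match = {}  # villain -> hero currently matched to them

    def augment(h, seen):
        # Try to match hero h, possibly re-matching other heroes.
        for v in adj[h]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = h
                return True
        return False

    return all(augment(h, set()) for h in heroes)

# Example from the question: Whiplash and Ultron appear in no movie,
# so no perfect matching exists and player 1 should win.
heroes = {'Iron Man', 'Captain America', 'Thor', 'Spider-Man'}
villains = {'Whiplash', 'Ultron', 'Thanos', 'Vulture'}
movies = [
    ({'Iron Man', 'Captain America', 'Thor', 'Spider-Man'}, {'Thanos'}),
    ({'Iron Man', 'Spider-Man'}, {'Vulture'}),
]
player_two_wins = has_perfect_matching(heroes, villains, movies)  # False
```

Since the matching only needs to be computed once, the game-theoretic part costs nothing beyond the matching itself, so Hopcroft–Karp’s $O(E\sqrt{V})$ is essentially the efficient answer here.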

How would you solve this? Is there a better, more efficient solution than maximum bipartite matching?

How can I improve combat so my players don’t always use the strategy of focusing fire on one enemy at a time until it’s dead?

I’m DMing a 5e campaign with a group of four players. We’re all experienced with RPGs in general but not specifically with 5e.

The players are level 4: a Wizard, a Fighter, a Rogue, and a Circle of the Moon Druid.

My players have come to the conclusion that, given the mechanics of the game, it is much more effective to focus all their firepower on one creature at a time and avoid spreading damage. Their logic is that it really doesn’t matter whether a creature has 1 or 80 HP left: as long as it has more than 0, it has its full capacity to do damage. In effect, creatures are binary: either they are alive, and therefore have full capacity to act, or they are dead, in which case they don’t.

Unfortunately, I agree with this assessment, but I feel it makes the game less fun. Not because I’m looking for super-realistic combat, but because it limits combat strategy to “drop them one at a time”.

As such, they tend not to distribute their efforts or engage separately but instead swarm a single enemy, concentrate all their attacks on it, and then move on to the next. This feels to me like the most effective tactic, but also the least fun and least role-playing way of doing combat.

Is my players’ interpretation wrong, or am I handling combat in the wrong way? What am I missing?

How to avoid a boring late game in strategy games while still keeping victories satisfying?

A common thing I’ve noticed in strategy games (of all types: 4X, RTS, MOBA, etc.) is that most games eventually get to a point where it is fairly clear who is going to win, and the rest of the game just becomes going through the motions: as long as the winning player/team doesn’t make a major misstep, they will win.

This is just kind of the nature of strategy games. They inherently have a “snowball” effect. The gameplay is all about setting yourself up for success over your opponents in the future, and whoever does this better in the earlier stages of the game should win in the later stages. This happens in every strategy game to some extent, even the most classic. In Chess, it becomes increasingly harder to win if your opponent takes more and more of your pieces and forces your remaining pieces into tough situations.

As I said, this is just a fundamental part of the genre, so I’d hesitate to call it a problem. However, on occasion, in these types of games, you have matches where no player/team gains a significant advantage early, and the game comes down to the last turn. In my opinion, these are the most exciting and interesting matches you can have. Furthermore, when this doesn’t happen, the late stages of the game can feel very boring for everyone involved, where the winning player is just awaiting their inevitable victory, and the losing player their inevitable demise (this can be especially unfun for the losing player, as they probably have very few options, and it is just really unlikely that they are having a good time).

So it would be cool if we could design a strategy game that avoids consistently falling into this state, right? Well, I have seen a handful of games like this, where a losing player consistently has avenues to victory, no matter how far behind they are. The issue with this is that if an upset happens (say one player was dominating the whole game, and then a losing player makes one good play at the end of the game to win), that victory can feel very unsatisfying for the winning player, as they may feel they didn’t deserve it. Similarly, the player who was winning for most of the game may be very unhappy, as they may feel like victory was robbed from them and they didn’t deserve to lose. So essentially, no one is happy with the result. This approach may also make the early game less fun, as players may feel like it just doesn’t matter.

So is it possible to design a strategy game that avoids both of these issues? A game where we don’t consistently fall into a boring late game with a foregone conclusion, yet victories still feel satisfying and deserved? Or are these issues far too fundamental to strategy gameplay to overcome?

If this question is too vague on its own, then we can focus on 4X strategy games, as those are the games I have experience with, and that I am interested in designing.

Strategy for effective vulnerability research

I have been working on exploit development and reverse engineering for about a year (two months of that full time), but after gaining some solid knowledge I still have doubts. I want to ask a non-technical question. For example, suppose I am at the main function of Adobe Reader DC or Foxit: what next? There are many blocks, and it is easy to get lost among them, and we won’t reverse engineer the whole product because that would be endless. So the question is: how can one find vulnerable paths, or choose specific blocks to reverse? I was thinking about fuzzing and only reversing the blocks that crash, but the time I spend waiting for a crash could be used for other kinds of analysis. What would you recommend? So far I have been using tools like boofuzz and peachfuzz, and a bit of winafl + dynamorio, google sanitizers, libfuzzer, and other tools.