How does Halfling Luck affect the probability of surviving my death saves?

The probability of a mere human surviving a series of unmodified death saving throws has been established as 59.5125%. But Halfling Luck applies to death saving throws in a dramatic way: A roll of 1 on a death save normally counts as two failures, but halflings are much less likely to suffer this result.

Previous work in the field of calculating the effects of Halfling Luck does not apply directly to death saving throws, because of this asymmetry between races concerning how terrible it is to roll a 1.

How much more likely is a halfling to survive a series of death saving throws than one of those poor, unfortunate non-halflings?
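For reference, here is how I reproduce the 59.5125% baseline with a small recursion over (successes, failures), written as an R sketch; my assumption is that Halfling Luck only changes the per-roll probabilities, because a natural 1 is rerolled once and the new roll must be used:

    # Recursion over (successes, failures): 3 successes stabilize, 3 failures kill.
    # Per-roll probabilities are passed in so the same function covers both the
    # plain d20 and the Halfling Luck case (a natural 1 is rerolled once).
    survive <- function(s = 0, f = 0, p) {
      if (s >= 3) return(1)                      # stabilized
      if (f >= 3) return(0)                      # dead
      p[["crit"]]    * 1 +                       # natural 20: back on your feet
      p[["success"]] * survive(s + 1, f, p) +    # 10-19: one success
      p[["fail"]]    * survive(s, f + 1, p) +    # 2-9: one failure
      p[["one"]]     * survive(s, f + 2, p)      # natural 1: two failures
    }

    # Plain d20.
    plain <- c(crit = 1/20, success = 10/20, fail = 8/20, one = 1/20)
    survive(p = plain)      # 0.595125 -- the 59.5125% baseline

    # Halfling Luck: the 1/20 chance of a natural 1 is rerolled once,
    # so that probability mass is redistributed over the reroll outcomes.
    halfling <- c(crit    = 1/20  + (1/20) * (1/20),
                  success = 10/20 + (1/20) * (10/20),
                  fail    = 8/20  + (1/20) * (8/20),
                  one     = (1/20) * (1/20))
    survive(p = halfling)

Comparing the two outputs should answer the question, assuming I have redistributed the reroll probabilities correctly.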

Outcomes and payoffs in probability games

I am reading about probability from an AI point of view in the book Artificial Intelligence: A Modern Approach by Russell and Norvig. Below is a text snippet from Chapter 13.

One argument for the axioms of probability, first stated in 1931 by Bruno de Finetti (and translated into English in de Finetti (1993)), is as follows: If an agent has some degree of belief in a proposition a, then the agent should be able to state odds at which it is indifferent to a bet for or against a. Think of it as a game between two agents: Agent 1 states, “my degree of belief in event a is 0.4.” Agent 2 is then free to choose whether to wager for or against a at stakes that are consistent with the stated degree of belief. That is, Agent 2 could choose to accept Agent 1’s bet that a will occur, offering $6 against Agent 1’s $4. Or Agent 2 could accept Agent 1’s bet that ¬a will occur, offering $4 against Agent 1’s $6. Then we observe the outcome of a, and whoever is right collects the money. If an agent’s degrees of belief do not accurately reflect the world, then you would expect that it would tend to lose money over the long run to an opposing agent whose beliefs more accurately reflect the state of the world.

If Agent 1 expresses a set of degrees of belief that violate the axioms of probability theory then there is a combination of bets by Agent 2 that guarantees that Agent 1 will lose money every time.

The table below shows that if Agent 2 chooses to bet $4 on a, $3 on b, and $2 on ¬(a ∨ b), then Agent 1 always loses money, regardless of the outcomes for a and b. De Finetti’s theorem implies that no rational agent can have beliefs that violate the axioms of probability.

[table from the book: Agent 1’s beliefs, Agent 2’s three bets, and Agent 1’s payoff under each of the four outcomes of a and b]

My question is: how do we get the values in the outcome columns? For example, in the a, b column, how do we get -6, -7 and 2? Kindly explain.
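Here is how I have been trying to read the table, as an R sketch. My assumptions (taken from the book's figure) are that Agent 1's stated beliefs are 0.4 for a, 0.3 for b and 0.8 for a ∨ b, so the three bets are $4 on a against Agent 1's $6, $3 on b against $7, and $2 on ¬(a ∨ b) against $8, and that Agent 1 pays its own stake whenever the proposition Agent 2 bet on comes true and collects Agent 2's stake otherwise:

    # Agent 1's payoff from each of Agent 2's three bets, under the reading that
    # Agent 1 pays its own (larger) stake when the bet-on proposition is true and
    # collects Agent 2's (smaller) stake when it is false.
    payoff <- function(a, b) {
      bet_a   <- if (a) -6 else 4            # Agent 2 bets $4 on a, Agent 1 stakes $6
      bet_b   <- if (b) -7 else 3            # Agent 2 bets $3 on b, Agent 1 stakes $7
      bet_nab <- if (!(a || b)) -8 else 2    # Agent 2 bets $2 on not-(a or b), Agent 1 stakes $8
      c(on_a = bet_a, on_b = bet_b, on_not_a_or_b = bet_nab,
        total = bet_a + bet_b + bet_nab)
    }

    rbind("a, b"         = payoff(TRUE,  TRUE),    # -6, -7, 2 -> total -11
          "a, not-b"     = payoff(TRUE,  FALSE),
          "not-a, b"     = payoff(FALSE, TRUE),
          "not-a, not-b" = payoff(FALSE, FALSE))

If that reading is right, the -6 is Agent 1 paying out on the bet on a, the -7 is the bet on b, and the +2 is Agent 1 collecting on the bet on ¬(a ∨ b). Is that the intended bookkeeping?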

Thanks for your time

Distributional error probability of deterministic algorithm implies error probability of randomized algorithm?

Consider some problem $P$ and let’s assume we sample the problem instance u.a.r. from some set $I$. Let $p$ be a lower bound on the distributional error of a deterministic algorithm on $I$, i.e., every deterministic algorithm fails on at least a $p$-fraction of $I$.

Does this also imply that every randomized algorithm $\mathcal{R}$ must fail with probability at least $p$ if, again, we sample the inputs u.a.r. from $I$?

My reasoning is as follows: Let $R$ be the random variable representing the random bits used by the algorithm. \begin{align} \Pr[\text{$\mathcal{R}$ fails}] &= \sum_\rho \Pr[\text{$\mathcal{R}$ fails and $R=\rho$}] \\ &= \sum_\rho \Pr[\text{$\mathcal{R}$ fails} \mid R=\rho] \, \Pr[R=\rho] \\ &\ge p \sum_\rho \Pr[R=\rho] = p. \end{align} For the inequality, I used the fact that once we have fixed $R = \rho$, we effectively have a deterministic algorithm.

I can’t find the flaw in my reasoning, but I would be quite surprised if this implication were indeed true.

Offline bin-packing problem: probability of a non-optimal solution for the first-fit-decreasing algorithm

For the offline bin-packing problem (an unbounded number of bins, each of a fixed size, and an input of known size that can be sorted beforehand), the first-fit-decreasing algorithm (FFD) gives a solution whose number of bins is at most $\frac{11}{9} S_{opt} + \frac{6}{9}$, or, for the sake of simplification, around $23\%$ bigger than the optimal number of bins ($S_{opt}$).

Has the probability of getting a non-optimal solution using FFD ever been calculated? Or, in other words, what is the probability of getting a solution whose exact size is $S_{opt}$? Or do we have no choice but to assume that the solution size is uniformly distributed over the interval $[S_{opt}, \frac{11}{9} S_{opt} + \frac{6}{9}]$? Or, as another alternative I can think of right now, is the solution size so dependent on the input that the question makes no sense at all?

And, as a related question, is there any research on which NP-hard or NP-complete problem has a polynomial-time approximation algorithm with the highest probability of producing an optimal solution?
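Coming back to FFD specifically, this is the kind of empirical probe I had in mind, as an R sketch (the instance distribution is made up, and matching the trivial lower bound $\lceil \sum \text{sizes} / \text{capacity} \rceil$ is sufficient but not necessary for optimality, so it only gives a lower bound on how often FFD is optimal):

    # First-fit decreasing: sort the items in decreasing order and put each item
    # into the first open bin with enough remaining capacity, opening a new bin
    # whenever nothing fits. Returns the number of bins used.
    ffd <- function(sizes, cap = 1) {
      sizes <- sort(sizes, decreasing = TRUE)
      bins <- numeric(0)               # remaining capacity of each open bin
      for (s in sizes) {
        i <- which(bins >= s)[1]       # first open bin that still fits this item
        if (is.na(i)) bins <- c(bins, cap - s) else bins[i] <- bins[i] - s
      }
      length(bins)
    }

    # How often does FFD hit the trivial lower bound ceiling(sum(sizes) / cap)?
    # Hitting it proves optimality; missing it proves nothing.
    set.seed(1)
    hits <- replicate(10000, {
      sizes <- runif(30, 0.05, 0.7)    # made-up instance distribution
      ffd(sizes) == ceiling(sum(sizes))
    })
    mean(hits)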

Probability of winning a turn-based game with a random element

I am preparing for a programming exam on probability theory and I stumbled across a question I can’t solve.

Given a bag containing a given number of white stones $w$ and black stones $b$, two players take turns drawing stones uniformly at random from the bag. After each player’s turn a stone, chosen uniformly at random, vanishes, and only then does the other player take their turn. If a white stone is drawn, the player who drew it instantly loses and the game ends. If the bag becomes empty, the player who played second wins.

What is the overall probability that the player who played second wins?

I assume it’s a dynamic programming question, though I can’t figure out the recursion formula. Any help would be greatly appreciated. 🙂

Example input: $w = 3$, $b = 4$; the answer is, I believe, 0.4, which I arrived at by computing by hand all the possible ways the game can go, so not very efficiently.
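Since I am not even sure my hand computation reflects the right reading of the rules, here is the Monte Carlo sanity check I would use, as an R sketch under one literal reading (the drawn stone leaves the bag; a white draw ends the game immediately; otherwise one further stone vanishes at random; an empty bag is a win for the second player). If its estimate disagrees with my 0.4, my reading of the rules is probably the problem:

    # Simulate one game and return the winner (1 or 2), under the reading that the
    # drawn stone is removed, a white draw loses immediately, one extra random
    # stone vanishes after a surviving turn, and an empty bag is a win for player 2.
    simulate_game <- function(w, b) {
      player <- 1
      repeat {
        n <- w + b
        if (n == 0) return(2)                       # empty bag: second player wins
        if (runif(1) < w / n) return(3 - player)    # drew white: current player loses
        b <- b - 1                                  # drew (and removed) a black stone
        n <- w + b
        if (n == 0) return(2)
        if (runif(1) < w / n) w <- w - 1 else b <- b - 1   # one more stone vanishes
        player <- 3 - player
      }
    }

    set.seed(1)
    wins <- replicate(1e5, simulate_game(3, 4))
    mean(wins == 2)    # estimated probability that the second player wins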

How to calculate the probability of values under a Weibull distribution?

I have genomic data showing interactions between genomic regions, and I would like to understand which interactions are statistically significant.

The dataset looks like:

chr  start1  end1  start2  end2  normalized count
1    500     1000  2000    3000  1.5
1    500     1000  4500    5000  3.2
1    2500    3500  1000    2000  4

So I selected a random subset of the data (as background), fitted the normalized counts to a Weibull distribution using the fitdistrplus R package, and estimated parameters such as scale and shape for that set of data (PD = fitdist(data$`normalized count`, 'weibull')).

Now I would like to calculate the probability of each observation (like a p-value for each data point) under the fitted Weibull distribution.

But I do not know how to do that. Can I calculate the mean of the distribution, then compute a Z-statistic for each observation and convert it to a p-value?

For example, the random background fitted to a Weibull has the parameters below:

scale: 0.12, shape: 0.23, mean: 20, var: 12

How can I calculate the probability of a set of data points like (1.2, 2.3, 4.5, 5.0, 6.1)?
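What I have so far is below, as an R sketch; I am unsure whether the Z-statistic detour is needed, or whether the upper-tail Weibull probability from pweibull is already the per-observation p-value I am after (the background vector is simulated here only to make the snippet self-contained; in my case it would be data$`normalized count`):

    library(fitdistrplus)

    # Stand-in for the real background frequencies.
    background <- rweibull(1000, shape = 2, scale = 3)
    PD <- fitdist(background, "weibull")            # estimates shape and scale

    obs <- c(1.2, 2.3, 4.5, 5.0, 6.1)
    p_values <- pweibull(obs,
                         shape = PD$estimate[["shape"]],
                         scale = PD$estimate[["scale"]],
                         lower.tail = FALSE)        # P(X >= x) under the fitted Weibull
    p_values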