Does a distributional error probability for deterministic algorithms imply the same error probability for randomized algorithms?

Consider some problem $P$, and assume we sample the problem instance u.a.r. from some set $I$. Let $p$ be a lower bound on the distributional error of deterministic algorithms on $I$, i.e., every deterministic algorithm fails on at least a $p$-fraction of $I$.

Does this also imply that every randomized algorithm $\mathcal{R}$ must fail with probability at least $p$ if, again, we sample the inputs u.a.r. from $I$?

My reasoning is as follows. Let $R$ be the random variable representing the random bits used by the algorithm. Then
\begin{align}
\Pr[\text{$\mathcal{R}$ fails}] &= \sum_\rho \Pr[\text{$\mathcal{R}$ fails and } R=\rho] \\
&= \sum_\rho \Pr[\text{$\mathcal{R}$ fails} \mid R=\rho] \Pr[R=\rho] \\
&\ge p \sum_\rho \Pr[R=\rho] = p.
\end{align}
For the inequality, I used the fact that once we have fixed $R = \rho$, we effectively have a deterministic algorithm, and by assumption every deterministic algorithm fails on at least a $p$-fraction of the uniformly sampled inputs.

I can’t find the flaw in my reasoning, but I would be quite surprised if this implication were indeed true.
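The averaging step can at least be sanity-checked numerically. Below is a toy sketch in R (the failure sets, weights, and sizes are all made up): each deterministic algorithm fails on a $p$-fraction of the inputs, a randomized algorithm is modeled as a distribution over them, and the mixture’s failure probability over a uniform input is the weighted average of the per-algorithm failure rates, hence at least $p$.

```r
# Toy check of the averaging argument: inputs are 1..n, and each
# deterministic algorithm fails on a fixed set of at least p*n inputs.
# A randomized algorithm is a probability distribution over them.
set.seed(42)
n <- 100; p <- 0.3; k <- 10

# fails[d, i] is TRUE iff deterministic algorithm d fails on input i
fails <- t(replicate(k, {
  f <- rep(FALSE, n)
  f[sample(n, ceiling(p * n))] <- TRUE   # exactly a p-fraction fails here
  f
}))
weights <- runif(k)
weights <- weights / sum(weights)        # distribution over the algorithms

# Pr[R fails] = sum_rho Pr[R = rho] * Pr[R fails | R = rho]
pr_fail <- sum(weights * rowMeans(fails))
pr_fail >= p                             # TRUE: the mixture is still >= p
```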

Offline bin packing problem: probability of a non-optimal solution for the first-fit-decreasing algorithm

For the offline bin packing problem (an unbounded number of bins, where each bin has a fixed size, and an input of known size that can be sorted beforehand), the first-fit-decreasing algorithm (FFD) gives a solution whose number of bins is at most $\frac{11}{9} S_{opt} + \frac{6}{9}$, or, for the sake of simplification, around $23\%$ larger than the optimal number of bins ($S_{opt}$).
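For concreteness, here is a minimal FFD sketch in R (my own illustration; the item sizes and capacity in the example call are arbitrary):

```r
# First-fit-decreasing: sort items largest-first, place each item into
# the first open bin that still has room, opening a new bin if none does.
ffd <- function(items, capacity) {
  items <- sort(items, decreasing = TRUE)
  bins <- numeric(0)                    # remaining capacity per open bin
  for (x in items) {
    i <- which(bins >= x)[1]            # first bin that fits, or NA
    if (is.na(i)) bins <- c(bins, capacity - x)  # open a new bin
    else bins[i] <- bins[i] - x
  }
  length(bins)                          # number of bins used
}

ffd(c(0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6), capacity = 1)  # 4 bins
```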

Has the probability of getting a non-optimal solution using FFD ever been calculated? Or, in other words, what is the probability of getting a solution whose size is exactly $S_{opt}$? Or do we have no choice but to assume that the solution size is uniformly distributed over the interval $[S_{opt}, \frac{11}{9} S_{opt} + \frac{6}{9}]$? Or, as another alternative, is the solution size so dependent on the input that the question makes no sense at all?

And, as a related question, is there any research on which NP-hard or NP-complete problem admits a polynomial-time approximation algorithm with the highest probability of producing an optimal solution?

Probability of winning a turn-based game with a random element

I am preparing for a programming exam on probability theory and I stumbled across a question I can’t solve.

Given a bag containing some given number of white stones $w$ and some given number of black stones $b$, two players take turns drawing stones uniformly at random from the bag. After each player’s turn, a stone chosen uniformly at random vanishes from the bag, and only then does the other player take their turn. If a player draws a white stone, they instantly lose and the game ends. If the bag becomes empty, the player who went second wins.

What is the overall probability that the player who went second wins?

I assume it’s a dynamic programming question, though I can’t figure out the recurrence. Any help would be greatly appreciated. 🙂

Example input: $w = 3$, $b = 4$; then the answer is, I believe, $0.4$, which I arrived at by computing all possible ways the game could go by hand, so not very efficiently.
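One way to set up the recursion, under the reading that the drawn stone leaves the bag, one further random stone then vanishes (if any remain), and an empty bag is a win for the second player. A plain recursive sketch in R (memoization could be added for larger $w, b$); whether it reproduces the hand-computed $0.4$ depends on whether this reading of the vanish and empty-bag rules is the intended one:

```r
# p2wins(w, b, mover): probability that the second player wins when it
# is `mover`'s (1 or 2) turn to draw from w white and b black stones.
p2wins <- function(w, b, mover = 1) {
  if (w + b == 0) return(1)              # empty bag: second player wins
  n <- w + b
  p <- (w / n) * (mover == 1)            # mover draws white and loses
  if (b > 0) {                           # mover draws black
    if (n == 1) {
      p <- p + b / n                     # bag is now empty
    } else {
      m <- n - 1                         # one random stone vanishes
      p <- p + (b / n) * ((w / m)       * p2wins(w - 1, b - 1, 3 - mover) +
                          ((b - 1) / m) * p2wins(w,     b - 2, 3 - mover))
    }
  }
  p
}

p2wins(3, 4)   # compare against the hand-computed 0.4 for w = 3, b = 4
```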

How to calculate the probability of values under a Weibull distribution?

I have genomic data showing interactions between genomic regions, and I would like to understand which interactions are statistically significant.

The dataset looks like:

```
chr  start1  end1  start2  end2  normalized count
1    500     1000  2000    3000  1.5
1    500     1000  4500    5000  3.2
1    2500    3500  1000    2000  4
```

So, I selected a random subset of the data (as background), fitted the normalized counts to a Weibull distribution using the fitdistrplus R package, and estimated parameters such as scale and shape for those data (PD = fitdist(data$`normalized count`, 'weibull')).

Now I would like to calculate the probability of each observation (like a p-value for each data point) under the fitted Weibull distribution.

But I do not know how to do that. Can I calculate the mean of the distribution, then compute a Z-statistic for each observation and convert it to a p-value?

For example, the random background was fitted to a Weibull with the following parameters:

```
scale: 0.12
shape: 0.23
mean:  20
var:   12
```

How can I calculate the probability of a set of data points like $(1.2, 2.3, 4.5, 5.0, 6.1)$?
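One way to get a per-observation tail probability from the fitted parameters, sketched in R: evaluate the fitted Weibull’s upper tail with pweibull rather than going through a Z-statistic. The background below is simulated as a stand-in for the real normalized counts, and treating large counts as the interesting direction is an assumption:

```r
# Sketch: per-observation upper-tail probabilities under a fitted Weibull.
library(fitdistrplus)

set.seed(1)
background <- rweibull(1000, shape = 1.2, scale = 2.0)  # stand-in data

fit <- fitdist(background, "weibull")
shape_hat <- fit$estimate[["shape"]]
scale_hat <- fit$estimate[["scale"]]

obs <- c(1.2, 2.3, 4.5, 5.0, 6.1)
# p-value per point: P(X >= x) under the fitted distribution
pvals <- pweibull(obs, shape = shape_hat, scale = scale_hat,
                  lower.tail = FALSE)
round(pvals, 4)
```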

A player rolls four 20-sided dice, takes the lowest value, and ignores the rest. What is the probability that this value is at least 7?

I’m designing a tabletop game, and I need to figure out how to calculate a few probabilities:

  1. Roll 3 20-sided dice, take the highest value. What is the probability of it being 7 or higher? 15 or higher?
  2. Roll 4 20-sided dice, take the highest value. What is the probability of it being 7 or higher? 15 or higher?
  3. Roll 3 20-sided dice, take the lowest value. What is the probability of it being 7 or higher? 15 or higher?
  4. Roll 4 20-sided dice, take the lowest value. What is the probability of it being 7 or higher? 15 or higher?

How can I do this? Could you explain to me how this works, or even better – give me a simple formula?
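For independent fair dice there are closed forms: the maximum is at least $t$ unless all dice land below $t$, and the minimum is at least $t$ only when every die does. So $\Pr[\max \ge t] = 1 - \left(\frac{t-1}{20}\right)^n$ and $\Pr[\min \ge t] = \left(\frac{21-t}{20}\right)^n$. A small R sketch covering all four cases:

```r
# P(max of n fair s-sided dice >= t): complement of "all dice below t"
p_max_at_least <- function(n, t, sides = 20) 1 - ((t - 1) / sides)^n
# P(min of n fair s-sided dice >= t): every die lands on t or above
p_min_at_least <- function(n, t, sides = 20) ((sides - t + 1) / sides)^n

p_max_at_least(3, 7);  p_max_at_least(3, 15)   # 0.973,  0.657
p_max_at_least(4, 7);  p_max_at_least(4, 15)   # 0.9919, 0.7599
p_min_at_least(3, 7);  p_min_at_least(3, 15)   # 0.343,  0.027
p_min_at_least(4, 7);  p_min_at_least(4, 15)   # 0.2401, 0.0081
```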