Why an error probability of 1/3 in BPP?

BPP is defined as the class of decision problems solvable by polynomial-time randomized algorithms with an error probability of at most 1/3.

But why was 1/3 chosen? If we have an algorithm whose error probability is some constant less than 1/2, then we can run it several times and take the most common result, obtaining an error probability of less than 1/3 while still staying in the same complexity class.
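To make that amplification concrete, here is a minimal sketch in Python of the exact majority-vote error after t independent runs (the function name and the example value p = 0.45 are mine, purely for illustration):

    from math import comb

    # Majority-vote amplification: run an algorithm whose per-run error
    # probability is the constant p an odd number of times t and output
    # the most common answer. The amplified error is the probability
    # that more than half of the runs are wrong.
    def majority_error(p: float, t: int) -> float:
        return sum(comb(t, i) * p**i * (1 - p)**(t - i)
                   for i in range((t + 1) // 2, t + 1))

    p = 0.45  # any constant strictly below 1/2
    for t in (1, 5, 25, 125):
        print(t, majority_error(p, t))
    # For fixed p < 1/2 the error dips below 1/3 after a constant number
    # of repetitions and then decays exponentially in t.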

So why isn’t BPP instead defined as the class of algorithms with error probability less than 1/2? Is there something special about 1/3?

Confidence intervals, confidence levels, and the probability of simple tests

It seems to be a simple problem, but I can't figure it out.

Let's say I would like to know whether there is any point in implementing a new feature, i.e. whether we should focus on it or not. Let's assume no other kind of test, such as questioning the users, is possible. The feature itself is simple; something like "webcam support in e-commerce for users paying for a premium account".

To be specific, I have 1500 premium users. I can say "the feature is worthwhile when at least 75% of clients use it". Great! We would like to run a fake-door test, where we implement just the button for the webcam, and when a user clicks on it we show them "we are implementing this feature right now, stay with us" or something similar (I know fake doors aren't the best method, but that is not the point here). I will "test" it for 14 days. In those 14 days, 350 clients come to my site and see this feature, and 265 of them click on the button.

What can I say about this feature? It seems like I can say "yes, we have to implement it, because 75% of users will use this feature" (75% of 350 is 262.5 < 265), so H0 (at least 75% use this feature) seems to hold. But that is not the whole truth, because there can be a HUGE error: I tested only around 23% of the clients.

What I am trying to achieve is:
I would like to be able to say: "With 95% confidence, at least 75% of clients will use this function, so we can implement it."

I am lost among all the confidence intervals, confidence levels, sample sizes, etc. Can someone walk me through getting to that confidence step by step, and explain what I can compute from these numbers (1500 premium users in total, 350 users saw the feature, 265 users used it)?
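To make the question concrete, here is a minimal sketch in Python of the two calculations I think I need — a one-sided test of H0: p ≤ 0.75 and a 95% Wilson interval — under the assumption that the 350 visitors are a random sample (it also ignores the finite-population correction for sampling 350 out of 1500):

    import math

    # Observed data from the question
    n = 350      # users who saw the fake door
    k = 265      # users who clicked
    p0 = 0.75    # threshold we want to demonstrate

    p_hat = k / n  # sample proportion, about 0.757

    # One-sided z-test of H0: p <= 0.75 vs H1: p > 0.75
    # (normal approximation)
    se0 = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se0
    print(f"p_hat = {p_hat:.3f}, z = {z:.2f}")
    # z is about 0.31, well below the 1.645 needed for one-sided
    # 95% confidence, so these data alone do not justify the claim.

    # 95% Wilson confidence interval for the true click rate
    zc = 1.96
    denom = 1 + zc**2 / n
    center = (p_hat + zc**2 / (2 * n)) / denom
    half = zc * math.sqrt(p_hat * (1 - p_hat) / n + zc**2 / (4 * n**2)) / denom
    print(f"95% Wilson CI: [{center - half:.3f}, {center + half:.3f}]")
    # roughly [0.71, 0.80]; the interval straddles 0.75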

Genome mutation probability

I am taking an extra course this semester, and we were given a series of questions for exam preparation. But I was unable to attend the discussion session, so I have no access to the solutions. I hope I can get some help here.

Question:

Within an evolutionary algorithm, a parent individual $X^{(i)}$ with a genome of $L$ bits has created $N$ offspring $Y_n^{(i)} = X^{(i)}$ ($n = 1, \dots, N$), each identical to the parent $X^{(i)}$.

To yield the new generation $Y_n^{(i+1)}$, the mutation operator modifies each of these $N$ offspring $Y_n^{(i)}$ by flipping each of the $N \cdot L$ bits independently with probability $p$.

Derive a formula for the probability $Q$ that none of the $N$ new individuals $Y_n^{(i+1)}$ is identical to the parent $X^{(i)}$.

I've got absolutely no idea how to approach this. Any suggestions?
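Not a worked solution, but a Monte-Carlo sanity check can at least confirm a candidate formula. A minimal sketch in Python, treating the closed form in the comments as an assumption to verify rather than the official answer:

    import random

    # Monte-Carlo check of the (assumed) closed form
    # Q = (1 - (1 - p)**L)**N : the probability that every offspring
    # differs from the parent in at least one of its L bits.
    L, N, p = 8, 5, 0.1
    trials = 200_000

    hits = 0
    for _ in range(trials):
        # an offspring equals the parent iff none of its L bits flip
        all_mutated = all(
            any(random.random() < p for _ in range(L))  # >= 1 flipped bit
            for _ in range(N)
        )
        hits += all_mutated

    print("simulated Q:", hits / trials)
    print("closed form:", (1 - (1 - p) ** L) ** N)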

Tournament Selection Probability

I am taking an extra course this semester, and we were given a series of questions for exam preparation. But I was unable to attend the discussion session, so I have no access to the solutions. I hope I can get some help here.

Question:

Within an evolutionary algorithm, a probabilistic, rank-based parent selection selects ρ = 4 parents from a population of P = 32 individuals.

The method shall be tournament selection, starting with k = 16 different individuals chosen randomly from the population.

Calculate the probability ω that the best individual from the population (P = 32) is among the ρ = 4 selected parents.

My Solution:

Since tournament selection always selects the best individual out of the k initial individuals, the probability that the best individual from the entire population P is selected equals the probability that this individual is randomly picked as one of those initial k = 16 individuals.

So for 1 trial,

ω = 1/32 

And for 4 trials, ω is the probability that the best individual is selected in at least one of the 4 trials, i.e.

ω = P(at least 1 of 4) = 1 - P(0 of 4)
                       = 1 - (1 - 1/32)^4
                       = 0.1192

Does this seem correct?
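A quick simulation may help to check this. A minimal sketch in Python, under the assumption that the ρ = 4 tournaments are drawn independently and each draws k = 16 distinct individuals from the population (with individual 0 standing in for the best):

    import random

    # Monte-Carlo check of the reasoning above.
    P, k, rho = 32, 16, 4
    trials = 100_000

    hits = sum(
        any(0 in random.sample(range(P), k) for _ in range(rho))
        for _ in range(trials)
    )
    print("simulated omega:", hits / trials)
    # Note: a single tournament contains the best individual with
    # probability k/P = 1/2, so under independent tournaments one would
    # expect 1 - (1 - k/P)**rho rather than 1 - (1 - 1/P)**rho.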

Conditional probability of dependent random variables

Suppose I have 3 random variables:

$$X \sim \mbox{Bernoulli}(1/2) \qquad Z \sim \mbox{Normal}(0,1) \qquad Y = X+Z$$

How do I compute the conditional probability:

$$P(X=1 \mid Y=y)$$

Attempt 1:

Probability[X == 1 \[Conditioned] X + Z == y,
  {X \[Distributed] BernoulliDistribution[1/2],
   Z \[Distributed] NormalDistribution[]}]

Attempt 2:

D[Probability[X == 1 \[Conditioned] X + Z >= y,
   {X \[Distributed] BernoulliDistribution[1/2],
    Z \[Distributed] NormalDistribution[]}], y]

Attempt 3:

Likelihood[
  TransformedDistribution[X + Z,
    {X \[Distributed] BernoulliDistribution[1/2],
     Z \[Distributed] NormalDistribution[]}],
  {y}]

Pencil-and-paper attempt:

$$P(X=1 \mid Y=y) = \frac{P(X=1,\, Y=y)}{P(Y=y)}$$
$$= \frac{P(X=1,\, X+Z=y)}{P(Y=y)}$$
$$= \frac{P(X=1)\,P(Z=y-1)}{P(Y=y)}$$
$$= \frac{P(X=1)\,P(Z=y-1)}{P(X=1)\,P(Z=y-1)+P(X=0)\,P(Z=y-0)}$$


$$P(Z=y)=\frac{e^{-\frac{y^2}{2}}}{\sqrt{2 \pi }} \qquad P(Z=y-0)=\frac{e^{-\frac{y^2}{2}}}{\sqrt{2 \pi }} \qquad P(Z=y-1)=\frac{e^{-\frac{1}{2} (y-1)^2}}{\sqrt{2 \pi }}$$
$$P(X=1)=P(X=0)=\frac{1}{2}$$


$$P(X=1 \mid Y=y) = \frac{e^{-\frac{1}{2} (y-1)^2}}{2 \sqrt{2 \pi } \left(\frac{e^{-\frac{y^2}{2}}}{2 \sqrt{2 \pi }}+\frac{e^{-\frac{1}{2} (y-1)^2}}{2 \sqrt{2 \pi }}\right)}$$

$$P(X=1 \mid Y=y) = \frac{e^y}{e^y+\sqrt{e}}$$
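As a numeric cross-check of that last expression (independent of the Mathematica attempts), here is a small Python simulation that conditions on Y landing in a narrow window around an arbitrary test point y0 = 0.8 (window width and sample count are arbitrary choices):

    import math, random

    # Numeric sanity check of the closed form above:
    # P(X=1 | Y=y) = e^y / (e^y + sqrt(e)), a logistic in (y - 1/2).
    y0, eps = 0.8, 0.05       # condition on Y in a small window around y0
    samples = 1_000_000

    num = den = 0
    for _ in range(samples):
        x = random.random() < 0.5           # X ~ Bernoulli(1/2)
        y = x + random.gauss(0.0, 1.0)      # Y = X + Z, Z ~ Normal(0,1)
        if abs(y - y0) < eps:
            den += 1
            num += x
    print("empirical:", num / den)
    print("formula:  ", math.exp(y0) / (math.exp(y0) + math.sqrt(math.e)))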

The Probability of Success: Advantage vs. Disadvantage (vs. standard d20)

I was curious about the probability change when a character has advantage vs. disadvantage when rolling a d20 in D&D. Here’s what I came up with:

[Image: chart of success probabilities for advantage, disadvantage, and a straight d20]

(For anyone who wants to check my math or see the specific percentages, here’s the sheet I used)

https://docs.google.com/spreadsheets/d/13oZBWKcBJy-_WoGGIkhE5kKrePtNrc3u2URu2B1gbmA/edit#gid=0

If my numbers are correct, here are my takeaways:

  1. With a single d20, a DC of 11 is a 50/50 chance; with advantage, DC 15 is 50/50; with disadvantage, it's DC 7.

  2. Rolling an 11 with disadvantage is twice as hard; rolling an 11 with advantage is 50% more likely

  3. It’s a pretty good system to represent experience (easy things are easier) vs. being a novice (everything is hard).

Thoughts? Corrections to my math?
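For anyone who wants to verify takeaways 1 and 2 without opening the sheet, here is a minimal sketch of the underlying closed forms (Python; the function names are mine):

    # Success probability of rolling at or above a DC on a d20,
    # straight vs. advantage (best of 2) vs. disadvantage (worst of 2).
    def p_straight(dc: int) -> float:
        # succeed on a roll of dc..20 out of 20 faces
        return (21 - dc) / 20

    def p_advantage(dc: int) -> float:
        # fail only if both dice land below dc
        return 1 - ((dc - 1) / 20) ** 2

    def p_disadvantage(dc: int) -> float:
        # succeed only if both dice land at or above dc
        return ((21 - dc) / 20) ** 2

    for dc in (7, 11, 15):
        print(dc, p_straight(dc), p_advantage(dc), p_disadvantage(dc))
    # DC 11: 0.50 straight, 0.75 with advantage, 0.25 with disadvantage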

Probability of sum of 2 random dice out of a 3d6 pool

With AnyDice it's pretty easy to calculate probabilities for the highest and lowest 2 of a 3d6 pool, namely with:

output [highest 2 of 3d6]
output [lowest 2 of 3d6]

However, this has a bias towards the highest and lowest thrown dice. What I want to calculate is the distribution of possible results without that bias. The reasoning behind this is that I want my players to control the outcome. It's not that the highest or lowest outcome is necessarily worse or better; it's simply that I want to offer them a decision. They choose two of the dice, add them together, and that is the result. I want to give a roll that decides something like an encounter more meaning and mental impact ("why did you pick those dice!").

I had hoped AnyDice would have a random function, something like [random 2 of 3d6], but that doesn't exist. My hypothesis was that I could simply add the percentages of [highest 2 of 3d6] and [lowest 2 of 3d6] and divide by 2 (since I'm adding two probability distributions that each total 100%).

But somehow this doesn't feel right. It doesn't include the possibility of a player picking the highest and the lowest die instead of the two highest or the two lowest.

I've been doing some tutorials in AnyDice, and I reckon this definitely CAN be done with a function in which the following would happen:

Roll 3d6. Then roll a d3 twice (not 2d3, as that would add them up).
If the d3 rolls are equal, reroll one until you get two unique d3 rolls.
Use the two unique d3 rolls as positions into the 3d6 pool, take those
two dice, add them together, and show the results.

Another approach could be to simply take the average of a single die in the 3d6 pool and multiply it by 2, theoretically approximating all the possible results. This is incorrect as well, since it includes all three dice, and an average only approximates the expected value, not the distribution of an actual two-die sum.

Perhaps I'm overthinking this calculation by using AnyDice. As the dice order isn't relevant at all, I simply need to know all possible dice combinations a 3d6 pool can have — not the sums, but the combinations. This is super simple, because every die has 6 sides, so 3d6 has 6 * 6 * 6 = 216 total combinations; this includes repetition, since I am interested in the probability of each throw. However, I again don't need all three dice, only 2, which for the sake of calculation can be presumed to be picked randomly.

Another option I can think of in AnyDice is:

Roll 3d6 and 1d3. Remove from the 3d6 sequence the die in the position
given by the 1d3. Add up the remaining sequence and output the
probabilities.

Okay, long wall of text, but I am just not familiar enough with AnyDice to figure this out. Any help is greatly appreciated.
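For what it's worth, the distribution can also be checked outside AnyDice. Here is a short Python enumeration of all 216 rolls and the 3 equally likely ways to pick two of the three dice; since the pick is made without looking at the values, the result coincides exactly with plain 2d6:

    from itertools import product
    from collections import Counter

    # Enumerate all 6**3 = 216 equally likely 3d6 outcomes and, for each,
    # all 3 equally likely choices of which die is NOT picked.
    counts = Counter()
    for dice in product(range(1, 7), repeat=3):
        for skip in range(3):  # index of the die left out
            counts[sum(dice) - dice[skip]] += 1

    n = sum(counts.values())  # 216 * 3 = 648 equally likely cases
    for total in sorted(counts):
        print(f"{total:2d}: {counts[total] / n:6.2%}")
    # The output matches plain 2d6 exactly: picking two dice "blind"
    # (without seeing the values) is the same as never rolling the
    # third die at all.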

Cuckoo hashing with a stash: how tight are the bounds on the failure probability?

I was reading this very good summary of Cuckoo hashing.

It includes a result (page 5) that:

A stash of constant size reduces the probability of any failure from $\Theta(1/n)$ to $\Theta(1/n^{s+1})$ for the case of $d = 2$ choices.

It references the paper KMW08. But KMW08 only has the result (Theorem 2.1) that:

For every constant integer $s \geq 1$, for a sufficiently large constant $\alpha$, the size $S$ of the stash after all items have been inserted satisfies $\Pr(S \geq s) = O(n^{-s})$.

Note that the $s$ in the two theorems is used slightly differently: in the first, a stash of size $s$ does not count as a failure; in the second, it does. This is why the first has $s+1$ and the second has $s$.

The remaining difference between the two is that the first uses $\Theta$-notation, whereas the second uses only $O$-notation. So my questions:

  • Do we know that the failure probability is $\Omega(n^{-(s+1)})$?
  • If so, do we know the constants in the $\Theta(n^{-(s+1)})$ expression?

And if so, which papers presented these results?