Surprise confusion

I’ve been reading about the surprise mechanic and I’m still confused about which interpretation to use. There are three methods of handling surprise that I see all over the internet.

First: Group Surprise Check

To make a group surprise check, at least half of the PCs must beat the highest passive Perception among the monsters for the surprise to succeed.
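For concreteness, the group-check reading might be sketched like this (a hypothetical helper, not rules text; how ties are broken and where exactly "half" falls are my assumptions):

```python
def group_surprise_check(pc_stealth_rolls, monster_passive_perceptions):
    """Group-check reading: the ambush succeeds if at least half of the
    PCs' Stealth rolls beat the highest passive Perception among the
    monsters. (Strict 'beats' and the half threshold are assumptions.)"""
    threshold = max(monster_passive_perceptions)
    successes = sum(roll > threshold for roll in pc_stealth_rolls)
    return successes >= len(pc_stealth_rolls) / 2
```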

Second: one failed Stealth check, and no one is surprised.

If one of the PCs rolls Stealth lower than any one of the monsters’ passive Perception scores, the surprise is botched for everyone.

Third: Some are surprised, some are not.

PC 1 rolls 14, PC 2 rolls 14, PC 3 rolls 12, PC 4 rolls 11.

Monster 1 has a PP of 15, Monster 2 a PP of 15, Monster 3 a PP of 13, Monster 4 a PP of 10.

PCs 1 and 2 surprise Monsters 3 and 4, but not Monsters 1 and 2.
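One way to mechanize this third, per-creature reading (again a hypothetical helper, not rules text; I use a strict "beats", i.e. roll > PP, since the example has no ties):

```python
def pairwise_surprise(pc_stealth_rolls, monster_passive_perceptions):
    """Per-creature reading: each PC's Stealth roll is compared against each
    monster's passive Perception (PP) separately. Returns, for each PC
    (numbered from 1), the monsters (numbered from 1) whose PP that PC beats."""
    return {pc: [m for m, pp in enumerate(monster_passive_perceptions, 1)
                 if roll > pp]
            for pc, roll in enumerate(pc_stealth_rolls, 1)}
```

Called as `pairwise_surprise([14, 14, 12, 11], [15, 15, 13, 10])`, it reproduces the example: PCs 1 and 2 beat Monsters 3 and 4 but not Monsters 1 and 2.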

I know the group check can be used as an optional rule for this. But what about the second and third methods? Which is correct? Which should be used?

Contingency Table Confusion NLP

Hello. For the contingency table — [true positive, false negative, false positive, true negative] — I am having a hard time remembering the difference between these terms, because they are all composed of very similar words used in opposing contexts. The only ones that make sense to me are true positive and false negative; the other ones I always get mixed up. Is there some quick mnemonic I can use?
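For concreteness, here is a minimal sketch (Python, helper name mine) of how the four cells are counted for binary labels:

```python
def contingency_cells(actual, predicted):
    """Count the four contingency-table cells for binary labels
    (1 = positive, 0 = negative). In each term, the second word is what
    the model predicted; the first word says whether that prediction
    matched the actual label."""
    tp = sum(p == 1 and a == 1 for a, p in zip(actual, predicted))  # true positive
    fn = sum(p == 0 and a == 1 for a, p in zip(actual, predicted))  # false negative
    fp = sum(p == 1 and a == 0 for a, p in zip(actual, predicted))  # false positive
    tn = sum(p == 0 and a == 0 for a, p in zip(actual, predicted))  # true negative
    return tp, fn, fp, tn
```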

AD&D editions confusion – which is which?

Recently I started googling old AD&D books, but I stumbled upon a very strange mix-up in the edition naming convention. For example, listings on this site:

…have most of the Greyhawk book covers, but strangely enough, newer books are often titled just AD&D while older ones have AD&D 2nd Edition in the title. The same happens for Forgotten Realms listings and other setting books. I’ve read somewhere that AD&D 1st and 2nd Edition had some small overlap in releases, but that doesn’t explain the scale of the issue or the examples below.

How am I supposed to know which books are actually 2nd Edition and which are 1st Edition?

Even allowing for the small overlap mentioned above, and for some reprints which I have identified, I am still finding books released 5+ years into the 2nd Edition lifespan that carry the old 1st Edition logo.

Was the release schedule really that messy back in the day? Did they drop the “2nd Edition” part of the logo for some reason? Or did something else happen?

Examples of the problem:

  • Greyhawk Players Guide (1998): not a reprint (at least not of anything I can find), nearly 10 years into the 2nd Edition lifespan, but with a 1st Edition logo. It was also released long after other Greyhawk 2nd Edition content, which makes it even odder.
  • The Scarlet Brotherhood (1999): also not a reprint, and it also has a 1st Edition logo, even though Wikipedia says it’s for 2nd Edition.
  • Silver Anniversary Updated Modules (1999): the first page states that the content is updated for 2nd Edition, yet the cover still has a 1st Edition logo.

There are of course many more examples; the ones I have found so far are mostly setting-specific books (Forgotten Realms, Greyhawk, Dark Sun, etc.).

Confusion about P versus NP

I’m sure that the reasoning in the following question is extremely simplistic and flawed, but I think an answer would help me understand what the P vs. NP conundrum is. So here is my question: why is the following not a proof that NP does not equal P?

Scenario: a computer is given an n-digit number that it must guess. The digits of this number were chosen at random. Since the digits are random, there is no pattern for the computer to spot that would simplify the problem. It must try all solutions, of which there are 10^n (each of the n digits being 0–9).
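The scenario’s search can be made concrete with a small brute-force sketch (assuming digits 0–9 with leading zeros allowed, so there are 10^n candidates):

```python
from itertools import product

def brute_force_guess(secret):
    """Guess an n-digit string by trying every candidate in order.
    With digits 0-9 and leading zeros allowed there are 10**n candidates,
    and in the worst case all of them must be tried."""
    n = len(secret)
    for attempt, digits in enumerate(product("0123456789", repeat=n), 1):
        if "".join(digits) == secret:
            return attempt  # how many guesses were needed
    return None  # unreachable for a valid n-digit secret
```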

Does the problem with my reasoning lie in the assumption that the numbers are truly random? Is randomness impossible, so that there will always be an underlying pattern in how seemingly “random” numbers were chosen?

Confusion with worst case for direct addressing

In CLRS there is an exercise on the direct-addressing data structure:

Suppose that a dynamic set S is represented by a direct-address table T of length m. Describe a procedure that finds the maximum element of S. What is the worst-case performance of your procedure?

For this question I have seen a lot of websites indicating that the worst case is O(m): start at the beginning, examine every non-null slot up to the end, and then return the maximum index. But I think it can be done faster than that. If a direct-address table has m slots, then the highest value should obviously be the one nearest the end (I thought keys are treated as indices in direct addressing; correct me if I am wrong). So if we start iterating from the end, the worst case would be O(m − n), where n is the number of slots examined by our iterator before reaching a non-null value from the end of the table.
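The reverse scan I have in mind can be sketched like this (a hypothetical direct-address table as a Python list, with None marking empty slots and keys being the indices themselves):

```python
def maximum(T):
    """Find the maximum of a set S stored in a direct-address table T,
    where slot k holds the element with key k (or None if absent),
    by scanning from the highest index downward."""
    for key in range(len(T) - 1, -1, -1):
        if T[key] is not None:
            return key  # first occupied slot from the end = largest key
    return None  # S is empty
```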

Is something wrong with my approach? Can anyone clear up this topic?

Loop invariant initialisation confusion

Consider the algorithm LastMatch below, which returns the offset (shift) of the last occurrence of the pattern P in text T, or -1 if P does not occur in T:

LastMatch(T,P)
  for(s = T.length - P.length downto 0)
    j = 1
    while(j <= P.length and P[j] == T[s + j])
      j++
    if(j == P.length + 1)
      return s
  return -1

I’ve been given a loop invariant for the while loop:

$\forall k \,(1 \leq k < j \rightarrow P[k] == T[s+k])$

The initialisation of this invariant confuses me. Before we enter the while loop, $j = 1$. So we’re asking: is there a $k$ with $1 \leq k < 1$ such that $P[k] == T[s+k]$?

I cannot find a $k$ which satisfies this inequality, so I do not understand what the invariant is saying. Why is it satisfied before we enter the loop? Is it because, when I cannot find such a $k$, it implies that $P[k]$ and $T[s+k]$ are both equal to the empty set?

AsymptoticLess function confusion

I have an algorithm whose complexity is:

f[n_] := 1/2 (-1 + n) n +
  1/2 (-1 + n) n (-1 + 1/2 (-1 + n) n) +
  1/4 (-1 + n) n (-2 + 1/2 (-1 + n) n) (-1 + 1/2 (-1 + n) n) +
  1/12 (-1 + n) n (-3 + 1/2 (-1 + n) n) (-2 + 1/2 (-1 + n) n) (-1 + 1/2 (-1 + n) n) +
  1/48 (-1 + n) n (-4 + 1/2 (-1 + n) n) (-3 + 1/2 (-1 + n) n) (-2 + 1/2 (-1 + n) n) (-1 + 1/2 (-1 + n) n) +
  6 Binomial[1/2 (-1 + n) n, 6] + 7 Binomial[1/2 (-1 + n) n, 7] +
  8 Binomial[1/2 (-1 + n) n, 8] + 9 Binomial[1/2 (-1 + n) n, 9] +
  10 Binomial[1/2 (-1 + n) n, 10]

I’d like to bound it for values of 3 <= n <= 10. If I plot the function and use trial and error, I can see that for n <= 10, O(n^11) bounds f[n] just fine. However, if I evaluate:

AsymptoticLessEqual[f[n], g[n], n -> 10] 

I get True every time, no matter what g is. For example:

AsymptoticLessEqual[f[n], n, n -> 10] 

This will also output True, which is not the result I want: if I plot both functions, n clearly does not bound f[n] in the range mentioned above.
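For what it’s worth, the claimed n^11 bound can be sanity-checked numerically by transcribing f directly into Python (with `math.comb` standing in for `Binomial`; the common subexpression 1/2 (-1 + n) n is named m, and all the integer divisions below are exact because they divide products of consecutive integers):

```python
from math import comb

def f(n):
    # m is the repeated subexpression 1/2 (-1 + n) n from the definition
    m = n * (n - 1) // 2
    return (m                                                   # 1/2 (-1+n) n
            + m * (m - 1)                                       # ... (-1 + m)
            + m * (m - 1) * (m - 2) // 2                        # 1/4-term
            + m * (m - 1) * (m - 2) * (m - 3) // 6              # 1/12-term
            + m * (m - 1) * (m - 2) * (m - 3) * (m - 4) // 24   # 1/48-term
            + sum(k * comb(m, k) for k in range(6, 11)))        # Binomial terms

# numeric check of the bound over the stated range
print(all(f(n) <= n**11 for n in range(3, 11)))  # prints True
```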

In the end I’d like to prove that f[n] = O(n^11), or something along those lines. Am I using the wrong function, or am I using it incorrectly?

Thanks for your time.