Is there a lore explanation why each setting might have different deities?

As far as I know, the multiverse of D&D 5E consists of different worlds all residing in the Material Plane, while the rest of the planes (Transitive Planes, Inner Planes, Outer Planes…) are shared between them.

Is there a lore explanation why each setting might have different deities? Since deities exist in planes other than the Material Plane, shouldn’t those deities be the same in every setting?

OpenSSH v2 Protocol Explanation

I’ve been searching high and low, trying to find an easily digestible protocol flow of SSH v2.

Does anyone have any docs, or could someone give me an example flow?

I’m interested in the flow of the public key exchange, things like:

  • does the server sign with the host private key, so the client can verify it’s not being spoofed by some MITM with a good server public key?

  • does the protocol use Ephemeral:Ephemeral ECDH by default? i.e. are session keys the product of ephemeral ECDH or ECDH with host/client authentication keys?
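To make concrete what I mean by the flow, here is a toy sketch I put together of the SSH-2 transport key exchange shape (RFC 4253 style). This is NOT real crypto: a tiny mod-p Diffie-Hellman stands in for the real group, and an HMAC with a long-term host secret stands in for a real host-key signature; all names and values here are my own placeholders.

```python
# Toy sketch of the SSH-2 key-exchange flow -- NOT real crypto. A small mod-p
# DH stands in for the real group/ECDH, and an HMAC with the host's long-term
# secret stands in for a real host-key signature.
import hashlib
import hmac
import secrets

P = 0xFFFFFFFFFFFFFFC5  # toy 64-bit prime; real SSH uses large MODP groups or ECDH
G = 2

def dh_keypair():
    """Fresh (ephemeral) DH keypair, generated anew for each connection."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# 1. Both sides generate *ephemeral* keypairs (this is what gives forward secrecy).
client_priv, client_pub = dh_keypair()
server_priv, server_pub = dh_keypair()

# 2. Each side derives the same shared secret K from the ephemeral values alone.
k_client = pow(server_pub, client_priv, P)
k_server = pow(client_pub, server_priv, P)

# 3. The server hashes the transcript (version strings, host key, both
#    ephemeral publics, K) into the exchange hash H ...
host_key = b"long-term-host-secret"  # stand-in for the host's key pair
transcript = b"|".join([b"SSH-2.0-toyC", b"SSH-2.0-toyS", host_key,
                        str(client_pub).encode(), str(server_pub).encode(),
                        str(k_server).encode()])
H = hashlib.sha256(transcript).digest()

# 4. ... and signs H with its long-term host key; the client verifies that
#    signature against the host key it has pinned in known_hosts, which is
#    what defeats a MITM who only has a plausible-looking public key.
signature = hmac.new(host_key, H, hashlib.sha256).digest()  # stand-in signature

# Session keys are then derived from K and H, so they come from the ephemeral
# exchange; the long-term host key only authenticates and never encrypts.
```

The point of the sketch is the separation of roles: ephemeral values produce the session secret, and the long-term host key only signs the exchange hash.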

Minimum Cost Tree From Leaf Values solution explanation needed

I’m trying to understand the solution for the question posted here: https://leetcode.com/problems/minimum-cost-tree-from-leaf-values/

One of the solutions is to find the minimum leaf node, add the product of that minimum and its smaller neighbor to the result, remove the minimum from the array, and continue the same way until the array has one element.
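For concreteness, here is my own rough Python version of that greedy (an O(n²) simulation of the removal process, not the linked one-pass stack solution; the function name is mine):

```python
# Greedy simulation: repeatedly take the smallest remaining leaf, pay the
# product with its smaller neighbor, and remove the leaf from the array.
def mct_from_leaf_values(leaves):
    arr = list(leaves)
    total = 0
    while len(arr) > 1:
        i = arr.index(min(arr))
        # the removed minimum pairs with the smaller of its two neighbors
        left = arr[i - 1] if i > 0 else float("inf")
        right = arr[i + 1] if i + 1 < len(arr) else float("inf")
        total += arr[i] * min(left, right)
        arr.pop(i)  # the minimum can never be the larger leaf of a subtree again
    return total
```

For `[6, 2, 4]` this gives 32, matching the expected answer for the problem's first example (pair 2 with 4 for cost 8, then 4 with 6 for cost 24).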

here is the link for the solution: https://leetcode.com/problems/minimum-cost-tree-from-leaf-values/discuss/339959/One-Pass-O(N)-Time-and-Space

What is the resulting binary tree? And why does this greedy approach work?

Thanks

Is there an explanation for what all of the DLCs add in with the Talisman Digital Edition? [closed]

In short, I bought the Talisman Digital Edition earlier in the week (oddly, I found myself reminiscing in the morning about a game I hadn’t played for 30 years, and bought it and the starter pack that evening). I have now bought the Season Pass because of a spot Steam sale.

I added all the DLC in but found myself a bit lost in terms of what was doing what (e.g. the Dragons and Dark and Light Fate tokens totally threw me).

I have started another game adding in just the base game plus the Sacred Pool, Frostmarch, City, Dungeon and Reaper expansions, and it seems a balanced kind of game.

Is there a preferred amount of DLC to add in (or a preferred set of what to add), and is there any explanation of what each expansion adds to the game?

Thanks in advance for any advice or suggestions.

Explanation of O(n2^n) time complexity for powerset generation

I’m working on a problem to generate the power set of a given set. The algorithm itself is relatively straightforward:

def power_sets(A):
    """Returns the power set of A as a list of lists"""
    if len(A) == 0:
        return [[]]
    else:
        ps_n1 = power_sets(A[:-1])  # power set of the set without its last element
        # Add the nth element to each subset of the power set of n-1 elements
        ps_n = [sub + [A[-1]] for sub in ps_n1]
        # Combine the two halves and return
        return ps_n1 + ps_n

It’s clear that the space complexity is $O(n2^n)$, since there are $n$ items in the set, and each element appears in half of the $2^n$ subsets.

However, the book that I’m working from says the time complexity is also $O(n2^n)$, which makes perfect intuitive sense, but I’m having a hard time justifying it mathematically.

Other than by saying “there are x number of items to mess with, so time complexity is at least as much”, can anyone offer an explanation of the runtime analysis based on the runtime of the statements in my algorithm?

This answer pretty much only says that the runtime is such because of the space complexity (not very satisfying; but as an aside, can the runtime ever be better than the space complexity?)

I saw this answer, but to be honest it is a bit difficult to understand. It seems to suggest in the last line that the runtime is $O(n^2 \cdot 2^{n!})$ (since (I think) $|P_i| = 2^i$), and that doesn’t seem right either.

I tried drawing out the call tree and that didn’t help, as there are only $n-1$ recursive calls made, and from what I can tell each spends $O(n^2)$ time.
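Writing my attempt out as a recurrence from the statements in the code (my own sketch; I’m not certain it’s airtight): the call on $k$ elements builds $2^{k-1}$ new subsets in the list comprehension, and each `sub + [A[-1]]` copies a subset of length up to $k$, so

```latex
T(n) = T(n-1) + O\!\left(n \cdot 2^{n-1}\right), \qquad T(0) = O(1).
```

Unrolling this gives

```latex
T(n) = O\!\left(\sum_{k=1}^{n} k \, 2^{k-1}\right)
     = O\!\left((n-1)\,2^{n} + 1\right)
     = O(n \, 2^{n}).
```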

Anyway, can someone offer a compelling explanation for this runtime?

Detailed explanation of Perlin Noise algorithmic complexity

I am doing a project in analysis of algorithms, and I have been looking all over for something more detailed than “Perlin Noise is $O(n \cdot 2^n)$ because of the doubling in $n$ dimensions and the array operations.” Does anyone know where there is more information? I have another month before our group gives the presentation.
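For reference, the $2^n$ structure I mean looks roughly like this (my own simplified sketch, not canonical Perlin: the fade curve is replaced by a linear weight, and the hashed gradient table is replaced by a seeded RNG):

```python
# Sketch of why one n-dimensional Perlin-style sample costs O(n * 2^n):
# the sample point falls in a hypercube with 2^n corners, and each corner
# contributes an O(n) dot product and an O(n) interpolation weight.
import itertools
import math
import random

def pseudo_gradient(corner):
    """Deterministic stand-in for Perlin's hashed gradient table."""
    rng = random.Random(hash(corner))
    return [rng.uniform(-1.0, 1.0) for _ in corner]

def noise(point):
    n = len(point)
    cell = [math.floor(x) for x in point]          # lattice cell containing point
    frac = [x - c for x, c in zip(point, cell)]    # position within the cell
    total = 0.0
    for corner in itertools.product((0, 1), repeat=n):   # 2^n corners
        offset = [f - c for f, c in zip(frac, corner)]   # O(n)
        g = pseudo_gradient(tuple(a + b for a, b in zip(cell, corner)))
        dot = sum(gi * oi for gi, oi in zip(g, offset))  # O(n) dot product
        weight = math.prod(1.0 - abs(o) for o in offset) # O(n) linear weight
        total += weight * dot
    return total
```

The loop body is $O(n)$ and runs $2^n$ times, which is where $O(n \cdot 2^n)$ per sample comes from.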

String matching problem needed some explanation

This is a question from CLRS book. (Chapter 32, string matching, the question is the problem for the whole chapter, it’s in the end of the chapter)

Let $y^i$ denote the concatenation of string $y$ with itself $i$ times. For example, $(ab)^{3} = ababab$. We say that a string $x \in X$ has repetition factor $r$ if $x = y^{r}$ for some string $y \in X$ and some $r > 0$. Let $p(x)$ denote the largest $r$ such that $x$ has repetition factor $r$. Give an efficient algorithm that takes as input a pattern $P[1..m]$ and computes the value $p(P_i)$ for $i = 1, 2, \dots, m$. What is the running time of your algorithm?

I found an answer like this: first compute the prefix function $π$ (based on the prefix function from the book). Then suppose that $π[i] = i − k$. If $k \mid i$, we know that $k$ is the length of the primitive root, so the prefix $P_i$ has a repetition factor of $\frac{i}{k}$; and since $k$ is the smallest period of $P_i$, there is no larger repetition factor. Now suppose that $k$ does not divide $i$. We will show that we can only have the trivial repetition factor of 1. Suppose we had some repetition $y^{r} = P_i$. Then we know that $π[i] ≥ |y^{r−1}| = (r−1)|y|$. However, if the inequality is strict, the copies of $y$ align with shifted copies of themselves, which means the $y$’s themselves can be written as powers of a shorter string.

COMPUTE-PREFIX-FUNCTION(P)
    m = P.length
    let π[1..m] be a new array
    π[1] = 0
    k = 0
    for q = 2 to m
        while k > 0 and P[k+1] != P[q]
            k = π[k]
        if P[k+1] == P[q]
            k = k + 1
        π[q] = k
    return π

The prefix function is part of the KMP algorithm:

KMP-MATCHER(T, P)
    n = T.length
    m = P.length
    π = COMPUTE-PREFIX-FUNCTION(P)
    q = 0                            // number of characters matched
    for i = 1 to n                   // scan the text from left to right
        while q > 0 and P[q+1] != T[i]
            q = π[q]                 // next character does not match
        if P[q+1] == T[i]
            q = q + 1                // next character matches
        if q == m                    // is all of P matched?
            print "Pattern occurs with shift" i - m
            q = π[q]                 // look for the next match

I still can’t completely understand the answer. Why should we treat the case where $k$ does not divide $i$ separately? And the explanation for why the repetition factor is 1 when $k$ does not divide $i$ is confusing to me.
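To check my understanding of the answer’s approach, I translated it into runnable 0-indexed Python (my own translation; function and variable names are mine):

```python
# Prefix function (0-indexed): pi[q] is the length of the longest proper
# prefix of P[:q+1] that is also a suffix of it.
def prefix_function(P):
    m = len(P)
    pi = [0] * m
    k = 0
    for q in range(1, m):
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]            # fall back to the next shorter border
        if P[k] == P[q]:
            k += 1
        pi[q] = k
    return pi

def repetition_factors(P):
    """rho[i] = largest r such that the prefix P[:i+1] equals y^r for some y."""
    pi = prefix_function(P)
    rho = []
    for i in range(len(P)):
        length = i + 1
        k = length - pi[i]           # candidate period of the prefix
        # If the period divides the length, the prefix is (primitive root)^(length/k);
        # otherwise only the trivial factor 1 is possible.
        rho.append(length // k if length % k == 0 else 1)
    return rho
```

For `"ababab"` this yields `[1, 1, 1, 2, 1, 3]`: e.g. the prefix `"abab"` is $(ab)^2$ and the full string is $(ab)^3$, while `"aba"` only has the trivial factor 1.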
