Question about dice notation

I’m adapting GURPS 4th edition’s system-creation rules into a Python script, and I’m trying to understand the dice notation.

In particular:

Roll 1d-4 (minimum 0) to determine the number of major moons orbiting a terrestrial planet. If the planet has no major moons, it will have 1d-2 (minimum 0) moonlets.

Modifiers (for both rolls): Do not roll if the planet is within 0.5 AU of the primary star, -3 if the planet is between 0.5 AU and 0.75 AU of the primary star, -1 if the planet is between 0.75 AU and 1.5 AU of the primary star, -2 if the planet is Tiny, -1 if it is Small, +1 if it is Large.

I’ve not seen the "1d-4" notation before, so my assumption would be that you roll a 4-sided die (or, in my case, get a random value between 1 and 4 inclusive) and apply modifiers to the result. For example, for a Small planet at 0.8 AU from its primary I would get a value between 1 and 4, apply -1 to that value (because it is between 0.75 AU and 1.5 AU), and then apply another -1 because the planet is Small.

The catch is that, if I’m reading the notation correctly (which I don’t think I am), 1d-2 means pick a value of 1 or 2 and then apply modifiers, which will always result in 0 or less.
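
Here is a minimal Python sketch of my current reading (the die size and the way I stack the modifiers are my assumptions, which is exactly what I am unsure about):

import random

def roll_my_reading(sides, modifier, minimum=0):
    # My reading of "1d-N": roll one N-sided die, then apply the situational modifiers.
    return max(random.randint(1, sides) + modifier, minimum)

# Small planet at 0.8 AU from its primary: -1 (0.75-1.5 AU band) and -1 (Small)
major_moons = roll_my_reading(sides=4, modifier=-2)

# Under this reading, "1d-2" with the same -2 of modifiers can never come out positive,
# which is why I doubt my interpretation.
moonlets = roll_my_reading(sides=2, modifier=-2)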

How do you read 1d-4 and 1d-2 ?

How to convert the complex notation into x, y-coordinates?

Convert the complex notation into x, y-coordinates; see also Accumulate and ListLinePlot.

I tried to use Table to list all the partial sums (I assume the upper bound is 5):

plot1 = Table[{m, Sum[E^(2*Pi*I*n*Sqrt[2]), {n, 1, m}]}, {m, 1, 5}]; (* E is Mathematica's exponential constant; m avoids the protected symbol N *)
ListLinePlot[plot1]

But I just got empty axes.
I think I need to convert the complex values into x, y-coordinates, but I don’t know how to do that (my rough understanding of the conversion is below).
Could you give me some suggestions?
Thank you!
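
My understanding (which may be off) is that the conversion splits each partial sum into real and imaginary parts, which would give the x, y-coordinates:

$$S_N = \sum_{n=1}^{N} e^{2\pi i n\sqrt{2}}, \qquad (x_N,\ y_N) = \bigl(\operatorname{Re} S_N,\ \operatorname{Im} S_N\bigr) = \Bigl(\sum_{n=1}^{N}\cos\bigl(2\pi n\sqrt{2}\bigr),\ \sum_{n=1}^{N}\sin\bigl(2\pi n\sqrt{2}\bigr)\Bigr)$$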

Why do we need a separate notation for Π-types?


Main

I am confused about the motivation for a separate notation for Π-types, which you can find in type systems from λ2 on. The answer usually goes like this: think about how one can represent the signature of the identity function; it can be λa:type.λx:a.x or λb:type.λx:b.x. The subtle part, they say, is that these two signatures are not only unequal, they are not even alpha-equivalent, since the type variables a and b are free variables inside their corresponding abstractions. So, to overcome this pesky syntactic issue, we introduce the Π binder, which plays nicely with alpha-conversion.

So the question: why is that? Why not just fix the notion of alpha-equivalence?

UPDATE z:

Oh, silly of me: λa:type.λx:a.x and λb:type.λx:b.x are alpha-equivalent. But why aren’t a:type -> a -> a and b:type -> b -> b, then?

UPDATE suc z:

Aha, interesting, I guess this is a perfect example of selective blindness =D

I am reading the book Type Theory and Formal Proof, and in the chapter about λ2 the author motivates the existence of Π using exactly that kind of argument: one can’t say that \t:*.\v:t.v : * -> t -> t, because this would make the two alpha-equivalent terms \t:*.\v:t.v and \g:*.\v:g.v have different types, as the corresponding types are not alpha-equivalent, whereas types like t:* -> t -> t are in fact alpha-invariant. Mind the difference between t:* -> t -> t and * -> t -> t. But doesn’t that make the argument a bit trivial, and is it even meaningful to talk about a type a -> b where a and b are not bound by any quantifier? Andrej Bauer pointed out in the comments that Π indeed resembles a lambda abstraction with a few additional bells and whistles.
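
To spell out the resolution for my own notes (standard λ2, as I understand it from the book):

$$\lambda a{:}*.\ \lambda x{:}a.\ x \;:\; \Pi a{:}*.\ a \to a, \qquad \Pi a{:}*.\ a \to a \;=_{\alpha}\; \Pi b{:}*.\ b \to b,$$

whereas $* \to a \to a$ and $* \to b \to b$ are not alpha-equivalent, because $a$ and $b$ occur free in them.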

All in all, I am done with that one, thank you guys.

What does the notation $[i \neq k]$ mean?

I can’t figure out what the notation $[i \neq k]$ means. Here’s a bit of context:

The formula is: $\Pr[A_i^k = 1] = \frac{[i \neq k]}{|k - i| + 1} = \begin{cases} \frac{1}{k-i+1} & \text{if } i < k \\ 0 & \text{if } i = k \\ \frac{1}{i-k+1} & \text{if } i > k \end{cases}$

and it is part of a chapter in which the expected running times of the operations of a randomised treap are proved.

$A_i^k$ is an indicator variable defined as $[x_i \text{ is a proper ancestor of } x_k]$, where $x_n$ is the node with the $n$-th smallest search key. That probability comes up because $\text{depth}(x_k) = \sum_{i=1}^{n} A_i^k$ and $\mathbf{E}[\text{depth}(x_k)] = \sum_{i=1}^{n} \Pr[A_i^k = 1]$.
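
Using the cases form of the formula (so without relying on the bracket I am asking about), the expectation expands to

$$\mathbf{E}[\text{depth}(x_k)] = \sum_{i=1}^{n}\Pr[A_i^k = 1] = \sum_{i=1}^{k-1}\frac{1}{k-i+1} + \sum_{i=k+1}^{n}\frac{1}{i-k+1} = \sum_{j=2}^{k}\frac{1}{j} + \sum_{j=2}^{n-k+1}\frac{1}{j}.$$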

I have no access to the pages that explain the notation since I’m studying from a pdf of a few pages taken from a book.

Estimating the bit operations using big O notation

Using big-$O$ notation, estimate in terms of a simple function of $n$ the number of bit operations required to compute $3^n$ in binary.

I need some help with the above question. The number of bit operations required to multiply two $k$-bit numbers is $O(k^2)$. In the first step I am multiplying two 2-bit numbers, in the second step a 4-bit and a 2-bit number, and so on. So I feel the total number of bit operations will be $O(k^2) + O(k^2 \cdot k) + \dots + O(k^{n-1} \cdot k)$ with $k = 2$.
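
To check that I am describing the process correctly, here is a small Python sketch of the repeated multiplication; it only records the operand sizes at each step, and counting the bit operations is exactly the part I am unsure about (the helper name is mine):

def power_of_three_operand_sizes(n):
    # Compute 3**n by repeated multiplication by 3, recording at each step
    # the bit lengths of the two operands being multiplied.
    result = 3
    sizes = []
    for _ in range(n - 1):
        sizes.append((result.bit_length(), (3).bit_length()))
        result *= 3
    return result, sizes

value, sizes = power_of_three_operand_sizes(5)
print(value)   # 243
print(sizes)   # [(2, 2), (4, 2), (5, 2), (7, 2)]: the growing factor gains about log2(3) bits per step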

How will the above sum be estimated as a function of n?

Order notation subtractions in Fibonacci Heap

Can order notation on its own imply:

$O(D(n)) + O(t(H)) - t(H) = O(D(n))$

My guess is that it cannot, since the constant $c$ hidden in the $O(t(H))$ term would still be there after the subtraction if $c > 1$.

Well, this is actually the case, but there are underlying factors. This equation appears in the Fibonacci heap analysis in CLRS (p. 518). The justification for this step comes from the underlying potential function. According to the authors, “we can scale up the units of potential to dominate the constant hidden in $O(t(H))$”. I want to know how this happens, but don’t really know how to ask this complicated question.
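
My rough understanding of what the scaling would look like (the potential $\Phi(H) = t(H) + 2m(H)$ is the one CLRS uses; the scaled version is my own guess at what the authors mean): if the term written $O(t(H))$ is actually at most $c \cdot t(H)$ for some constant $c \ge 1$, then taking $\Phi(H) = c \, (t(H) + 2m(H))$ makes the potential drop contribute $-c \, t(H)$ instead of $-t(H)$, so that

$$O(D(n)) + c \cdot t(H) - c \cdot t(H) = O(D(n)).$$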

Same computation order using postfix notation?

I’m trying to understand arithmetic using stacks, specifically converting infix notation to postfix notation. My question is how you convert an expression like 1 + (2 + 3) + (4 + 5) so that it computes in the exact order that the order of operations says.

Meaning: 1 + (2 + 3) + (4 + 5) becomes 1 + 5 + (4 + 5) then 1 + 5 + 9 then 6 + 9

If you do:

Push 1, Push 2, Push 3, Add, Push 4, Push 5, Add

Where Add pops the top two operands off the stack, adds them, and then pushes the sum back on the stack.

Alternative notation: 1 2 3 + 4 5 +

How do you add 1 + (sum of 2 and 3) next? If you do another Add, then the sum of 2 and 3 will be added to the sum of 4 and 5, because those are the top two values on the stack. Is it impossible to do it in the exact same order, doing each parenthesized group first, left to right, and then the rest of the computation left to right?
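
A tiny Python sketch of the sequence above, just to show the stack states I am describing (the Push/Add encoding is mine):

def run(ops):
    # Execute a list of ("push", n) / ("add",) operations, printing the stack after each one.
    stack = []
    for op in ops:
        if op[0] == "push":
            stack.append(op[1])
        else:  # "add": pop the top two operands and push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        print(op, "->", stack)

# Push 1, Push 2, Push 3, Add, Push 4, Push 5, Add
run([("push", 1), ("push", 2), ("push", 3), ("add",),
     ("push", 4), ("push", 5), ("add",)])
# The run ends with the stack [1, 5, 9]; another Add would combine 5 and 9, not 1 and 5.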

Is a grammar that describes an expression in prefix (Polish) notation always unambiguous?

I recently completed a problem in which I was asked to generate a parse tree for the expression $+\,5\,*\,4\,3$ using the following grammar and a rightmost derivation:

$$Expr \rightarrow +\; Expr \; Expr \mid *\; Expr \; Expr \mid 0 \mid \dots \mid 9$$

While I have no trouble taking the derivation and creating its parse tree, the question also asks whether the grammar is ambiguous. In the scope of what I’ve been taught, my only tool for proving ambiguity has been to find a different parse tree for whatever leftmost or rightmost derivation I have, thus proving multiple valid parses and ambiguity. However, I have not been told how to prove unambiguity. I am fairly confident that the grammar described above is unambiguous based partially on intuition, and partially because it’s designed for prefix notation. I tried to generate new trees for a given string to prove ambiguity, but since the operator is always leftmost, I could not find any string in which multiple parse trees could be created. If I am mistaken, please let me know.
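
To test my intuition, I wrote a small recursive-descent sketch (my own code, not part of the assignment); the point is that the next token alone seems to determine which production applies, so the parse never has a choice:

def parse(tokens, pos=0):
    # Parse one Expr from tokens starting at pos; return (parse_tree, next_pos).
    tok = tokens[pos]
    if tok in ("+", "*"):                 # Expr -> + Expr Expr | * Expr Expr
        left, pos = parse(tokens, pos + 1)
        right, pos = parse(tokens, pos)
        return (tok, left, right), pos
    if len(tok) == 1 and tok.isdigit():   # Expr -> 0 | ... | 9
        return tok, pos + 1
    raise ValueError("unexpected token: " + tok)

tree, end = parse("+ 5 * 4 3".split())
print(tree)   # ('+', '5', ('*', '4', '3'))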

My question is this: is it possible for a grammar that describes strings using prefix (Polish) notation, such as the one above, to ever be ambiguous? My intuition tells me that it will always be unambiguous, but I was wondering why this might be the case.