Count number of ways in which atomic operation(s) of n different processes can be interleaved

PROBLEM: Count the number of ways in which the atomic operation(s) of n different processes can be interleaved. A process may crash midway, before completion.

Suppose there are a total of n different processes – P1, P2, P3, …, Pn.

Each process consists of a variable number of atomic operations, but it must have at least one operation.


Consider two processes, P1 and P2

  • P1: 1o1; 1o2; 1o3; 1o4; 1o5; 1o6;
  • P2: 2o1; 2o2; 2o3;

where 1o1 denotes first operation of process P1.


Fix the positions of all operations of process P1, then count the number of ways in which the operations of process P2 can be placed in the empty positions ( __ ) created between the operations of process P1, as shown below:

__ 1o1 __ 1o2 __ 1o3 __ 1o4 __ 1o5 __ 1o6 __

There are seven empty positions numbered 1 to 7.

Counting: (Note that the numbers below (like 1 2 3) denote the empty position number.)

> Case 1: When all three operations of P2 are placed in consecutive empty positions.
>
>     1 2 3    2 3 4    3 4 5    4 5 6    5 6 7
>
> We have a total of 5 orderings possible for empty positions.

> Case 2: When operations of P2 are placed in two consecutive empty positions taken together.
>
>     1 2 3   2 3 4   3 4 5   4 5 6   5 6 7
>     1 2 4   2 3 5   3 4 6   4 5 7
>     1 2 5   2 3 6   3 4 7
>     1 2 6   2 3 7
>     1 2 7
>
> The first cell in every column has already been counted in the previous case. We have a total of (5 − 1) + (4 − 1) + (3 − 1) + (2 − 1) + (1 − 1) = 10 orderings possible for empty positions.
>
> A similar argument can be made for the last two consecutive empty positions taken together, which gives us another 10 orderings possible for empty positions.

> Case 3: These are those cases that do not have empty positions numbered 8 and 9 for them.
>
>     6 7 8
>     7 8 9

> Case 4: When operations may crash midway before completion. An 'x' denotes a position where a crash is possible and the process (here P2) terminates.
>
>     1x 2x 3
>     2x 3x 4
>     3x 4x 5
>     4x 5x 6
>     5x 6x 7
>     6x 7x 8
>     7x 8x 9
>
> There is a total of 14 'x's possible.
>
> Note: I have not put a cross after the last empty position number because I am assuming that a process will complete at this stage. You may correct my assumption if this is wrong and should not be assumed in the first place.

Adding all 4 cases: 5 + 2·10 + 2 + 14 = 41. There are 41 possible ways to interleave the operations of processes P1 and P2.

As you can see, counting like this is cumbersome and error-prone. I have missed cases.

How can this counting problem be generalised? Please see the problem statement at the top of the question.
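For the crash-free part of the problem, a standard result applies: interleaving complete runs of processes with operation counts $a_1, \dots, a_n$ gives the multinomial coefficient $(a_1 + \cdots + a_n)! / (a_1! \cdots a_n!)$, since an interleaving is just a choice of which positions each process's (internally ordered) operations occupy. A minimal sketch (the function name `interleavings` is mine); crashes could then be handled by summing this quantity over the admissible prefix lengths of each process:

```python
from math import factorial

def interleavings(counts):
    """Number of ways to interleave complete runs of processes,
    where counts[i] is the number of atomic operations of process i.
    Equals the multinomial coefficient (sum of counts)! / prod(counts[i]!)."""
    result = factorial(sum(counts))
    for c in counts:
        result //= factorial(c)
    return result

# Two processes with 6 and 3 operations: 9! / (6! * 3!) = C(9, 3) = 84
print(interleavings([6, 3]))  # -> 84
```

For small instances, a brute-force enumeration of interleavings can be used to sanity-check the formula before extending it to the crash cases.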

Find the probability of occurrence of each edge in a graph having $n$ nodes after $k$ operations

Given a graph with $n$ nodes, whose adjacency matrix is given. We perform $k$ operations on this graph. In each operation we choose two distinct nodes $i$ and $j$ uniformly at random (all $n(n-1)/2$ choices are equiprobable); if there exists an edge between $i$ and $j$ we delete that edge, otherwise we draw an edge between the chosen pair of nodes.
We have to output an $n \times n$ matrix whose $(i,j)$th entry is the probability that the edge connecting nodes $i$ and $j$ exists in the final resulting graph.
The constraints are $n \le 50$ and $k \le 50$.
I tried dynamic programming but could not figure out the transitions properly.
Can you please help me out?
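One observation that may simplify the DP: each operation touches exactly one pair, so the marginal state of any fixed pair $(i,j)$ evolves as a two-state Markov chain that toggles with probability $1/m$, where $m = n(n-1)/2$, independently of which other pairs get toggled. A hedged sketch along these lines (`edge_probabilities` is my own name, not from the problem statement):

```python
def edge_probabilities(adj, k):
    """adj: n x n 0/1 adjacency matrix. Returns an n x n matrix whose
    (i, j) entry is the probability the edge {i, j} exists after k
    random toggle operations, using the per-pair Markov-chain recurrence
    p' = p * (1 - q) + (1 - p) * q, with q = 1/m the chance this pair
    is the one chosen in a given operation."""
    n = len(adj)
    m = n * (n - 1) // 2
    q = 1.0 / m
    prob = [[float(adj[i][j]) for j in range(n)] for i in range(n)]
    for _ in range(k):
        for i in range(n):
            for j in range(n):
                if i != j:
                    prob[i][j] = prob[i][j] * (1 - q) + (1 - prob[i][j]) * q
    return prob
```

As a sanity check: with $n = 2$ there is only one pair ($m = 1$), so the single edge toggles deterministically on every operation.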

Estimating the bit operations using big O notation

Using big-O notation, estimate in terms of a simple function of $n$ the number of bit operations required to compute $3^n$ in binary.

I need some help with the above question. The number of bit operations required to multiply two k-bit numbers is $O(k^2)$. In the first step I am multiplying two 2-bit numbers, in the second step a 4-bit and a 2-bit number, and so on. So I feel the total number of bit operations will be $O(k^2) + O(k^2 \cdot k) + \cdots + O(k^{n-1} \cdot k)$ with $k = 2$.

How will the above sum be estimated as a function of n?
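One way to organize the sum, under the assumption that $3^n$ is computed by $n-1$ successive multiplications by 3 with schoolbook multiplication: at step $i$ we multiply $3^i$, which has $O(i)$ bits, by the 2-bit constant 3, so step $i$ costs $O(i \cdot 2) = O(i)$ bit operations (this framing of the steps is mine, not from the textbook):

```latex
\sum_{i=1}^{n-1} O(i \cdot 2)
  \;=\; O\!\Big(\sum_{i=1}^{n-1} i\Big)
  \;=\; O\!\left(\frac{n(n-1)}{2}\right)
  \;=\; O(n^2).
```

The sum of the first $n-1$ integers is what turns the linear per-step cost into a quadratic total.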

What is the suitable file structure for database if queries are select (relational algebra) operations only?

A relation $R(A, B, C, D)$ has to be accessed under the query $\sigma_{B=10}(R)$. Out of the following possible file structures, which one should be chosen, and why?

  i. R is a heap file.
  ii. R has a clustered hash index on B.
  iii. R has an unclustered B+ tree index on (A, B).

Can an $NDTM$ simultaneously perform a set of operations on all strings of a given length?

Can an $NDTM$ perform a set of operations on all strings of a given length $b$ at the same time? That is, can it operate on all strings of a given length by doing something like: spawn $2^b$ branches, then operate on one string of length $b$ on each branch?

How could it do this, though, if the branches can't communicate? That's what I'm having a hard time with. How does any given branch, without knowing what strings the other branches are running, know which string to run the operations on (so that all the strings are covered by the $2^b$ branches)?
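One way to see why no communication is needed: a branch is identified by the sequence of nondeterministic choices that produced it, so a branch that makes $b$ binary guesses has thereby written down its own length-$b$ string, and the $2^b$ distinct guess sequences cover all strings. A deterministic Python simulation of this guessing scheme (the names here are mine):

```python
from itertools import product

def ndtm_accepts(b, predicate):
    """Deterministic simulation of an NDTM that nondeterministically
    guesses a length-b binary string and then runs `predicate` on it.
    Each branch is determined by the b guesses it made (e.g. branch
    (0, 1, 1) operates on "011"), so no inter-branch communication is
    needed; together the branches cover all 2**b strings. The NDTM
    accepts iff at least one branch accepts."""
    branches = (''.join(map(str, bits)) for bits in product((0, 1), repeat=b))
    return any(predicate(s) for s in branches)

# Example: does some string of length 3 contain "11"?
print(ndtm_accepts(3, lambda s: '11' in s))  # -> True
```

The simulation is exponential-time, of course; the point is only that the branching structure itself enumerates the strings.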

Warding Bond: What is the Order of Operations for calculating cleric damage taken?

The warding bond spell allows a support character (cleric) to buff another creature (Basic Rules p. 105):

While the target is within 60 feet of you, it gains a +1 bonus to AC and saving throws, and it has resistance to all damage. Also, each time it takes damage, you take the same amount of damage.

What is the Order of Operations used to arrive at the final damage that the cleric receives as his share?

Resistance rules brought this up (Basic Rules, p. 75) as we hoped to Ward a raging Barbarian – MEGA Tank – until our DM reminded us that like resistances don't stack (PHB, p. 197).

If a creature or an object has resistance to a damage type, damage of that type is halved against it.

I’ll be buffing our Paladin.
Two cases:

  1. Weapon damage (a hammer blow, a claw strike, gored by a gorgon …)
  2. Damage where a saving throw versus magical damage (usually a spell or spell-like effect) is required.

Case 1. I stay 30′ behind Paladin. Giant scores a hit, doing 16 points of bludgeoning damage. Warding Bond (resistance) reduces that to 8. I take 8 HP.

Case 2. The Wizard whom the Giant serves fireballs the Paladin on his action. (I am outside blast radius). Rolled damage is 24, Fire. Paladin has resistance to all damage (from Bond) thus 24 is halved to 12. He rolls a saving throw, and succeeds with a 17. He takes 6 damage. I take 6 damage.

My view is that we can’t assign damage to Cleric until we know total damage to Paladin.

The Paladin and I weren’t sure about the fireball case: should it be different from the Giant’s hammer, since you don’t get a save versus melee weapon damage? With fireball’s damage reduced (by resistance) to 12, do I take 12 unless I too save versus fire, as the Paladin did?

I don’t think so. It seems to violate the KISS principle. But, the Bond ties the cleric magically to the Paladin. Is magic going to follow that path of least resistance?

Do I have the order of operations right?

  1. First resolve all damage to Paladin.
  2. Then apply that amount to Cleric.

Is there something we missed that would support the other order of operations?

Implementing Queue operations in $\Theta(1)$

We want to implement a Queue which has two special operations besides the regular Queue operations: $getMiddle$ (returns the element from the middle of the Queue; for example, if the Queue has 7 elements it returns the 4th element, and if the Queue has 6 elements it returns the 3rd or the 4th) and $popMiddle$ (removes the middle element). We want both of these operations, and all the other Queue operations, to run in $\Theta(1)$. Pick a suitable data structure for representing the Queue, implement the $popMiddle$, pop, and push operations, and explain briefly how the other operations would be implemented.

I was thinking that a linked list would be a good pick if we also keep track of the tail. But there is a problem when popping the middle, because we have to iterate to it. An array would not be good either, because when erasing we have to move all the elements to the left; otherwise there is no suitable formula to find the middle. Does somebody have better ideas?
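One common construction, sketched below under the convention that the middle of a queue of size $s$ is the element at index $\lfloor (s-1)/2 \rfloor$ (which matches "4th of 7, 3rd of 6"): a doubly linked list plus an extra mid pointer that is nudged one node left or right on each operation, so every operation stays $\Theta(1)$. This is a sketch, not a full solution (class and method names are mine):

```python
class MiddleQueue:
    """Queue with O(1) push, pop, getMiddle, popMiddle: a doubly linked
    list plus a `mid` pointer kept at index (size - 1) // 2."""

    class _Node:
        __slots__ = ('val', 'prev', 'next')
        def __init__(self, val):
            self.val, self.prev, self.next = val, None, None

    def __init__(self):
        self.head = self.tail = self.mid = None
        self.size = 0

    def push(self, val):                      # enqueue at the tail
        node = self._Node(val)
        if self.size == 0:
            self.head = self.tail = self.mid = node
        else:
            node.prev, self.tail.next = self.tail, node
            self.tail = node
            if self.size % 2 == 0:            # even old size: middle shifts right
                self.mid = self.mid.next
        self.size += 1

    def pop(self):                            # dequeue from the head
        node = self.head
        self.head = node.next
        if self.head:
            self.head.prev = None
        else:
            self.tail = None
        if self.size % 2 == 0:                # even old size: middle shifts right
            self.mid = self.mid.next
        self.size -= 1
        if self.size == 0:
            self.mid = None
        return node.val

    def getMiddle(self):
        return self.mid.val

    def popMiddle(self):                      # unlink the node under `mid`
        node = self.mid
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        # odd old size: new middle is the predecessor; even: the successor
        self.mid = node.prev if self.size % 2 == 1 else node.next
        self.size -= 1
        if self.size == 0:
            self.mid = None
        return node.val
```

The parity rules fall out of comparing $\lfloor (s-1)/2 \rfloor$ before and after each operation; `front`, `isEmpty`, and the like are trivial on top of this.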

Order of operations for Aurora property Lasers

Laser weapons cannot do damage to invisible creatures. The Aurora property negates invisibility for 1 minute on a hit. At the moment I am assuming that a) you can still make attacks against an invisible target with a laser and b) it will still apply non-damage effects (correct me if these are incorrect). This would mean that a laser with aurora (such as via a Mechanic’s prototype weapon) would still negate invisibility for future shots.

If I fire a laser weapon with the Aurora property, does the (lack of) damage occur before their invisibility is removed, or after?

Relevant rules text:

Laser weapons emit highly focused beams of light that deal fire damage. These beams can pass through glass and other transparent physical barriers, dealing damage to such barriers as they pass through. Barriers of energy or magical force block lasers. Invisible creatures don’t take damage from lasers, as the beams pass through them harmlessly. Fog, smoke, and other clouds provide both cover and concealment from laser attacks. Lasers can penetrate darkness, but they don’t provide any illumination.

When an aurora weapon strikes a target, the creature glows with a soft luminescence for 1 minute. This negates invisibility effects and makes it impossible for the target to gain concealment from or hide in areas of shadow or darkness.

Optimising tensor operations under memory constraints

Let riem be a free variable with the assumption riem ∈ Arrays[{4, 4, 4, 4}]. Let:

val = TensorContract[TensorProduct[riem, riem, riem], {{4, 5}}]

Let riemVals be an actual {4, 4, 4, 4} tensor whose indices have symbolic values.

I’m interested in computing val /. (riem -> riemVals). I’m guessing there are two ways Mathematica could do this internally:

1) Compute v1 = TensorProduct[riemVals, riemVals, riemVals], then compute the result as TensorContract[v1, {{4, 5}}].

2) Note that val is equivalent to:

TensorProduct[TensorContract[TensorProduct[riem, riem], {{4, 5}}], riem].

Compute v1 = TensorProduct[riemVals, riemVals]. Then v2 = TensorContract[v1, {{4, 5}}]. Then compute the result as TensorProduct[v2, riemVals].

Now, what’s the difference between these two? Obviously they give the same result, but in the first approach we have to store a $4^{12}$ tensor in memory as an intermediate value, while in the second we only have to store a $4^{10}$ tensor. The idea being that, when your maximum memory is constrained, it pays off to move TensorContract inward in the expression so you can perform it as early as possible, before the TensorProduct.

My question is: does Mathematica take the memory-efficient approach when doing these types of operations? If not, is there any way to implement the evaluation/computation in a controlled manner such that the result is calculated memory-efficiently (computing and prioritizing forms where the TensorContract is performed early)?
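I can't speak for Mathematica's internals, but the memory effect itself is easy to demonstrate in NumPy (an analogue, not Mathematica code; dimension reduced from 4 to 2 so the full product stays tiny):

```python
import numpy as np

d = 2                                     # toy dimension (4 in the real problem)
riem = np.random.rand(d, d, d, d)

# Order 1: build the full rank-12 product first (d**12 entries in memory),
# then contract slots 4 and 5 (the repeated label 'd' sums that diagonal).
full = np.einsum('abcd,efgh,ijkl->abcdefghijkl', riem, riem, riem)
v1 = np.einsum('abcddfghijkl->abcfghijkl', full)

# Order 2: contract early (rank-6 intermediate, only d**6 entries),
# then take the outer product with the third factor (rank 10).
small = np.einsum('abcd,dfgh->abcfgh', riem, riem)
v2 = np.einsum('abcfgh,ijkl->abcfghijkl', small, riem)

print(np.allclose(v1, v2), full.size, small.size)
```

Both orders agree, but the largest intermediate shrinks from $d^{12}$ to $d^{6}$ entries. The manual Wolfram Language equivalent of order 2 is to apply TensorContract to the smaller TensorProduct before taking the final product.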

How is a symbol “given meaning by a family of operations indexed by symbols”?

Practical Foundation of Programming Languages by Harper says:

Chapter 31 Symbols

A symbol is an atomic datum with no internal structure. Whereas a variable is given meaning by substitution, a symbol is given meaning by a family of operations indexed by symbols. A symbol is just a name, or index, for a family of operations.

Many different interpretations may be given to symbols according to the operations we choose to consider, giving rise to concepts such as fluid binding, dynamic classification, mutable storage, and communication channels.

A type is associated to each symbol whose interpretation depends on the particular application. For example, in the case of mutable storage, the type of a symbol constrains the contents of the cell named by that symbol to values of that type.

What does “a symbol is given meaning by a family of operations indexed by symbols” mean? Is “a symbol” given meaning by a family of operations not one of the “symbols” indexing the family of operations? What is the relation between “a symbol” and “symbols”?

What does “a symbol is just a name, or index, for a family of operations” mean? Does it mean “a symbol names or indexes a family of operations”?

When a symbol is used in each of the following example cases (which I hope you could consider as many as possible, in particular the first three cases):

  • “represent a variable in symbolic representations of equations or programs” (see the quote below),
  • “represent a word in the representation of natural language sentences” (see the quote below),
  • represent an assignable (?) in mutable storage,
  • represent something (something similar to a variable?) in fluid binding,
  • represent a class (?) in dynamic classification,
  • represent something (?) in communication channels,

how does the above quote about a symbol apply? Specifically:

  • is the symbol given meaning by what family of operations indexed by symbols?
  • is the symbol just a name, or index, for what family of operations?


The Scheme Programming Language, 4th Edition, by Dybvig, says

Section 2.2. Simple Expressions

Symbols and variables in Scheme are similar to symbols and variables in mathematical expressions and equations. When we evaluate the mathematical expression $1 - x$ for some value of $x$, we think of $x$ as a variable. On the other hand, when we consider the algebraic equation $x^2 - 1 = (x - 1)(x + 1)$, we think of $x$ as a symbol (in fact, we think of the whole equation symbolically).

While symbols are commonly used to represent variables in symbolic representations of equations or programs, symbols may also be used, for example, as words in the representation of natural language sentences.
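A toy illustration of the mutable-storage reading (my own sketch, not Harper's formal development): a symbol is an opaque name with no internal structure, and its entire meaning in this interpretation is carried by the get/set operations indexed by it.

```python
# Toy model of "mutable storage" symbols: a symbol is an opaque name;
# its meaning comes entirely from the operations get[a] and set[a]
# indexed by it, not from anything inside the symbol itself.
class Symbol:
    def __init__(self, name):
        self.name = name          # for printing only; carries no semantics

store = {}                        # maps symbols (by identity) to cell contents

def get(sym):                     # the operation get[a], indexed by symbol a
    return store[sym]

def set_(sym, val):               # the operation set[a], indexed by symbol a
    store[sym] = val

a = Symbol('a')
set_(a, 42)
print(get(a))  # -> 42
```

Note that two distinct `Symbol('a')` objects name two distinct cells: the symbol's identity, not its spelling, indexes the operations — which is one way to read "a symbol is just a name, or index, for a family of operations."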