Aura of the Guardian and Damage order of operations

The Oath of Redemption Paladin subclass gets the following feature at 7th level (XGtE, 39):

you can shield your allies from harm at the cost of your own health. When a creature within 10 feet of you takes damage, you can use your reaction to magically take that damage, instead of that creature taking it. This feature doesn’t transfer any other effects that might accompany the damage, and this damage can’t be reduced in any way.

It is clear that the Paladin cannot reduce the damage they take, but it is unclear whether the damage the initial target would have taken could be reduced first. Is the rule that the transferred damage is the original damage roll, unreduced, or is it the amount the target would actually have taken after their own reductions?

For example:

Farla the fighter has the Heavy Armor Master feat, which grants:

While you are wearing heavy armor, bludgeoning, piercing, and slashing damage that you take from nonmagical weapons is reduced by 3.

Farla is under the effect of Warding Bond, which grants her resistance to all damage.

Farla is hit by a storm giant’s greatsword, which rolls 30 damage.

Heavy Armor Master reduces this to 27, which is then halved to 13 by resistance. Unfortunately, Farla has only 10 hp remaining, so Psi the Paladin uses Aura of the Guardian to take the damage instead.

How much damage does Psi take?

Looking for an assistant that understands basic server operations

Hi All,

I'm looking to hire an assistant on an hourly basis who understands the basics of Windows or Linux servers, and who is willing to learn. This would involve:

• Writing or re-writing content I give you, with direction
• Posting on forums I give you
• Supporting the live chat for my web hosting
• Fixing very basic server issues

I am posting here as this is the kind of crowd I'd like this person to be from.
Please PM me if you are interested and I will…


Easily create LinkedIn bots and automate operations.

Hey all,

I want to share a new GitHub project with you. It is a NodeJS API wrapper for LinkedIn's unofficial API.

This project helps developers build cool LinkedIn bots/services.
All you need is a working LinkedIn account and some basic knowledge of JavaScript/NodeJS/TypeScript.

These are the features my API provides (so far):
* Search for people, companies, and connections
* View profiles
* View sent and received invitations and send new invitations to any profile.
* Navigate…


Does the commit after a normal SELECT trigger any fsync or flush operations?

Com_* counters taken at a particular time for all 3 (insert, update, commit):

Com_insert is around 1.1K, Com_update around 1.6K, and Com_commit around 10K.

When enabling the general log, I can see that after each SELECT there is a COMMIT. Could these unnecessary commits cause any fsync or flush operations on the server?

innodb_flush_log_at_trx_commit=0.

Bitwise operations in FHE

I'm reading about FHE and the libraries implementing it (SEAL, HElib). I saw that SEAL doesn’t support bitwise operations, but I wondered if it is theoretically feasible. For example, XOR-ing an encrypted value with itself gets us an encrypted 0; XOR-ing it with the NOT of itself gets an encrypted 1. Using shifts with the two computed values I would then be able to extract the encrypted number. Shift right/left can be done using multiplication or division by (powers of) 2. But XOR-ing is the main problem. Is it theoretically possible under any FHE/HE scheme? What are the limitations? Thanks
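For what it's worth, over single-bit plaintexts XOR and AND are exactly addition and multiplication mod 2, which is the algebra that schemes with plaintext modulus 2 (e.g. HElib's binary-circuit mode, or TFHE) evaluate homomorphically. A minimal sketch of that plaintext algebra only, not of any actual encryption:

```python
# Plaintext algebra behind "bitwise FHE": with plaintext modulus 2,
# XOR is addition mod 2 and AND is multiplication mod 2 -- both are
# operations an additively/multiplicatively homomorphic scheme supports.
def xor(a, b):
    return (a + b) % 2    # addition mod 2

def and_(a, b):
    return (a * b) % 2    # multiplication mod 2

def not_(a):
    return (a + 1) % 2    # affine shift by the constant 1

for a in (0, 1):
    for b in (0, 1):
        assert xor(a, b) == a ^ b
        assert and_(a, b) == a & b
    assert xor(a, a) == 0        # x XOR x = 0, as in the question
    assert xor(a, not_(a)) == 1  # x XOR NOT(x) = 1
```

The catch for SEAL specifically is that BFV/CKKS are usually used with a large plaintext modulus, where addition no longer coincides with XOR; with modulus 2 the identity holds.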

Description

Suppose we have a string containing the letters ‘A’, ‘B’, ‘C’, ‘D’, whose characters are placed in a stack. We also have an empty stack. Ultimately, we want all letters grouped in the 2nd stack, using only 3 operations:

• push ("p"): Remove an item from the bottom of the 1st stack and place it on the top of the 2nd.
• complement ("c"): Replace every letter of the 1st stack with its "complement". The pairs are A – B and C – D.
• reverse ("r"): Reverse the contents of the 2nd stack. The top becomes the bottom and the bottom becomes the top.

Example of moves

+------+-------------+--------------+
| Move | First Stack | Second Stack |
+------+-------------+--------------+
|      | DBACA       |              |
+------+-------------+--------------+
| p    | DBAC        | A            |
+------+-------------+--------------+
| p    | DBA         | CA           |
+------+-------------+--------------+
| r    | DBA         | AC           |
+------+-------------+--------------+
| p    | DB          | AAC          |
+------+-------------+--------------+
| c    | CA          | AAC          |
+------+-------------+--------------+
| p    | C           | AAAC         |
+------+-------------+--------------+
| r    | C           | CAAA         |
+------+-------------+--------------+
| p    |             | CCAAA        |
+------+-------------+--------------+

Note that the example above finds a solution, but not the minimum solution. The correct answer would be "ppr ppp".

Correct examples

Spaces in the sequence have no meaning and are added for readability purposes.

+------------------------+-------------------------------------+
| First Stack (input)    | Moves (output)                      |
+------------------------+-------------------------------------+
| DD                     | pp                                  |
+------------------------+-------------------------------------+
| BADA                   | ppr pp                              |
+------------------------+-------------------------------------+
| DADA                   | ppc pp                              |
+------------------------+-------------------------------------+
| DBACA                  | pprppp                              |
+------------------------+-------------------------------------+
| BDA CACA               | ppr prp rppp                        |
+------------------------+-------------------------------------+
| CAC DCDC               | pcp cpc pcp cpp                     |
+------------------------+-------------------------------------+
| ADA DBD BCB DBCB       | ppr pcr pcr prp rpr prp rpr prp rp  |
+------------------------+-------------------------------------+
| DAB BCC DCC BDC ACD CC | ppc pcp cpp rpp rpp cpc ppr ppc prp |
+------------------------+-------------------------------------+

Brute force approach

We could just use a brute-force approach, calculating all possible moves until the first stack is empty. This could be done using BFS or the A* algorithm.

For example, we could initialize an empty queue, start from a parent node, and create 3 new nodes, one for each possible move, then add these nodes to the queue. Each time, remove a node from the queue and apply the operations, saving the sequence of moves as nodes are created. If the last move was a "c", then skip the "c" operation for this node; the same holds for the "r" operation (no repeated c's or r's). If stack1 is empty for a node, then finish the program and return the sequence of moves.
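A minimal Python sketch of this BFS, under two assumptions on my part: stack strings are written top-first (so "p" moves the last character of stack1 to the front of stack2), and the goal is taken to be "first stack empty and every letter of the second stack forming one contiguous block":

```python
from collections import deque

def grouped(s):
    """True if every letter of s appears in exactly one contiguous block."""
    seen = set()
    i = 0
    while i < len(s):
        c = s[i]
        if c in seen:
            return False
        seen.add(c)
        while i < len(s) and s[i] == c:
            i += 1
    return True

COMPLEMENT = str.maketrans("ABCD", "BADC")  # A<->B, C<->D

def solve(stack1):
    """Shortest move sequence (BFS) emptying stack1 into a grouped stack2.

    'p' moves the bottom (last) character of stack1 onto the top (front)
    of stack2; consecutive 'c's or 'r's are pruned since they cancel out.
    """
    start = (stack1, "")
    queue = deque([(start, "")])
    visited = {start}
    while queue:
        (s1, s2), moves = queue.popleft()
        if not s1 and grouped(s2):
            return moves
        last = moves[-1] if moves else ""
        nexts = []
        if s1:
            nexts.append((s1[:-1], s1[-1] + s2, "p"))
            if last != "c":
                nexts.append((s1.translate(COMPLEMENT), s2, "c"))
        if s2 and last != "r":
            nexts.append((s1, s2[::-1], "r"))
        for n1, n2, m in nexts:
            if (n1, n2) not in visited:
                visited.add((n1, n2))
                queue.append(((n1, n2), moves + m))
    return None
```

Since BFS explores by depth, the first goal state popped gives a minimum-length sequence (there may be several of the same length, so the exact string can differ from the tables above).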

Questions

Is there a better way to solve this problem? Can we apply some heuristics to improve the brute-force approach? Thank you in advance.

Intuition behind the entire concept of Fibonacci Heap operations

The following excerpts are from the section on Fibonacci heaps in the text Introduction to Algorithms by Cormen et al.

The potential function for a Fibonacci heap $$H$$ is defined as follows:

$$\Phi(H)=t(H)+2m(H)$$

where $$t(H)$$ is the number of trees in the root list of the heap $$H$$ and $$m(H)$$ is the number of marked nodes in the heap.

Before diving into the Fibonacci heap operations, the authors try to convince us of the essence of Fibonacci heaps as follows:

The key idea in the mergeable-heap operations on Fibonacci heaps is to delay work as long as possible. There is a performance trade-off among implementations of the various operations.($$\color{green}{\text{I do not get why}}$$) If the number of trees in a Fibonacci heap is small, then during an $$\text{Extract-Min}$$ operation we can quickly determine which of the remaining nodes becomes the new minimum node( $$\color{blue}{\text{why?}}$$ ). However, as we saw with binomial heaps, we pay a price for ensuring that the number of trees is small: it can take up to $$\Omega (\lg n)$$ time to insert a node into a binomial heap or to unite two binomial heaps. As we shall see, we do not attempt to consolidate trees in a Fibonacci heap when we insert a new node or unite two heaps. We save the consolidation for the $$\text{Extract-Min}$$ operation, which is when we really need to find the new minimum node.

Now the problem I am facing with the text is that they dive into proving the amortized cost mathematically using the potential method, without giving a vivid intuition of how or when the "credits" are stored as potential in the heap data structure and when they are actually used up. Moreover, in most places what is used is "asymptotic" analysis instead of actual mathematical calculation, so it is not possible to tell whether the constant in $$O(1)$$ for the amortized cost ($$\widehat{c_i}$$) is greater or less than the constant in $$O(1)$$ for the actual cost ($$c_i$$) of an operation.
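For reference, the potential-method identity applied in each row of the table below is

$$\widehat{c_i} \;=\; c_i + \Phi(H_i) - \Phi(H_{i-1}).$$

For example, for $$\text{Fib-Heap-Insert}$$: the new node simply joins the root list and no node becomes marked, so $$t$$ increases by $$1$$, $$m$$ is unchanged, $$\Delta\Phi = 1$$, and $$\widehat{c_i} = O(1) + 1 = O(1)$$.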

$$\begin{array}{|c|l|c|c|c|l|c|}
\hline
\text{Sl no.} & \text{Operation} & \widehat{c_i} & c_i & \text{Method of cal. of } \widehat{c_i} & \text{Cal. steps} & \text{Intuition}\\
\hline
1 & \text{Make-Fib-Heap} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi=0;\ \widehat{c_i}=c_i=O(1) & \text{None}\\
\hline
2 & \text{Fib-Heap-Insert} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi=1;\ \widehat{c_i}=c_i+1=O(1) & \text{None}\\
\hline
3 & \text{Fib-Heap-Min} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi=0;\ \widehat{c_i}=c_i=O(1) & \text{None}\\
\hline
4 & \text{Fib-Heap-Union} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi=0;\ \widehat{c_i}=c_i=O(1) & \text{None}\\
\hline
5 & \text{Fib-Extract-Min} & O(D(n)) & O(D(n)+t(n)) & \text{Asymptotic} & \Delta\Phi=D(n)-t(n)+1 & \dagger\\
\hline
6 & \text{Fib-Heap-Decrease-Key} & O(1) & O(c) & \text{Asymptotic} & \Delta\Phi=4-c & \ddagger\\
\hline
\end{array}$$

$$\dagger$$ – The cost of performing each link is paid for by the reduction in potential due to the link’s reducing the number of roots by one.

$$\ddagger$$ – This explains why the potential function was defined to include a term that is twice the number of marked nodes. When a marked node $$y$$ is cut by a cascading cut, its mark bit is cleared, so the potential is reduced by $$2$$. One unit of potential pays for the cut and the clearing of the mark bit, and the other unit compensates for the unit increase in potential due to node $$y$$ becoming a root.

Moreover, the authors deal with the notion of marking the nodes of a Fibonacci heap, with the background that the marks are used to bound the amortized running time of the $$\text{Decrease-Key}$$ and $$\text{Delete}$$ algorithms, but not much intuition is given for their use. What would go wrong if we did not use markings, or if we performed $$\text{Cascading-Cut}$$ when the number of children lost from a node is not just $$2$$ but possibly more? The excerpt corresponding to this is as follows:

We use the mark fields to obtain the desired time bounds. They record a little piece of the history of each node. Suppose that the following events have happened to node $$x$$:

1. at some time, $$x$$ was a root,
2. then $$x$$ was linked to another node,
3. then two children of $$x$$ were removed by cuts.

As soon as the second child has been lost, we cut $$x$$ from its parent, making it a new root. The field $$mark[x]$$ is true if steps $$1$$ and $$2$$ have occurred and one child of $$x$$ has been cut. The Cut procedure, therefore, clears $$mark[x]$$ in line $$4$$, since it performs step $$1$$. (We can now see why line $$3$$ of $$\text{Fib-Heap-Link}$$ clears $$mark[y]$$: node $$y$$ is being linked to another node, and so step $$2$$ is being performed. The next time a child of $$y$$ is cut, $$mark[y]$$ will be set to $$\text{TRUE}$$.)

Strictly speaking, I do not get the intuition behind $$mark$$ in the block quote above, especially the logic of the part in bold italics.

[EDIT: The intuition for why to use the marking in the stated way was made clear to me by the lucid answer here, but I still do not get the cost benefit we obtain by using markings.]

Note: This is quite a difficult question in the sense that it involves describing the problem I am facing in understanding the intuition behind the concept of Fibonacci heap operations, which in fact relates to an entire chapter of the CLRS text. If it demands too much in a single question, please tell me and I shall split it into parts accordingly. I have made my utmost attempt to make the question clear. If the meaning is unclear in places, please tell me and I shall rectify it. The entire corresponding portion of the text can be found here. (Even the authors say that it is a difficult data structure, having mostly theoretical importance.)

Count number of ways in which atomic operation(s) of n different processes can be interleaved

PROBLEM: Count the number of ways in which the atomic operations of n different processes can be interleaved. A process may crash midway, before completion.

Suppose there are a total of n different processes: P1, P2, P3, …, Pn.

Each process consists of a variable number of atomic operations, but it must have at least one operation.

EXAMPLE

Consider two processes, P1 and P2

• P1: 1o1; 1o2; 1o3; 1o4; 1o5; 1o6;
• P2: 2o1; 2o2; 2o3;

where 1o1 denotes the first operation of process P1.

Attempt:

Fix the positions of all operations of process P1, then count the number of ways in which the operations of process P2 can be placed in the empty positions ( __ ) created between operations of process P1, as shown below:

__ 1o1 __ 1o2 __ 1o3 __ 1o4 __ 1o5 __ 1o6 __

There are seven empty positions numbered 1 to 7.

Counting: (note that the numbers below, like 1 2 3, denote the empty position numbers)

> Case 1: All three operations of P2 are placed in consecutive empty positions.
>
>     1 2 3
>     2 3 4
>     3 4 5
>     4 5 6
>     5 6 7
>
> We have a total of 5 orderings possible for the empty positions.
>
> Case 2: Operations of P2 are placed with two consecutive empty positions taken together.
>
>     1 2 3   2 3 4   3 4 5   4 5 6   5 6 7
>     1 2 4   2 3 5   3 4 6   4 5 7
>     1 2 5   2 3 6   3 4 7
>     1 2 6   2 3 7
>     1 2 7
>
> The first cell in every column has already been counted in the previous case. We have a total of (5 - 1) + (4 - 1) + (3 - 1) + (2 - 1) + (1 - 1) = 10 orderings possible for the empty positions. A similar argument can be made for the last two consecutive empty positions taken together, which gives us another 10 orderings.
>
> Case 3: These are those cases that do not have empty positions numbered 8 and 9 for them.
>
>     6 7 8
>     7 8 9
>
> Case 4: Operations may crash midway before completion. An 'x' denotes a position where a crash is possible and the process (here P2) terminates.
>
>     1x 2x 3
>     2x 3x 4
>     3x 4x 5
>     4x 5x 6
>     5x 6x 7
>     6x 7x 8
>     7x 8x 9
>
> There is a total of 14 'x's possible.
>
> Note: I have not put a cross after the last empty position number because I am assuming that a process will complete at this stage. You may correct my assumption if this is wrong and it should not be assumed in the first place.

Adding all 4 cases: 5 + 2·10 + 2 + 14 = 41. There are 41 possible ways to interleave the operations of processes P1 and P2.

As you can see, counting like this is cumbersome and error-prone; I have missed cases.

How can this counting problem be generalised? Please see the problem statement at the top of the question.
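Without crashes, the standard closed form for interleaving processes with operation counts $$l_1, \dots, l_n$$ (each process's internal order fixed) is the multinomial coefficient $$(l_1+\dots+l_n)!/(l_1!\cdots l_n!)$$. A Python sketch follows; the crash model in the second function is an assumption on my part (each process may stop after any prefix, possibly empty, and a schedule is identified with the sequence of operations actually executed), which is a different bookkeeping from the 'x' positions in the attempt above:

```python
from math import factorial
from itertools import product

def interleavings(lengths):
    """Complete interleavings of processes with the given operation
    counts: the multinomial coefficient (sum l_i)! / prod(l_i!)."""
    total = factorial(sum(lengths))
    for l in lengths:
        total //= factorial(l)
    return total

def interleavings_with_crashes(lengths):
    """Assumed crash model: each process independently executes some
    prefix of its operations (0 .. l_i of them), and a schedule is the
    interleaving of the executed prefixes; sum over all prefix tuples."""
    return sum(interleavings(prefix)
               for prefix in product(*[range(l + 1) for l in lengths]))
```

For the example above (P1 with 6 operations, P2 with 3), `interleavings([6, 3])` gives C(9, 3) = 84 complete interleavings, which already exceeds the 27 counted in Cases 1–3, consistent with the observation that cases were missed.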

Find the probability of occurrence of each edge in a graph having $n$ nodes after $k$ operations

Given a graph with $$n$$ nodes whose adjacency matrix is given, we perform $$k$$ operations on it. In each operation we choose two distinct nodes $$i$$ and $$j$$ uniformly at random (all $$n(n-1)/2$$ choices are equiprobable); if there exists an edge between $$i$$ and $$j$$ we delete that edge, otherwise we draw an edge between the chosen pair of nodes.
We have to output an $$n \times n$$ matrix whose $$(i,j)$$-th entry is the probability that the edge connecting nodes $$i$$ and $$j$$ occurs in the final resulting graph.
The constraints are $$n \le 50$$ and $$k \le 50$$.
I tried it using dynamic programming but could not figure out the transitions properly.
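One way to set up the transitions: each operation toggles a specific pair $$\{i,j\}$$ with probability $$p = 2/(n(n-1))$$, so the marginal state of each edge is a two-state Markov chain, independent of which edge it is. A Python sketch of the per-edge DP over the $$k$$ operations:

```python
def edge_probabilities(adj, k):
    """adj: n x n 0/1 adjacency matrix; k operations.

    Each operation toggles a uniformly chosen pair, so a fixed edge is
    toggled with probability p = 2 / (n*(n-1)) per step. DP transition
    for q = P(edge present): q <- q*(1-p) + (1-q)*p.
    """
    n = len(adj)
    p = 2.0 / (n * (n - 1))
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            q = float(adj[i][j])
            for _ in range(k):
                q = q * (1 - p) + (1 - q) * p
            out[i][j] = q
    return out
```

The recurrence also has a closed form via the parity of the number of toggles: $$q_k = \tfrac{1}{2}\bigl(1 \pm (1-2p)^k\bigr)$$, with the sign depending on whether the edge was initially present.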
Using big-O notation, estimate in terms of a simple function of $$n$$ the number of bit operations required to compute $$3^n$$ in binary.

I need some help with the above question. The number of bit operations required to multiply two $$k$$-bit numbers is $$O(k^2)$$. In the first step I am multiplying two 2-bit numbers, in the 2nd step a 4-bit and a 2-bit number, and so on; at the $$i$$-th step I multiply the roughly $$2i$$-bit number $$3^i$$ by the 2-bit number $$3$$. So I feel the total number of bit operations will be $$\sum_{i=1}^{n-1} O(i \cdot k^2)$$ with $$k = 2$$, i.e. $$O(n^2)$$.
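The key fact behind the estimate is that $$3^i$$ has $$\lfloor i\log_2 3\rfloor + 1 \approx 1.585\,i$$ bits, so each multiply-by-3 step touches $$O(i)$$ bits and the total is $$\sum_{i=1}^{n-1} O(i) = O(n^2)$$. A quick Python check of that linear bit growth:

```python
from math import log2

# The bit length of 3^i grows linearly, ~ i*log2(3) ~ 1.585*i, so the
# i-th multiply-by-3 step costs O(i) bit operations and computing 3^n by
# repeated multiplication costs O(n^2) bit operations (schoolbook method).
for i in (1, 10, 100, 1000):
    assert abs((3 ** i).bit_length() - i * log2(3)) <= 1
```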