I am unable to understand the logic behind this code (I’ve added my exact questions as comments in the code).

Our local ninja Naruto is learning to make shadow clones of himself and is facing a dilemma. He only has a limited amount of energy (e) to spare, which he must distribute entirely among all of his clones. Moreover, each clone requires at least a certain amount of energy (m) to function. Your job is to count the number of different ways he can create shadow clones. Example:


e = 7, m = 2 → ans = 4

The following possibilities occur:

Make 1 clone with 7 energy

Make 2 clones with 2, 5 energy

Make 2 clones with 3, 4 energy

Make 3 clones with 2, 2, 3 energy.

Note: <2, 5> is the same as <5, 2>. Make sure the ways are not counted multiple times because of different ordering.


#include <stdio.h>

int count(int n, int k) {
    if ((n < k) || (k < 1)) return 0;
    else if ((n == k) || (k == 1)) return 1;
    else return count(n - 1, k - 1) + count(n - k, k);   // logic behind this?
}

int main() {
    int e, m;            // e is total energy and m is min energy per clone
    scanf("%d %d", &e, &m);
    int max_clones = e / m;
    int i, ans = 0;
    for (i = 1; i <= max_clones; i++) {
        int available = e - ((m - 1) * i);   // why is it (m-1)*i instead of m*i?
        ans += count(available, i);
    }
    printf("%d\n", ans);
    return 0;
}

Books for learning about Digital logic, circuits, logic design etc

I am a computer science student taking courses named “Fundamentals of Electronics and Digital Systems”, “Logic Design and Switching Circuits”, and “System Analysis and Design”. I searched for books that might help me with these courses and found one named “Digital Logic and Computer Design” by Mano. I was wondering if anyone could suggest some more books that would help me master these topics. Thanks.

Show that there exists a finite set of clauses F in first-order logic such that Res*(F) is infinite

I’m kind of desperate at this point about this question.

A predicate-logic resolution derivation of a clause $ C$ from a set of clauses $ F$ is a sequence of clauses $ C_1,\dots,C_m$ , with $ C_m = C$ such that each $ C_i$ is either a clause of $ F$ (possibly with the variables renamed) or follows by a resolution step from two preceding clauses $ C_j ,C_k$ , with $ j, k < i$ . We write $ \operatorname{Res}^*(F)$ for the set of clauses $ C$ such that there is a derivation of $ C$ from $ F$ .

The question is to give an example of a finite set of clauses $ F$ in first-order logic such that $ \operatorname{Res}^*(F)$ is infinite.

Any help would be appreciated!
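One small candidate worth checking (my own suggestion, not necessarily the intended solution): a unit clause plus a clause whose resolvent keeps growing a function term.

```latex
% Candidate set:
%   F = \{\, P(a),\;\; \lnot P(x) \lor P(f(x)) \,\}.
% Resolving P(a) against \lnot P(x) \lor P(f(x)) with unifier x := a
% yields P(f(a)); resolving that resolvent against a renamed copy
% \lnot P(y) \lor P(f(y)) with y := f(a) yields P(f(f(a))), and so on.
% Hence P(f^n(a)) \in \operatorname{Res}^*(F) for every n \ge 0,
% so \operatorname{Res}^*(F) contains infinitely many distinct clauses.
```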

Logic behind a single-tape NTM solving the TSP in $O({n}^4)$ time at most

I was going through the classic text “Introduction to Automata Theory, Languages, and Computation” by Hopcroft, Motwani, and Ullman, where I came across a claim that a “single-tape NTM can solve the TSP in $ O({n}^4)$ time at most”, where $ n$ is the length of the input given to the Turing machine (an instance of the TSP). The authors assume the following encoding scheme:

The TSP can be couched as: “Given this graph $ G$ and limit $ W$ , does $ G$ have a Hamiltonian circuit of weight $ W$ or less?”


Let us consider a possible code for the graphs and weight limits that could be the input. The code has five symbols, $ 0$ , $ 1$ , the left and right parentheses, and the comma.

  1. Assign integers $ 1$ through $ m$ to the nodes.

  2. Begin the code with the value of $ m$ in binary and the weight limit $ W$ in binary, separated by a comma.

  3. If there is an edge between nodes $ i$ and $ j$ with weight $ w$ , place $ (i, j, w)$ in the code. The integers $ i$ , $ j$ , and $ w$ are coded in binary. The order of $ i$ and $ j$ within an edge, and the order of the edges within the code are immaterial. Thus, one of the possible codes for the graph of Fig. with limit $ W = 40$ is

$ 100, 101000(1, 10, 1111)(1, 11, 1010)(10, 11, 1100)(10, 100, 10100)(11, 100, 10010)$

The authors move to the claim as follows:

It appears that all ways to solve the TSP involve trying essentially all cycles and computing their total weight. By being clever, we can eliminate some obviously bad choices. But it seems that no matter what we do, we must examine an exponential number of cycles before we can conclude that there is none with the desired weight limit $ W$ , or to find one if we are unlucky in the order in which we consider the cycles.

On the other hand, if we had a nondeterministic computer, we could guess a permutation of the nodes, and compute the total weight for the cycle of nodes in that order. If there were a real computer that was nondeterministic, no branch would use more than $ O(n)$ steps if the input was of length $ n$ . On a multitape $ NTM$ , we can guess a permutation in $ O({n}^2)$ steps and check its total weight in a similar amount of time. Thus, a single-tape $ NTM$ can solve the TSP in $ O({n}^4)$ time at most.

I cannot understand the latter part of the above paragraph (shown in bold in the text). Let me focus on the points of my problem.

  1. What do they mean that the branch would take no more than $ O(n)$ ?
  2. On a multitape $ NTM$ , how can we guess a permutation in $ O({n}^2)$ steps ?
  3. How are we getting the final $ O({n}^4)$ ?

I get neither the logic nor the computation of the complexity, as very little is written in the text.

Need help to understand the math behind the logic for scheduling problem using Reinforcement Learning

I am working on a problem of scheduling VMs for efficient resource and energy utilisation, and I came across this paper. I understand RL and how Q-Learning, which they use in the paper, works. However, I have not been able to achieve an intuitive understanding of the algorithm suggested (page 3).

I understand that equal importance has been given to utilisation and power consumption, but with, let’s say, opposite signs. But Step 3 is not intuitive. Can someone help me get a better understanding of this algorithm?

Hoare’s Logic partial/total correctness

So a friend of mine does freelancing, and he needed some help with a question about Hoare logic. He handed the problem to me with a pretty tight deadline. I had no idea what Hoare logic is, so I looked up some videos on YouTube (channel name: COMP1600 Videos) and got some understanding of the topic. But looking at the question, I really have no idea what to do or where to begin. I have tried some things following the rules in the YouTube videos, but I don’t think it’s any progress. Below is the problem he gave me:

Given the following program,

PV(int x, int y) {
    while (true) {
        if (x <= 50)
            y++;
        else
            y--;
        if (y < 0)
            break;
        x++;
    }
    assert(x == 102);
}

assume that the precondition is x ≤ 50 ∧ x ≥ 0 ∧ y ≤ 50 ∧ y ≥ 0 ∧ x = y. Prove, based on Hoare logic, that assertion failure never occurs.

(a) Prove the partial correctness of the function.

(b) Prove the total correctness by additionally showing that the function is always terminating.

I am completely lost as what to do and where to begin.

I need help with my Logic (Resize an image)

Hey Guys!!

I'm trying to put some PHP code to resize an image.

Basically, users upload an image, and that image gets placed on a PDF document, but I need to limit how big (in pixels) that image is. If its height is too large, it will push the contents onto the 2nd page, which I don't want.

Currently, I have this code, which works to some extent, but what I want is: if the width of the picture is more than 80, that's totally OK, as it doesn't push the page down:

But then again I wanna…


Logic minimization via 2-input NOR gates: Is it monotone w.r.t. adding a minterm?

  • notation: $ x+y:=\mbox{OR}(x,y)$ , $ \bar x:=\mbox{NOT}(x)$ , $ xy:=\mbox{AND}(x,y)$ , 1:=TRUE, 0:=FALSE.

  • Let $ f$ be a Boolean function of $ n$ -variables, i.e. $ f: \{0,1\}^n \to \{0,1\}$ .

  • minterm:= any product (AND) of $ n$ literals (complemented or uncomplemented). e.g, $ x_1 \bar x_2 x_3 $ is a minterm in 3 variables

  • $ \mbox{NOR2}(f)$ is the minimum number of 2-input NOR gates required to represent a given function $ f$ . For instance, $ \mbox{NOR2}(x_1 x_2)=3$ .

Let $ f_1= m_1, f_2=m_2$ , where $ m_1, m_2$ are minterms that are co-prime (i.e., $ f_1+f_2$ can’t be minimized further; in other words, $ m_1, m_2$ are prime implicants of $ f_1+f_2$ ). For instance, $ x_1 \bar x_2 x_3 $ and $ x_1 x_2 \bar x_3 $ are co-prime.

Then, is the following true? $$ \mbox{NOR2}(f_1+f_2)\ge \mbox{max}\{ \mbox{NOR2}(f_1), \mbox{NOR2}(f_2) \}$$

[i.e., adding two co-prime minterms can’t yield a 2-input NOR circuit with fewer gates]

I think it is true but I can’t think of a proof. Any ideas on how to start proving it?

Using conditional logic on fields in gravity form based on Paid Memberships Pro membership level

I’m trying to use conditional logic to hide/show fields in my gravity form based on user’s membership level determined by Paid Memberships Pro. I’ve tried all different variations using {user:membership_level} in a hidden field. Then, using conditional logic on selected fields, I’ve tried to hide/show based on the value of the membership level.

Any help would be appreciated. Is my meta key ({user:membership_level}) incorrect?

What is the logic or fallacy behind the Perpetual Power Point trick?

The Perpetual Power Point trick uses two feats.

The first is Azure Talent, which grants power points at a 1:2 ratio, i.e., two power points out per point of incarnum invested. When essentia is invested, the feat locks for the day, as usual for an incarnum receptacle.

The second is Psycarnum Infusion, which allows one to expend psionic focus in exchange for treating one incarnum receptacle as if it had maximum incarnum until the beginning of your next turn.

The idea is then to refocus and repeat, probably with the Meditation feat to reduce the time.

In theory, this means a small but almost perpetual supply of power points.

Thus, what is the logic (it works) or fallacy (it doesn’t) behind this Perpetual Power Point trick?