In Ghost Ops do NPCs get free attacks only on total Bullet Time failure or also on partial failure?

I have the original version of Ghost Ops (which uses Fudge dice), not the Savage Worlds version or the OSR version. This question is about that original version, but if you think the rules in one of the other versions can throw some light on this, please chip in.

On page 132 of the core rulebook there is an example of a failed Bullet Time action. The PC was attempting to shoot 3 NPCs in the head, and needed an 8 but only got a 6.

The book then has some more rules:

The Handler can decide that the Operator succeeded in some of the attempt. Maybe they barged the door and managed to get 2 of the attempted headshots off but missed the third. Failing a Bullet Time event places the Operator as prone for 1 round, allowing any Tangos free attacks. Deciding to attempt Bullet Time is risky but can be ultimately rewarding.

So, if the GM has said the failed roll can be partial success (hit 2 of the NPCs) and partial failure (miss the 3rd NPC), which of these applies?

  1. It still counts as a normal fail – the PC is prone and subject to a free attack by all three NPCs (assuming the two he shot aren’t dead or disabled).
  2. It still counts as a ‘reduced’ fail – the PC is prone but only the third NPC, who was not hit, gets a free attack.
  3. It counts as a success – the PC is not prone and the NPC/s don’t get free attacks.
  4. The GM decides on a case-by-case basis.

I’m hoping there is clarification for this question in one of the expansions, or in an updated version of the pdf (I only have a print copy). I’ve failed to find any errata on the internet.

Termination of term rewriting using a strict partial order on subterms

Are there any good books, research reports, surveys, theses, or papers that present proof techniques, with clear termination proofs, for term-rewriting problems of the following form?

Terms are represented by directed acyclic graphs in which the terms are vertices, with arcs labelled $arg_1, \dots, arg_n$ pointing to the immediate sub-terms. There are additional equality arcs between vertices. Thinking of the transitive closure of the equality arcs as equivalence classes of "merged" vertices, the $arg$ arcs in the graph form a lattice (because of the strict order on sub-terms, and because some sub-terms may be shared).

A rewrite rule adds extra arcs such that the existing partial order is preserved and extended, so the rewrite rules construct a series of partial orders (represented by the graph state at each step) $p_0 \subset \dots \subset p_m$, more and more "constraining" the partial-order relation between vertices, until either the rewrite rules find nothing to rewrite or a rewrite would introduce a cycle (immediately detectable by a depth-first search).

I think this kind of termination proof is correct, because we can say every step was a reduction under the partial order $p_m$, but I'd like a formal justification, since I worry about not knowing $p_m$ beforehand, only once it is constructed. And if the rewriting finds a cycle, then that cycle was implicit from the beginning. Again, I think that's OK, because my rewrite rules are provably LHS iff RHS, so they transform the problem to an equivalent problem. I call this "construct a partial order or die trying." Is there a more formal name for this kind of proof?
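The "construct a partial order or die trying" step can be sketched as follows (a minimal illustration; `reaches` and `add_arc` are invented names, not from any library). Each accepted arc extends the current strict partial order $p_i$ to $p_{i+1}$, and an arc that would close a cycle is rejected by a DFS reachability check:

```python
def reaches(graph, src, dst):
    """DFS: is dst reachable from src along existing arcs?"""
    stack, seen = [src], set()
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, ()))
    return False

def add_arc(graph, u, v):
    """Record u < v, extending the current strict partial order.
    Refuse ("die trying") if v already reaches u, since adding
    u -> v would then close a cycle."""
    if u == v or reaches(graph, v, u):
        raise ValueError("ordering constraint would introduce a cycle")
    graph.setdefault(u, set()).add(v)

order = {}
add_arc(order, 'a', 'b')   # a < b
add_arc(order, 'b', 'c')   # b < c, so transitively a < c
# add_arc(order, 'c', 'a') would raise: it closes the cycle a < b < c < a
```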

Ideally the proof examples would be constructive and mathematically thorough. Some papers I have seen assume a lot of prior knowledge, probably because of brevity requirements and not wanting to bore an expert audience. Others give "wordy" explanations, which are great for intuitive understanding, but proofs should not depend on them.

Can partial Turing completeness be quantified as a subset of Turing-computable functions?

Can partial Turing completeness be coherently defined this way:
An abstract machine or programming language can be construed as Turing complete on its computable subset of the Turing-computable functions.

In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):

Turing completeness: A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful).
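The idea of a system covering only a proper subset of the Turing-computable functions can be made concrete with a toy non-Turing-complete language. The sketch below (illustrative only; the program representation is made up) interprets LOOP programs, which have bounded loops only. Every LOOP program halts, so the language computes exactly the primitive recursive functions, a proper subset of the Turing-computable functions:

```python
# Illustrative sketch: an interpreter for the LOOP language, which has
# only bounded loops.  Every LOOP program halts, so the language is not
# Turing-complete: it computes exactly the primitive recursive functions.

def run(prog, regs):
    """Interpret a LOOP program given as nested tuples:
    ('inc', i)        increment register i
    ('zero', i)       set register i to 0
    ('seq', p1, ...)  run sub-programs in order
    ('loop', i, body) run body regs[i] times (bound fixed on entry)
    """
    op = prog[0]
    if op == 'inc':
        regs[prog[1]] += 1
    elif op == 'zero':
        regs[prog[1]] = 0
    elif op == 'seq':
        for p in prog[1:]:
            run(p, regs)
    elif op == 'loop':
        for _ in range(regs[prog[1]]):  # bound read before looping
            run(prog[2], regs)
    return regs

# addition as a LOOP program: r0 := r0 + r1
add = ('loop', 1, ('inc', 0))
print(run(add, {0: 3, 1: 4})[0])  # -> 7
```

On the question's phrasing: such a language is "Turing complete on" the primitive recursive functions in the sense that it computes all of them, but there is no unbounded search, so a function like the Ackermann function (total but not primitive recursive) is outside its reach.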

Annotations for partial correctness

I am reading about Hoare logic, but I don’t really understand the annotations part. This is what I know about annotations:

The inserted annotation is expected to be true whenever the execution reaches the point of the annotation 

This function f(x,n) calculates x^n

[image of the code for f(x, n)]

For the given code, the answers for annotations are the following:

before while:

(K > 0) ^ (Y x P^K = X^n) 

after while statement:

(K > 0) ^ (Y x P^K = X^n) 

I understand the K>0 part but I don’t get why we’re using the second part.
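The actual code is only in the image, so the following is a hypothetical reconstruction: a loop with these variable names whose invariant is Y · P^K = X^N. The second conjunct is the loop invariant: it ties the changing variables Y, P, K to the fixed goal X^N, so that when the loop exits (guard false, hence K = 0, hence P^K = 1) it collapses to Y = X^N:

```python
# Hypothetical reconstruction (the original code is only in an image):
# a loop matching the annotation's variable names, with the invariant
# Y * P**K == X**N checked at each iteration.
def f(x, n):
    X, N = x, n          # capitalised names as in the annotations
    Y, P, K = 1, X, N
    while K > 0:
        # loop invariant: Y * P**K == X**N
        assert Y * P ** K == X ** N
        Y = Y * P
        K = K - 1
    # exit: the guard failed, so K == 0; with the invariant, Y == X**N
    assert K == 0 and Y == X ** N
    return Y

print(f(2, 5))  # -> 32
```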

How to show that a partial function is recursive?

I try to prove that this function is recursive: $$f(x_1,x_2)= \begin{cases} 2x_1-x_2 & \text{if } x_1 \geqslant \sqrt{x_2} \\ \bot & \text{otherwise} \end{cases}$$

I think that I need to use the minimization operator, but I don’t know how to do that. Maybe I have to prove that $$\mu y\,(|x_1-\sqrt{x_2}| = 0)$$ ?
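One standard trick (a sketch, assuming natural-number arguments and truncated subtraction $\mathbin{\dot-}$, so all values stay in $\mathbb{N}$) is to remove the square root first: for naturals, $x_1 \geqslant \sqrt{x_2} \iff x_1^2 \geqslant x_2 \iff x_2 \mathbin{\dot-} x_1^2 = 0$. A minimization over a predicate that does not depend on $y$ then returns $0$ when the condition holds and diverges otherwise:

```latex
f(x_1, x_2) \simeq (2x_1 \mathbin{\dot-} x_2)
  + \mu y \bigl[\, x_2 \mathbin{\dot-} x_1^2 = 0 \,\bigr]
```

When $x_1^2 \geqslant x_2$, the $\mu$-term evaluates to $0$ and $f$ returns $2x_1 \mathbin{\dot-} x_2$; otherwise the search never succeeds and $f$ is undefined ($\bot$), as required. (If $2x_1 - x_2$ can be negative on the domain, the value part needs a separate encoding; this sketch assumes it stays non-negative.)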

Why can’t I solve a system of four third-order partial differential equations using NDSolve?

I am solving an optimal control problem. First, I solve a system of two equations (with respect to the variables p, q, V, n) with already-optimal controls (Kopt and Sopt), to check whether such a system can be solved at all. It turns out that everything is OK, but the nonlinear part of the equation cannot be discarded; it is important. I am using the NDSolve function with the StiffnessSwitching specification:

eq1 = D[n[x, t], t] + D[n[x, t]*V[x, t], x] == 0;
eq2 = D[V[x, t], t] + Kopt[t]*x -
    2*(2*250^2)*Sopt[t]*Sin[k*x]*Cos[k*x]*k + V[x, t]*D[V[x, t], x] +
    g*D[n[x, t], x] -
    D[D[Sqrt[n[x, t]], x, x]/(2*Sqrt[n[x, t]]), x] == 0;

eqo1 = D[n[x, t], t] + D[n[x, t]*V[x, t], x] == 0;
eqo2nl = D[V[x, t], t] + Kopt[t]*x -
    8*(2*250^2)*Sopt[t]*Sin[k*x]*Cos[k*x]*k + V[x, t]*D[V[x, t], x] +
    g*D[n[x, t], x] -
    ((D[n[x, t], x])^3 -
       2*n[x, t]*D[n[x, t], x]*D[n[x, t], {x, 2}] +
       D[n[x, t], {x, 3}]*(n[x, t])^2)/(4*(n[x, t])^3) == 0;

cond1 = n[x, t] == ((Sqrt[Kzero]/Pi)^(1/2))*Exp[-0.5*Sqrt[Kzero]*x^2] /. t -> 0;
cond2 = n[x, t] == 0. /. x -> -10;
cond3 = n[x, t] == 0. /. x -> 10;
cond4 = V[x, t] == 0. /. t -> 0;
cond5 = V[x, t] == 0. /. x -> -10;
cond6 = V[x, t] == 0. /. x -> 10;

sol5 = NDSolve[{eqo1, eqo2nl, cond1, cond2, cond3, cond4, cond5, cond6},
   {V, n}, {x, -10, 10}, {t, 0, Tv},
   Method -> {"StiffnessSwitching", Method -> {"ExplicitRungeKutta", Automatic}},
   AccuracyGoal -> 1, PrecisionGoal -> 1];

Next, I solve four equations in order to find the control parameter K (which depends on q), while Sopt is still known. The initial conditions are the same and as simple as possible. I add two more equations and the formula for K[t]:

K[t_] := -1*Integrate[q[x, t]*x^2, {x, -10, 10}];

eqo3 = D[q[x, t], t] == -n[x, t]*D[p[x, t], x] - V[x, t]*D[q[x, t], x];
eqo4 = D[p[x, t], t] + D[p[x, t], x]*V[x, t] + g*D[q[x, t], x] == 0;

The linearized system has a solution, but the nonlinear one does not! There are many problems, including the order of the equation:

The spatial derivative order of the PDE may not exceed two. 

When I try to remove the third-order derivative from the nonlinear part, I get:

The maximum derivative order of the nonlinear PDE coefficients for the Finite Element Method is larger than 1. It may help to rewrite the PDE in inactive form. 

Why does it even use only the FEM now? Why was the nonlinear system of two equations solved without problems before? What should I do? Please help.

How to compute a (partial) consequence set for premises in first-order logic?

I am playing with the Sequent Calculus Trainer. It is a game with judgments, where each judgment consists of: 1) formulas on the left-hand side (premises); 2) a consequence symbol; 3) formulas on the right-hand side (consequences). The rules of the sequent calculus make it possible to establish that one judgment (a system of premises and consequences) is equal to (derivable from) another judgment. That is fine.

But my question is: how can we compute the right-hand side (consequences) from the left-hand side (premises)? How can we compute the set of consequences from the axioms and facts (a partial assignment of variables)? What kind of operation is it, and what type of computer software is used for calculating it? Is it some kind of computer algebra software?

Of course, the consequence set can be infinite, but for applications of the logic in argumentation it is sufficient to compute a partial consequence set and also to use some tricks (relational reinforcement learning) to determine in which direction to explore the consequence set.
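One way to compute such a partial consequence set mechanically is forward chaining (the saturation procedure behind Datalog engines and many theorem provers). The sketch below is illustrative only, assuming propositional Horn rules given as (premises, conclusion) pairs over ground atoms; the `max_rounds` bound keeps the computation finite even when the full consequence set would not be:

```python
# Illustrative sketch: a partial consequence set by forward chaining.
# Rules are (premises, conclusion) pairs; starting from the facts, we
# repeatedly fire every rule whose premises are already derived, either
# until saturation (a fixpoint) or until the round bound is hit.

def consequences(facts, rules, max_rounds=10):
    derived = set(facts)
    for _ in range(max_rounds):
        new = {head for body, head in rules
               if set(body) <= derived and head not in derived}
        if not new:
            break          # saturated: nothing more is derivable
        derived |= new
    return derived

rules = [({'p'}, 'q'), ({'q', 'r'}, 's')]
print(sorted(consequences({'p', 'r'}, rules)))  # -> ['p', 'q', 'r', 's']
```

For full first-order logic the analogous operation is performed by automated theorem provers and answer-set/Datalog systems rather than computer algebra software, and the bound (or a guidance heuristic, as the question suggests) is what makes the exploration partial.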