Transitive reduction with vertex additions?

The transitive reduction of a (finite) directed graph is a graph with the same vertex set and reachability relation and a minimum number of edges. However, what if vertex additions are allowed? In some cases, adding vertices can considerably reduce the number of edges required. For example, a complete bipartite digraph $K_{a,b}$ has $a + b$ vertices and $ab$ edges, but adding a single vertex in the middle yields a digraph with the same reachability relation that has $a + b + 1$ vertices and only $a + b$ edges.

More formally, given a directed graph $G = (V, E)$, the challenge is to find $G' = (V', E')$ and an injective function $f: V \rightarrow V'$ such that $f(b)$ is reachable from $f(a)$ in $G'$ if and only if $b$ is reachable from $a$ in $G$, and such that $|E'|$ is minimized.
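For intuition, the bipartite example can be checked mechanically. The sketch below (representation and helper names are my own, standard library only) builds $K_{a,b}$ as an adjacency map, builds the hub construction with one added middle vertex `h`, and verifies that both give the same reachability relation on the original vertices while using $a+b$ edges instead of $ab$:

```python
from collections import defaultdict

def reachable_from(adj, start):
    """Vertices reachable from `start` via at least one edge (iterative DFS)."""
    seen, stack = set(), list(adj[start])
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen

def closure_on(adj, verts):
    """Reachability relation restricted to the vertex set `verts`."""
    return {(u, v) for u in verts for v in reachable_from(adj, u) if v in verts}

a, b = 4, 5
left = [f"u{i}" for i in range(a)]
right = [f"v{j}" for j in range(b)]

# K_{a,b}: every left vertex points to every right vertex -> a*b edges
kab = defaultdict(list)
for u in left:
    kab[u] = list(right)

# Hub construction: one added vertex "h" in the middle -> a+b edges
hub = defaultdict(list)
for u in left:
    hub[u] = ["h"]
hub["h"] = list(right)

orig = left + right
assert closure_on(kab, orig) == closure_on(hub, orig)
print(sum(len(vs) for vs in kab.values()), "vs",
      sum(len(vs) for vs in hub.values()))   # 20 vs 9
```

Here the injection $f$ is simply the identity on the original vertices; the hub `h` is the one vertex added to $V'$.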

Are there any known results or algorithms related to this problem?

Proving undecidability of HALT_tm by reduction

Sipser, in his book *Introduction to the Theory of Computation*, gives a proof of the undecidability of $HALT_{TM}$. The proof is by contradiction: he assumes that $HALT_{TM}$ is decidable and builds a decider for $A_{TM}$; since $A_{TM}$ has already been proved undecidable by the diagonalization method, a contradiction arises, and thus $HALT_{TM}$ is undecidable. The Turing machine involved is simple and straightforward, so I won’t go into its details.

What confuses me is this sentence:

We prove the undecidability of $ HALT_{TM}$ by a reduction from $ A_{TM}$ to $ HALT_{TM}$

and I would like to know: in which part of the proof does the reduction actually occur?

From what we know of the concept of reduction, reducing $A$ to $B$ means: we have two problems $A$ and $B$; we know how to solve $B$ but are stuck on $A$. If we reduce $A$ to $B$, then solving an instance of $B$ lets us solve an instance of $A$.

Let’s get back to the proof. Sipser says:

We prove the undecidability of $ HALT_{TM}$ by a reduction from $ A_{TM}$ to $ HALT_{TM}$

thus $A = A_{TM}$ and $B = HALT_{TM}$. But we don’t know how to solve $HALT_{TM}$; in fact, that is the very problem at hand. Furthermore, the strategy of the proof is based on contradiction, something that seems completely unrelated to the concept of reduction. So why does Sipser use the term reduction in this proof?
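The shape of Sipser’s construction can be sketched in a toy model. Real Turing machines cannot be tabulated, so the machine table below is a stand-in, and `halts` plays the role of the hypothetically assumed decider for $HALT_{TM}$; all names here are illustrative, not Sipser’s notation. The reduction is the construction of `decide_A_tm` out of `halts`:

```python
# Toy model: a "machine" maps each input to 'accept', 'reject', or 'loop'.
MACHINES = {
    'M1': lambda w: 'accept' if w == 'ab' else 'loop',
    'M2': lambda w: 'reject',
}

def halts(m, w):
    """The hypothetical decider for HALT_TM (the assumed oracle).
    In the toy model it is trivially computable; for real TMs it cannot exist."""
    return MACHINES[m](w) != 'loop'

def simulate(m, w):
    """Run machine m on input w; only safe to call when halts(m, w) is True."""
    return MACHINES[m](w) == 'accept'

def decide_A_tm(m, w):
    """Sipser's construction: a decider for A_TM built from `halts`.
    If m does not halt on w, it certainly does not accept w;
    otherwise simulating m on w is safe and yields the answer."""
    if not halts(m, w):
        return False
    return simulate(m, w)
```

Since $A_{TM}$ is known (by diagonalization) to be undecidable, no real `halts` can exist, and that is where the contradiction lands: the contradiction is the proof’s outer frame, while the reduction is the inner construction of a decider for $A_{TM}$ from a supposed decider for $HALT_{TM}$.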

Is this a computable function? Is the reduction correct?

Let $A$ be a set and $K=\{x:\phi_x(x)\downarrow\}$. Let $c$ be a total computable function such that $\phi_{c(x,y,n)}(z)=\begin{cases}\phi_n(z) & \text{if }\phi_x(y)\downarrow\\ \uparrow &\text{otherwise}\end{cases}$

Suppose $ \forall x,y\exists a.\phi_x(y)\downarrow \Leftrightarrow c(x, y,a)\in A$ .

The question is whether the function $f(x)=a$ such that $x\in K \Leftrightarrow c(x, x, a)\in A$ is total computable.

Hence, can I prove $K\leq_m A$ with $x \mapsto c(x, x, f(x))$ as the reduction function?

Lambda Expression Reduction

I am unable to reduce the following lambda expression using both normal-order (call-by-name) and applicative-order (call-by-value) reduction; I keep getting different answers for the two strategies. This is the lambda expression that has to be reduced with both techniques:

(λfx.f (f x)) (λfx.f (f x)) f x
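As a sanity check, a minimal normal-order $\beta$-reducer can be run over this term. The term has a normal form, so by confluence applicative order, if it terminates (it does here, since all arguments are values), must reach the same result, $f\,(f\,(f\,(f\,x)))$. The term representation and helper names below are my own choices, not from any textbook:

```python
import itertools

# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)
_fresh = itertools.count()

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, name, val):
    """Capture-avoiding substitution t[name := val]."""
    if t[0] == 'var':
        return val if t[1] == name else t
    if t[0] == 'app':
        return ('app', subst(t[1], name, val), subst(t[2], name, val))
    v, body = t[1], t[2]
    if v == name:                       # binder shadows name; stop here
        return t
    if v in free_vars(val):             # rename binder to avoid capture
        nv = v + str(next(_fresh))
        body, v = subst(body, v, ('var', nv)), nv
    return ('lam', v, subst(body, name, val))

def step(t):
    """One leftmost-outermost (normal-order) step, or None if in normal form."""
    if t[0] == 'app':
        fun, arg = t[1], t[2]
        if fun[0] == 'lam':
            return subst(fun[2], fun[1], arg)
        r = step(fun)
        if r is not None:
            return ('app', r, arg)
        r = step(arg)
        return None if r is None else ('app', fun, r)
    if t[0] == 'lam':
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def normalize(t, limit=100):
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    return t

def show(t):
    if t[0] == 'var':
        return t[1]
    if t[0] == 'lam':
        return f"(λ{t[1]}.{show(t[2])})"
    return f"({show(t[1])} {show(t[2])})"

V = lambda n: ('var', n)
def lam(v, b): return ('lam', v, b)
def app(*ts):
    t = ts[0]
    for u in ts[1:]:
        t = ('app', t, u)
    return t

# (λf.λx.f (f x)) (λf.λx.f (f x)) f x  -- Church numeral 2 applied to itself
TWO = lam('f', lam('x', app(V('f'), app(V('f'), V('x')))))
term = app(TWO, TWO, V('f'), V('x'))
print(show(normalize(term)))   # (f (f (f (f x))))
```

If the two strategies appear to give different answers on this term, the discrepancy is usually a substitution mistake (for instance, confusing the bound `f`/`x` of the abstractions with the free `f` and `x` at the end), since the Church–Rosser theorem rules out two distinct normal forms.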

How to prove re-hard or co-re hard by reduction?

My question asks me to show that $FINITE_{TM}$ is r.e.-hard and co-r.e.-hard by reduction. My first idea is to show that $HALT_{TM}$ is reducible to it. So my plan is: to prove $FINITE_{TM}$ is r.e.-hard, reduce $HALT_{TM}$ to $FINITE_{TM}$; and to show $FINITE_{TM}$ is co-r.e.-hard, reduce $\overline{HALT}_{TM}$ to $FINITE_{TM}$. Does this plan sound accurate? Any suggestions?

Standardisation Theorem versus Leftmost Reduction Theorem

According to Chris Hankin in his book (Lambda Calculi: A Guide for Computer Scientists), a reduction sequence $\sigma: M_0 \to^{\Delta_0} M_1 \to^{\Delta_1} M_2 \to^{\Delta_2}\ldots$ is a standard reduction if, for every pair $(\Delta_i, \Delta_{i+1})$, $\Delta_{i+1}$ is not a residual of a redex to the left of $\Delta_i$ relative to the given reduction from $M_i$ to $M_{i+1}$. The Standardization Theorem says that if a term $A$ $\beta$-reduces to $B$, then there is a standard reduction from $A$ to $B$. In the leftmost strategy, on the other hand, the leftmost-outermost redex is always reduced first. The Leftmost Reduction Theorem says that if a term $A$ $\beta$-reduces to $B$, and $B$ is in $\beta$-normal form, then the leftmost strategy reaches $B$. My questions are:

1- I don’t see the difference between the Standardization and Leftmost Reduction theorems; to me, they seem to say the same thing.

2- Why is the Leftmost Reduction Theorem a particular case of the Standardization Theorem?
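A small example (my own, not from Hankin’s book) may make the difference concrete. Let $\Omega = (\lambda w.w\,w)(\lambda w.w\,w)$ and consider

$M = (\lambda x.\lambda y.x)\,z\,\Omega.$

Contracting the $\Omega$ redex over and over gives $M \to M \to M \to \cdots$, and every finite prefix of this sequence is standard: each contracted redex is a residual of $\Omega$ itself, not of the head redex to its left. Yet this sequence never reaches the normal form $z$. The leftmost strategy, by contrast, contracts the head redex first:

$(\lambda x.\lambda y.x)\,z\,\Omega \;\to\; (\lambda y.z)\,\Omega \;\to\; z.$

So the Standardization Theorem only rearranges a *given* reduction $A \twoheadrightarrow B$ into a standard one, for an arbitrary target $B$; it does not by itself tell you how to find a normal form. The Leftmost Reduction Theorem should follow as the special case where $B$ is a normal form: the leftmost-outermost redex can only be destroyed by a redex containing it, and nothing lies outside it to its left, so a standard reduction ending in a normal form cannot skip it, which forces that standard reduction to be the leftmost one.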

D&D3.5e — Damage Reduction — DR/Unholy vs DR/Evil

We’ve got a party with a Celestial Mystic (Book of Exalted Deeds), and I was reading about his level 9 ability, which grants him DR 10/Unholy. Would this be different from DR/evil, as in it would only be breached by unholy weapons, and not attacks from evil outsiders?

I’m asking this because I haven’t seen any monsters or anything that have things like DR/Unholy, Holy, Axiomatic, etc… it’s usually just Good/Evil/Lawful/Chaotic. Has anyone seen any material throughout the books like this? My intuition tells me it’s a typo and should be DR 10/evil, but the errata doesn’t say anything about this.

Thanks!

Damage Reduction — DR/evil vs DR/unholy [closed]

We’ve got a party with a Celestial Mystic (Book of Exalted Deeds), and I was reading about his level 9 ability, which grants him DR 10/Unholy. I got confused because it’ll usually be DR/good or DR/evil.

Would DR/unholy be different from DR/evil, as in it would only be breached by unholy weapons, and not attacks from evil outsiders? Or is this a typo of some sort and it should be DR 10/evil?

Thanks!