Transitive reduction with vertex additions?

The transitive reduction of a (finite) directed graph is a graph with the same vertex set and the same reachability relation and a minimum number of edges. However, what if vertex additions are allowed? In some cases, adding vertices can considerably reduce the number of edges required. For example, a complete bipartite digraph $K_{a,b}$ has $a + b$ vertices and $ab$ edges, but adding a single vertex in the middle yields a digraph with the same reachability relation that has $a + b + 1$ vertices and only $a + b$ edges.

More formally, given a directed graph $G = (V, E)$, the challenge is to find $G' = (V', E')$ and an injective function $f: V \rightarrow V'$ such that $f(b)$ is reachable from $f(a)$ in $G'$ if and only if $b$ is reachable from $a$ in $G$, and such that $|E'|$ is minimized.
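As a sanity check of the $K_{a,b}$ example, here is a small pure-Python sketch (the `hub` vertex name and the adjacency-dict representation are my own illustration) comparing the dense digraph with the hub version and confirming that reachability among the original vertices is unchanged:

```python
from collections import deque

def reachable(adj, src):
    """Vertices reachable from src via a nonempty path (plain BFS)."""
    seen, queue = set(), deque(adj.get(src, ()))
    while queue:
        v = queue.popleft()
        if v not in seen:
            seen.add(v)
            queue.extend(adj.get(v, ()))
    return seen

a, b = 4, 5
left = [f"L{i}" for i in range(a)]
right = [f"R{j}" for j in range(b)]

# Dense version: complete bipartite digraph K_{a,b}, a*b edges.
dense = {u: set(right) for u in left}

# Sparse version: one added 'hub' vertex, only a+b edges.
sparse = {u: {"hub"} for u in left}
sparse["hub"] = set(right)

print(sum(len(vs) for vs in dense.values()))   # 20 edges (a*b)
print(sum(len(vs) for vs in sparse.values()))  # 9 edges (a+b)

# Restricted to the original vertices, reachability is identical.
orig = set(left) | set(right)
same = all((reachable(dense, u) & orig) == (reachable(sparse, u) & orig)
           for u in orig)
print(same)  # True
```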

Are there any known results or algorithms related to this problem?

Like transitive reduction, but removing vertices rather than edges?

Suppose I have a directed graph $G = (V, E)$ (or, equivalently, a relation on the set $V$ as defined by the adjacency matrix) that may contain three vertices $x, y, z$ such that $xy, xz, yz \in E$ — that is to say, the relation restricted to $\{x, y, z\}$ is transitive: there is a triangle. Let us call this situation a "local transitivity". My goal is to obtain all the subgraphs of $G$ induced by cutting middle vertices out of local transitivities until none remain, which I dub "resolution".

There may be multiple solutions. For instance, consider a graph given by this relation:

      a b c d
    a □ ■ ■ □
    b □ □ ■ ■
    c □ □ □ ■
    d □ □ □ □

(It looks like a square with one diagonal.)

There are two ways it can be resolved:

      a b d        a c d
    a □ ■ □      a □ ■ □
    b □ □ ■      c □ □ ■
    d □ □ □      d □ □ □

One way I can compute the resolutions of a graph is by giving a "non-deterministic" function $f : G \rightarrow \{G\}$ that removes any single local transitivity. By repeatedly applying $f$, I will eventually converge to a set of induced subgraphs completely devoid of local transitivities.

One way to define $f$ is by considering all the triples of distinct vertices and checking them for local transitivity. From each matching triple, the middle vertex is extracted, and one of these vertices is cut. But there are about $|V|^3$ such triples.
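A direct, if naive, sketch of this brute-force definition (my own illustration; it rescans all ordered triples at every step, so it is only workable for small graphs):

```python
from itertools import permutations

def middle_vertices(adj):
    """Vertices y with x -> y, x -> z and y -> z for some x, z: the
    'middles' of local transitivities, found by the O(|V|^3) triple scan."""
    return {y for x, y, z in permutations(adj, 3)
            if y in adj[x] and z in adj[x] and z in adj[y]}

def resolutions(adj):
    """Vertex sets of all triangle-free induced subgraphs obtainable by
    repeatedly cutting one middle vertex (the 'resolutions')."""
    mids = middle_vertices(adj)
    if not mids:
        return {frozenset(adj)}
    results = set()
    for v in mids:
        # Induced subgraph with the middle vertex v cut out.
        sub = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
        results |= resolutions(sub)
    return results

# The square-with-one-diagonal example from the question:
g = {'a': {'b', 'c'}, 'b': {'c', 'd'}, 'c': {'d'}, 'd': set()}
print(sorted(sorted(s) for s in resolutions(g)))  # [['a', 'b', 'd'], ['a', 'c', 'd']]
```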

Is there a better way? Is there prior art that I may study?

Efficiently find connected components in an undirected graph using transitive equivalence

I have a set of nodes and a function foo(u,v) that can determine whether two nodes are equal. By "equal" I mean transitive equivalence: if 1==2 and 2==3, then 1==3; and also: if 1==2 and 1!=4, then 2!=4.

Given a set of nodes, I can find all connected components in the graph by passing every possible pair of nodes to the foo(u,v) function and building the needed edges, like this:

    import networkx as nx
    import itertools
    from matplotlib import pyplot as plt

    EQUAL_EDGES = {(1, 2), (1, 3), (4, 5)}


    def foo(u, v):
        # This function is simplified; in reality it does a complex
        # calculation to determine whether the nodes are equal.
        return (u, v) in EQUAL_EDGES


    def main():
        g = nx.Graph()
        g.add_nodes_from(range(1, 5 + 1))
        for u, v in itertools.combinations(g.nodes, 2):
            are_equal = foo(u, v)
            print('{u}{sign}{v}'.format(u=u, v=v, sign='==' if are_equal else '!='))
            if are_equal:
                g.add_edge(u, v)

        conn_comps = nx.connected_components(g)
        nx.draw(g, with_labels=True)
        plt.show()
        return conn_comps


    if __name__ == '__main__':
        main()

The problem with this approach is that it makes many redundant checks that I would like to avoid:

    1==2  # ok
    1==3  # ok
    1!=4  # ok
    1!=5  # ok
    2!=3  # redundant check: if 1==2 and 1==3 then 2==3
    2!=4  # redundant check: if 1!=4 and 1==2 then 2!=4
    2!=5  # redundant check: if 1!=5 and 1==2 then 2!=5
    3!=4  # redundant check: if 1!=4 and 1==3 then 3!=4
    3!=5  # redundant check: if 1!=5 and 1==3 then 3!=5
    4==5  # ok

I want to avoid O(n^2) calls to foo. What is the correct way (or maybe an existing function in some Python library) to efficiently find all connected components using a custom equality function?
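One standard way to cut down the calls when the relation is a true equivalence (a sketch; `components` and this simplified `foo` are illustrations, not library functions) is to compare each new node against a single representative of each class found so far. That needs O(n · #classes) calls rather than O(n²), which is a large saving when there are few classes:

```python
def components(nodes, equal):
    """Group nodes into equivalence classes, calling `equal` only against
    one representative per existing class. This is valid precisely because
    the relation is an equivalence: equal to the representative means equal
    to every member, and unequal to it means unequal to every member."""
    classes = []  # list of lists; classes[i][0] is the representative
    for node in nodes:
        for cls in classes:
            if equal(cls[0], node):
                cls.append(node)
                break
        else:
            classes.append([node])
    return classes

EQUAL_EDGES = {(1, 2), (1, 3), (4, 5)}

def foo(u, v):
    # Hypothetical stand-in for the expensive comparison (made symmetric).
    return (u, v) in EQUAL_EDGES or (v, u) in EQUAL_EDGES

print(components(range(1, 6), foo))  # [[1, 2, 3], [4, 5]]
```

Only 5 calls to foo are made here instead of 10, and none of the "redundant" pairs from the listing above are ever tested.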

A vertex-transitive graph has a perfect/near-perfect matching missing an independent set of vertices

Consider a power of a cycle graph $C_n^k$, represented as a Cayley graph on the group $\mathbb{Z}_n$ with generating set $\{1,2,\ldots,k,n-k,\ldots,n-1\}$. Suppose I remove an independent set of vertices of the form $\{i,i+k+1,\ldots,\lfloor\frac{n}{k+1}\rfloor+i\}$, or a single vertex. Is it then always possible to obtain a perfect/near-perfect matching in the resulting induced subgraph? If not, is it possible when the graph is an even power of a cycle?

I hope so, as we can pair the vertices between any two independent sets of the above form, or between the independent set and the single vertex, to get a maximal matching which is near-perfect (in case the order of the induced subgraph is odd) or perfect (in case the order is even). Any counterexamples? Also, if this is true, can we generalize it to any vertex-transitive graph? That is, does there always exist a (non-singleton) independent set of vertices whose removal leaves an induced subgraph with a perfect/near-perfect matching? Thanks in advance.
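Small cases can at least be probed by brute force. The sketch below (pure Python, exponential backtracking; `cycle_power` and the particular independent set are my own illustration, not a general construction) checks one instance of the question:

```python
def cycle_power(n, k):
    """Adjacency sets of C_n^k: i ~ j iff their circular distance is <= k."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def has_near_perfect_matching(adj):
    """Backtracking search: is there a matching leaving at most one
    vertex uncovered (one 'skip' allowed iff the order is odd)?"""
    def solve(remaining, skips):
        if not remaining:
            return True
        u = min(remaining)
        for v in adj[u] & remaining:
            if solve(remaining - {u, v}, skips):
                return True
        return skips > 0 and solve(remaining - {u}, skips - 1)
    verts = frozenset(adj)
    return solve(verts, len(verts) % 2)

n, k = 10, 2
adj = cycle_power(n, k)
removed = {0, 3, 6}  # pairwise circular distance > k, hence independent
sub = {u: adj[u] - removed for u in adj if u not in removed}
print(has_near_perfect_matching(sub))  # True for this instance
```

One positive instance is of course no proof; the checker is only meant for hunting counterexamples among small $n$ and $k$.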

Transitive Closure vs Reachability in Graphs

I am facing the most curious situation with [my current knowledge of] transitive closure algorithms. Specifically: is what follows not an algorithm for finding the transitive closure of a graph $G(V, E)$, and is its time complexity not $O(|V|+|E|)$?

Use Kosaraju's algorithm to compute the SCCs (strongly connected components) — O(|V| + |E|)

Create a DAG from the SCCs as (super)nodes — O(|V| + |E|)

Within a single supernode, all nodes have the same reachability: the union of the nodes in the containing supernode plus the nodes of the supernodes reachable from it in the DAG — O(|V| + |E|) to compute reachability in the DAG, and O(|V| * |V|) in total to write out the reachability sets of all nodes

These reachability sets do constitute the whole transitive closure of the graph, right?
But there aren’t any O(|V| + |E|) transitive closure algorithms around.
What am I missing?
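The three steps can be sketched end to end in pure Python (my own naming; Kosaraju's two passes done iteratively). The comment in the final step marks where the accounting breaks: the reachability sets themselves can have total size Θ(|V|²), so merely writing them out already exceeds O(|V| + |E|):

```python
def kosaraju_scc(adj):
    """Kosaraju's algorithm: map each vertex to an SCC id (iterative DFS)."""
    order, seen = [], set()
    for s in adj:                       # pass 1: record finishing order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    radj = {v: set() for v in adj}      # reversed graph
    for u in adj:
        for w in adj[u]:
            radj[w].add(u)
    comp, c = {}, 0                     # pass 2: reverse finishing order
    for s in reversed(order):
        if s in comp:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if w not in comp:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return comp

def transitive_closure(adj):
    """reach(v) = members of v's supernode plus members of all supernodes
    reachable from it in the condensation DAG (as in the question; note
    this makes every vertex reach itself)."""
    comp = kosaraju_scc(adj)
    members, dag = {}, {}
    for v, c in comp.items():
        members.setdefault(c, set()).add(v)
        dag.setdefault(c, set())
    for u in adj:
        for w in adj[u]:
            if comp[u] != comp[w]:
                dag[comp[u]].add(comp[w])
    creach = {}
    def reach(c):
        if c not in creach:
            creach[c] = set(members[c])
            for d in dag[c]:
                # These unions are the expensive part: the sets being
                # built can have total size Theta(|V|^2).
                creach[c] |= reach(d)
        return creach[c]
    return {v: reach(comp[v]) for v in adj}

g = {1: {2}, 2: {3, 4}, 3: {1}, 4: set()}  # 1,2,3 form one SCC
print(sorted(transitive_closure(g)[1]))    # [1, 2, 3, 4]
```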

Minimum transitive dominating subtournament

For any positive integer $k$, does there exist a tournament such that the smallest dominating set which also forms a transitive subtournament has size exactly $k$?

A tournament that does not work for $k>2$ is one where the vertices lie on a cycle and each vertex has an edge to the $(n-1)/2$ vertices following it clockwise — in this case, taking two opposite vertices already gives a dominating set which is also transitive, meaning $k=2$.
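That rotational example is easy to verify by brute force for small $n$; a sketch (pure Python, my own helper names, feasible only for small tournaments):

```python
from itertools import combinations, permutations

def rotational_tournament(n):
    """n odd: vertex i beats the (n-1)/2 vertices following it clockwise."""
    return {i: {(i + d) % n for d in range(1, (n - 1) // 2 + 1)}
            for i in range(n)}

def is_dominating(beats, S, n):
    covered = set(S)
    for v in S:
        covered |= beats[v]
    return len(covered) == n

def is_transitive_sub(beats, S):
    # A subtournament is transitive iff it contains no directed 3-cycle.
    return not any(b in beats[a] and c in beats[b] and a in beats[c]
                   for a, b, c in permutations(S, 3))

def min_transitive_dominating(n):
    """Size of the smallest dominating set inducing a transitive
    subtournament, by exhaustive search over subsets."""
    beats = rotational_tournament(n)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_dominating(beats, S, n) and is_transitive_sub(beats, S):
                return size

print(min_transitive_dominating(7))  # 2, matching the k=2 observation
```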

Prove that the transitive closure of a relation is transitive without using recursion

In Kunen's book Set Theory (from 2013) the transitive closure of a relation $R$ on $A$ is defined as $$R^* = \{ (x,y) \in A^2 : \text{there is an $R$-path from $x$ to $y$} \}$$ where an $R$-path from $x$ to $y$ simply is a function $s : n+1 \to A$ for some $n \in \omega$ such that $s(0) = x$ and $s(n) = y$ and $s(i) \,R\, s(i+1)$ for all $i < n$. This is Definition I.9.4 in the book.

Now it is claimed in Lemma I.9.5 that $R^*$ is transitive, the proof being that this is "easily seen by combining paths".

However, I cannot figure out how to combine two paths without appealing to the validity of recursive definitions for $ \omega$ . The way it is presented in the book somehow suggests that this should be possible. Indeed, the goal of the chapter really seems to be to prove the recursion principle for well-founded sets without having to prove recursion for $ \omega$ first.

Is there a way to combine paths without having to use recursion that I am overlooking?

I would also appreciate it if someone could point me to another proof of the principle of well-founded recursion in which recursion on $\omega$ is not assumed to be already known.
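For concreteness, the combination the lemma gestures at can at least be written down as a single case-defined function rather than built up step by step. A sketch, given an $R$-path $s_1 : n+1 \to A$ from $x$ to $y$ and an $R$-path $s_2 : m+1 \to A$ from $y$ to $z$ (whether the addition on $\omega$ used here is itself available without recursion is, of course, exactly the delicate point):

```latex
t : (n + m) + 1 \to A, \qquad
t(i) =
\begin{cases}
  s_1(i)     & \text{if } i \le n, \\
  s_2(i - n) & \text{if } n \le i \le n + m.
\end{cases}
```

The two cases agree at $i = n$ because $s_1(n) = y = s_2(0)$, so $t$ is well defined; as a set of ordered pairs it exists by Comprehension, with no recursion appearing in its definition.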

Need help with a mathematical induction problem on a transitive relation.

Problem as follows:

Let $R$ be a transitive relation. For $n \geqslant 1$, let $a R^n b$ mean that there is a sequence of tuples

$$\langle a_0, a_1 \rangle, \langle a_1, a_2 \rangle, \ldots, \langle a_{n-1}, a_n \rangle$$

from $R$ such that $a_0 = a$ and $a_n = b$. Prove the following by mathematical induction: "if $a R^n b$, then $a R b$" for all natural numbers $n \geqslant 1$.

Now, I understand why this assertion is true; it's just proving it with mathematical induction that's giving me trouble. (Check $n = 1$, assume it for $n = k$, prove it for $n = k+1$, etc.)
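The skeleton hinted at in the parenthetical can be sketched as follows (using the definition of $a R^n b$ above; the inductive step is where transitivity of $R$ enters):

```latex
\textbf{Base } (n = 1)\colon\ a R^1 b \text{ means } \langle a_0, a_1 \rangle = \langle a, b \rangle \in R,
\text{ hence } a R b.

\textbf{Step}\colon\ \text{suppose } x R^k y \Rightarrow x R y \text{ for all } x, y.
\text{ If } a R^{k+1} b, \text{ the first } k \text{ tuples witness } a R^k a_k,
\text{ so } a R a_k \text{ by the hypothesis; the last tuple gives } a_k \, R \, b,
\text{ and } a R b \text{ follows by transitivity of } R.
```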