Why is a fault-tolerance threshold determined for Byzantine agreement in an “Asynchronous” network (where it cannot tolerate even one faulty node)?

In the following answer (LINK: https://bitcoin.stackexchange.com/a/58908/41513), it has been shown that for asynchronous Byzantine agreement:

“we cannot tolerate 1/3 or more of the nodes being dishonest or we lose either safety or liveness.”

For this proof, the following conditions/requirements have been considered:

  1. Our system is asynchronous.
  2. Some participants may be malicious.
  3. We want safety.
  4. We want liveness.

A fundamental question is:

Considering the well-known paper titled “Impossibility of Distributed Consensus with One Faulty Process” (LINK: https://apps.dtic.mil/dtic/tr/fulltext/u2/a132503.pdf),

which shows that

no completely asynchronous consensus protocol can tolerate even a single unannounced process death,

can we still assume that the network is asynchronous? In that case the network cannot tolerate even one faulty node.
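
For context, my understanding of where the quoted 1/3 bound comes from is the standard quorum-intersection argument; a minimal sketch, with $n$ nodes of which $f$ may be Byzantine:

$q = n - f$ (for liveness, a node can only ever wait for $n - f$ replies)

$|Q_1 \cap Q_2| \ge 2q - n = n - 2f$ (any two quorums of size $q$ intersect in at least this many nodes)

$n - 2f \ge f + 1 \implies n \ge 3f + 1$ (for safety, the intersection must contain at least one honest node)

So strictly fewer than one third of the nodes may be faulty, which is the threshold quoted above.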

Is it possible to keep weights of left and right subtree at each node of BST that has duplicate values?


I must be able to delete a node completely (irrespective of how many times it is present).

Currently, in my code, I am keeping a count variable in each node that records the number of times its key is present in the tree.

During insertion, I can increase the left or right subtree weight at each node according to whether my value is smaller or larger. But how do I adjust the weights when I delete a node (because I may delete a node with count > 1)?
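
Here is a minimal sketch of what I have in mind (Python; the names count, left_weight and right_weight are my own, with the weights holding the total number of occurrences stored in each subtree). The idea is to look up the key's count first, and then subtract that full count from the weights along the search path while doing an ordinary BST delete:

    class Node:
        def __init__(self, key):
            self.key = key
            self.count = 1          # occurrences of this key
            self.left = None
            self.right = None
            self.left_weight = 0    # total occurrences stored in the left subtree
            self.right_weight = 0   # total occurrences stored in the right subtree

    def insert(root, key):
        if root is None:
            return Node(key)
        if key == root.key:
            root.count += 1
        elif key < root.key:
            root.left_weight += 1
            root.left = insert(root.left, key)
        else:
            root.right_weight += 1
            root.right = insert(root.right, key)
        return root

    def find_count(root, key):
        """How many occurrences of key are stored (0 if absent)."""
        while root is not None:
            if key == root.key:
                return root.count
            root = root.left if key < root.key else root.right
        return 0

    def delete_all(root, key):
        """Remove every occurrence of key, keeping all weights consistent."""
        c = find_count(root, key)
        return root if c == 0 else _delete(root, key, c)

    def _delete(node, key, c):
        if key < node.key:
            node.left_weight -= c             # c occurrences leave the left subtree
            node.left = _delete(node.left, key, c)
            return node
        if key > node.key:
            node.right_weight -= c            # c occurrences leave the right subtree
            node.right = _delete(node.right, key, c)
            return node
        # key == node.key: remove this node entirely
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # two children: pull up the in-order successor (leftmost node of the right subtree)
        succ = node.right
        while succ.left is not None:
            succ = succ.left
        node.right_weight -= succ.count       # the successor's occurrences leave the right subtree
        node.right = _delete(node.right, succ.key, succ.count)
        node.key, node.count = succ.key, succ.count
        return node

The point is that the count looked up once at the start is the single amount by which every weight on the search path has to shrink.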

Secure a Jenkins node to only run approved scripts?

We have a series of Jenkins nodes that are used to deploy changes onto our SQL Servers, which works fine as long as everyone behaves and can be trusted.

The worry is that a rogue developer or hacker could simply add something like this into a Jenkinsfile and trash our data or performance:

    node('production') {
        stage('deploy_straight_to_prod') {
            …<do something bad here>
        }
    }

How do we protect against this? Ideally, only scripts that have been actively approved by a DBA should be allowed.

Determine whether there exists a path in a directed acyclic graph that reaches all nodes without revisiting a node

For this I came up with a DFS recursion.

Do a DFS from any node and keep doing it until all nodes are exhausted, i.e. pick the next unvisited node once you can't keep recursing.

The element with the highest post number, i.e. the last element you finish visiting, should be the first element in your topological ordering.

Now define another DFS recursion, called DFS_find, that can be executed on a node:

    DFS_find(node):
        if node has no outgoing neighbors:
            return 1
        otherwise:
            return 1 + the maximum of DFS_find(neighbor) over all neighbors of node

Execute DFS_find on the first node in your topological ordering. If it returns a number equal to the number of vertices, then a directed path that visits every node exactly once exists. Otherwise it does not.

How can I prove whether or not this algorithm is correct?

I think this may be a little less time-efficient than the classical way of just doing a topological sort and then checking whether each consecutive pair has an edge between them.
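
For reference, here is a small sketch of that classical check (plain Python; the graph is assumed to be a dict mapping each node to a list of its successors):

    def has_hamiltonian_path(graph, nodes):
        """True iff the DAG has a directed path visiting every node exactly once."""
        visited, order = set(), []

        def dfs(u):                         # post-order DFS to get finishing times
            visited.add(u)
            for v in graph.get(u, ()):
                if v not in visited:
                    dfs(v)
            order.append(u)                 # u is appended when it is finished

        for u in nodes:
            if u not in visited:
                dfs(u)
        topo = order[::-1]                  # reverse finishing order = topological order

        # in a DAG, such a path exists iff consecutive nodes of the
        # topological order are joined by an edge
        return all(topo[i + 1] in graph.get(topo[i], ())
                   for i in range(len(topo) - 1))

    # example: edges 1 -> 2, 2 -> 3, 1 -> 3; the path 1 -> 2 -> 3 covers all nodes
    print(has_hamiltonian_path({1: [2, 3], 2: [3]}, [1, 2, 3]))   # True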

Node depth in randomly built binary search tree

It can be proved that randomly built binary search trees of size $n$ are of depth $O(\log n)$ and it is clear that level $k$ has at most $2^k$ nodes (the root’s level is 0).

I have an algorithm that traverses every path in the tree (beginning at the root) but stops after traversing $k$ nodes ($k$ is a configurable parameter, independent of the size $n$ of the tree).

For any tree $T$ with $\mathrm{depth}(T) > k$, the algorithm will miss some nodes of the tree. I would like to bound the probability of my algorithm missing a large number of nodes.

Formalizing it: let $T$ be a randomly built binary search tree of $n$ nodes. I would like to calculate the probability for a node to have depth larger than $k$, as a function of $n$ and $k$.
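
In case it helps to make the question concrete, here is a small Monte Carlo sketch (Python, all names my own) that estimates exactly this quantity, i.e. the fraction of nodes deeper than $k$ in a randomly built BST of $n$ nodes:

    import random

    def node_depths(n):
        """Build a BST from a random permutation of 1..n; return all node depths (root = 0)."""
        keys = list(range(1, n + 1))
        random.shuffle(keys)
        root = keys[0]
        left, right, depth = {}, {}, {root: 0}
        for key in keys[1:]:
            cur = root
            while True:                                  # standard BST insertion walk
                child = left if key < cur else right
                if cur in child:
                    cur = child[cur]
                else:
                    child[cur] = key
                    depth[key] = depth[cur] + 1
                    break
        return list(depth.values())

    def tail_probability(n, k, trials=200):
        """Monte Carlo estimate of P(a node has depth > k) in a random BST of size n."""
        deeper = total = 0
        for _ in range(trials):
            depths = node_depths(n)
            deeper += sum(d > k for d in depths)
            total += len(depths)
        return deeper / total

    print(tail_probability(n=1000, k=20))   # estimated fraction of nodes deeper than k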

Minimum number of edges to remove to disconnect two node sets $A$ and $B$ in a directed graph

We have a directed graph $G$ (not necessarily a DAG) and two disjoint sets of vertices $A$, $B$.
I need to design an algorithm returning the minimum number of edges that need to be removed, such that there will be no path from any node in $A$ to any node in $B$, and vice versa.

I had the idea of using max-flow min-cut to find the minimum number of edges that need to be removed so that there won’t be a path from $A$ to $B$, and then using the algorithm again on $B$ (so there won’t be a path to $A$).
The problem is that the sum of these two minimum numbers of edges isn’t necessarily the “global” minimum.

Does there even exist such an algorithm running in polynomial time?
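
To make the one-direction subproblem concrete, here is a sketch of the max-flow formulation I had in mind (it assumes the networkx library; every original edge gets capacity 1, while the edges from a super source to $A$ and from $B$ to a super sink are left without a capacity attribute, i.e. infinite):

    import networkx as nx

    def min_edges_to_cut_one_direction(G, A, B):
        """Minimum number of edges of the directed graph G whose removal
        leaves no path from any node of A to any node of B (A -> B only)."""
        H = nx.DiGraph()
        for u, v in G.edges():
            H.add_edge(u, v, capacity=1)             # removing an original edge costs 1
        src, dst = "_super_source", "_super_sink"    # assumed not to clash with G's nodes
        for a in A:
            H.add_edge(src, a)                       # no capacity attribute => infinite capacity
        for b in B:
            H.add_edge(b, dst)
        return nx.maximum_flow_value(H, src, dst)    # max-flow value = min edge cut

    # example: A = {0}, B = {3}, two edge-disjoint paths 0->1->3 and 0->2->3
    G = nx.DiGraph([(0, 1), (1, 3), (0, 2), (2, 3)])
    print(min_edges_to_cut_one_direction(G, {0}, {3}))   # 2

This only handles the $A \to B$ direction; my question is precisely whether the two directions can be combined in polynomial time without losing optimality.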

Decision tree: how to decide the next node?

data set

I have to build a decision tree for the class attribute “Klasse”.

How do I do it?

I know that I have to decide based on the maximum information gain.

So first off I’ve calculated the entropy of “Klasse”:

That is $E(\text{Klasse}) = -\left(\frac{3}{11}\log\frac{3}{11} + \frac{3}{11}\log\frac{3}{11} + \frac{5}{11}\log\frac{5}{11}\right) \approx 1.067$ (using the natural logarithm).

So how do I proceed from that?

I now need to find the first decision node, yes?

And how do I proceed once I’ve found a decision node?

Thanks
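
For what it's worth, here is a sketch of how I would compute the information gain of one candidate attribute (Python; the attribute "Wetter" and the example rows are made up, since I have not reproduced the real data set here):

    from math import log
    from collections import Counter

    def entropy(labels):
        """Entropy of a list of class labels, using the natural logarithm
        (the same convention that gives E(Klasse) = 1.067 above)."""
        total = len(labels)
        return -sum((c / total) * log(c / total) for c in Counter(labels).values())

    def information_gain(rows, attribute, target="Klasse"):
        """E(target) minus the weighted entropy after splitting rows on attribute."""
        base = entropy([row[target] for row in rows])
        remainder = 0.0
        for value in set(row[attribute] for row in rows):
            subset = [row[target] for row in rows if row[attribute] == value]
            remainder += len(subset) / len(rows) * entropy(subset)
        return base - remainder

    # made-up rows; the real table has 11 rows with classes split 3/3/5
    rows = [
        {"Wetter": "sonnig", "Klasse": "A"},
        {"Wetter": "sonnig", "Klasse": "B"},
        {"Wetter": "regen",  "Klasse": "A"},
        {"Wetter": "regen",  "Klasse": "A"},
    ]
    # the first decision node is the attribute with the largest information gain
    print(information_gain(rows, "Wetter"))

My understanding is that one then repeats the same computation inside each branch, on the subset of rows that reaches it, to pick the next decision node.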