Optimal TSP Path with Branch and Bound

[Figure: a weighted graph (including an edge from $A$ to $B$ of length $1$) alongside its branch-and-bound search tree, with a lower bound annotated at each tree node; image not reproduced]

I just wanted a bit of clarification on the above picture. I understand the general idea of building out a tree in DFS order and pruning a branch once its bound exceeds the best value found so far. For example, when the value of a path becomes $14$ or $\infty$, the algorithm stops because a path of value $11$ was already found. But I am quite confused about where these numbers (the lower bounds on the cost) are coming from. For example, the edge from vertex $A$ to vertex $B$ has length $1$, but in the tree, the node for the path from $A$ to $B$ has a lower bound of $10$.
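Here is a minimal sketch of the pruning rule as I understand it (my own illustration, using the weakest possible bound; the picture clearly uses a tighter `lower_bound`, and where its numbers come from is exactly my question):

```python
# Minimal branch-and-bound skeleton for TSP in DFS order, assuming
# non-negative edge weights. The bounds in the picture come from some
# lower_bound() like the one below, but almost certainly a tighter one.

def lower_bound(path, cost, graph):
    # Weakest valid choice: the cost already paid. Any completion of
    # `path` costs at least this much (weights are non-negative).
    return cost

def branch_and_bound(graph, start=0):
    n = len(graph)
    best = float("inf")

    def dfs(path, cost):
        nonlocal best
        if lower_bound(path, cost, graph) >= best:
            return  # prune: this branch cannot beat the best tour found
        if len(path) == n:
            best = min(best, cost + graph[path[-1]][start])  # close the tour
            return
        for v in range(n):
            if v not in path:
                dfs(path + [v], cost + graph[path[-1]][v])

    dfs([start], 0)
    return best

# Tiny example (symmetric distance matrix):
g = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 6],
     [3, 5, 6, 0]]
print(branch_and_bound(g))  # prints 12, the optimal tour cost
```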

So, I would greatly appreciate it if anyone could let me know where the numbers in the branch-and-bound search tree come from!

Lower bound on number of (different) circuits of given size?

For circuits with $n$ input bits, we know that, for any function $s$, there are at most $O(s(n)^{s(n)}) = O(2^{s(n) \log s(n)})$ circuits of size at most $s(n)$ (where the description format is fixed beforehand).
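For reference, my understanding of where this upper bound comes from (a sketch of the standard counting argument, assuming fan-in-two gates and $s = s(n) \ge n$): each of the $s$ gates picks one of $c = O(1)$ operations and two predecessors among the $n$ inputs and the other gates, so

$$\#\{\text{circuits of size at most } s\} \;\le\; \bigl(c\,(n+s)^2\bigr)^{s} \;=\; 2^{O(s \log s)}.$$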

Say two circuits $C_1$ and $C_2$ are different if the function they compute is different, that is, there is an $n$-bit string $x$ such that $C_1(x) \neq C_2(x)$. The $O(s(n)^{s(n)})$ bound above is an upper bound on the number of circuits of a given size. Is there a known lower bound on the number of different circuits with size at most $s(n)$?

Clearly such a bound must be strictly smaller than the $O(s(n)^{s(n)})$ bound, since there are pairs of circuits with different structures (and even different numbers of gates) that nevertheless compute the same function (i.e., they are not “different” as defined above). But how much smaller can it be?

Find the upper bound of the recurrence T(n) = T(n - 4) + n where n is odd

I am trying to solve this recurrence assuming n is odd: $T(n) = T(n-4) + \Theta(n)$

What I did so far was:

First, $T(n-4) = T(n-8) + (n-4)$, thus we get $T(n) = T(n-8) + (n-4) + n$

Next, $T(n-8) = T(n-12) + (n-8)$, thus we get $T(n) = T(n-12) + (n-8) + (n-4) + n$

Next, $T(n-12) = T(n-16) + (n-12)$, thus we get $T(n) = T(n-16) + (n-12) + (n-8) + (n-4) + n$

So now, seeing where this is going, I wrote the general form:

$T(n) = T(n-4k) + [n - 4(k-1)] + [n - 4(k-2)] + \dots + (n-4) + n$

As mentioned, $n$ is odd, so I write it as $n = 2k+1$, which gives $k = \frac{n-1}{2}$. Plugging that in, I get:

$T(n) = T(-n+2) + (-n+6) + (-n+10) + \dots + (n-4) + n$

This is where I am stuck. I suspect that something around the $(n-4) + n$ terms must have gone wrong, but I am not too sure, so I decided to ask you guys.
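For what it's worth, here is a quick numerical sanity check of the unrolling (a sketch; it assumes constant base cases $T(1) = T(3) = 1$, which the problem statement doesn't specify):

```python
# Numerically unroll T(n) = T(n - 4) + n for odd n, assuming the
# (unstated) base cases T(1) = T(3) = 1, to see how T grows.

def T(n):
    total = 0
    while n > 3:          # stop at the assumed base cases
        total += n
        n -= 4
    return total + 1      # T(1) = T(3) = 1 by assumption

for n in [5, 7, 9, 101, 1001]:
    print(n, T(n), T(n) / n**2)   # the ratio settles near 1/8
```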

Thank you so much!

Finding bound function of loop invariant

In Programming in the 1990s, by Edward Cohen, the author gives an example of a bound function.

For example, if we have $B \equiv 10 - n$ with $n = 0$, say, then $B$ will eventually be falsified if $10 - n$ is decreased. Thus we would choose $t = 10 - n$ as our bound function; decreasing $t$ will eventually falsify $B$.
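My reading of it, as a sketch (the loop and guard are my own illustration, not the book's exact example):

```python
# Illustration (mine, not Cohen's): a loop whose guard is n != 10,
# with bound function t = 10 - n.

n = 0
while n != 10:        # guard B: still true while the loop runs
    t = 10 - n        # bound function: strictly decreases every iteration
    assert t > 0      # t stays positive while B holds
    n += 1            # the loop body makes progress, decreasing t
# When t reaches 0, n == 10, so the guard B is falsified: termination.
```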

How did he find such a bound function? Is it a function of some variable, like this: $t(n) = 10 - n$?

Is there a reason why JWT cannot be used as “connection bound context variable” for TLS?

Apart from the fact that TLS doesn’t have “connection context” variables, is there any reason why this couldn’t be technically built into the standard for something like JWT?

Why it is needed

When a browser sends multiple requests to the same API server, it resends the JWT in the header multiple times. This means additional data overhead, and each time, the server needs to verify the JWT.

I would like to see the TLS standard include “connection bound context variables” so that a JWT can be sent once and used across multiple HTTP requests over that TLS connection.

How it could work

1) The client connects to the server, and a TLS encrypted session is established.

2) The client sends the JWT to the server side; there are two options for this:

A) The client can send connection variables directly on the TLS layer. The client sets the variable on the client side, and TLS sends the data to the server side. The variable name from the client is always prefixed with “CLIENT-“.

B) The client sends an HTTP request over the TLS session to a specific endpoint dedicated to setting a TLS variable, /api/setJWT. The client sends the JWT in a POST body. The server reads the JWT, verifies it, and then sets it on the TLS connection context.

Option B would work best, because client software (browsers) would be harder to change.

3) On the server side, the encapsulating TLS object's ITLSConnectionContext interface is made available to HTTP request handlers. ITLSConnectionContext has three functions: string[] ListVariables(), string GetVariable(string name), and void SetVariable(string name, string value).

Assuming option 2B is used, when GetVariable("JWT") succeeds, the returned JWT can be assumed to be valid, since the server verified it before storing it.
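A sketch of what that hypothetical server-side interface might look like (the names ITLSConnectionContext, ListVariables, GetVariable, and SetVariable come from the proposal above; nothing like this exists in TLS today):

```python
# Hypothetical interface from the proposal above; NOT a real TLS API.
from typing import Optional

class TLSConnectionContext:
    """Per-TLS-connection variable store, as proposed."""

    def __init__(self) -> None:
        self._vars: dict[str, str] = {}

    def list_variables(self) -> list[str]:
        return list(self._vars)

    def get_variable(self, name: str) -> Optional[str]:
        return self._vars.get(name)

    def set_variable(self, name: str, value: str) -> None:
        self._vars[name] = value

# In an HTTP handler (pseudocode): after the /api/setJWT POST verifies
# the token once, later requests on the same connection just do:
#     jwt = request.tls_context.get_variable("JWT")
```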

How it could work without changing TLS standard

If a particular TLS connection between the server and the client has a secure identity, then the application layer can link that identity to a simple dictionary lookup. Perhaps this identity could be a GUID (combined with the client-side IP-address/port tuple). The GUID approach would mean each language framework could implement the lookup natively, which makes other benefits possible: native in-memory objects could also be linked to the connection, not just platform-independent strings.
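A minimal sketch of that workaround (the point where the connection ID is minted is an assumption; each server framework exposes its own hooks):

```python
# Application-layer workaround: key per-connection state on a secure
# connection identity instead of changing TLS itself.

import uuid

connection_state: dict[str, dict[str, object]] = {}

def on_new_connection() -> str:
    conn_id = str(uuid.uuid4())          # GUID minted at handshake time
    connection_state[conn_id] = {}
    return conn_id

def set_jwt(conn_id: str, verified_jwt: str) -> None:
    # Called once, after the /api/setJWT handler verifies the token.
    connection_state[conn_id]["JWT"] = verified_jwt

def get_jwt(conn_id: str):
    # Later requests on the same connection skip re-verification.
    return connection_state[conn_id].get("JWT")
```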

Other benefits

Furthermore, the server might set variables on the context for other purposes, for example:

1) A database PK for the person behind the JWT (PersonID)

2) A particular stream (TCP connection) to another resource (it might be a stream interface)

3) A file lock on a user-related logging file

The web would be better.

But am I wrong? Am I missing something that would make this unworkable or insecure?

Why is $O(|V| + |E|)$ the bound on DFS and BFS instead of just $O(|E|)$?

In one sense I understand why the bound on BFS and DFS is $O(|V| + |E|)$. Every vertex gets considered, hence the $O(|V|)$, and over the course of considering all adjacent vertices, we end up examining $2|E| = O(|E|)$ adjacency-list entries in total, because $\sum_{v \in V} d_v = 2|E|$. What I don't understand is why we even bother specifying $O(|V| + |E|)$ in the first place, because it seems to me that the tightest bound you can really have in any situation is $O(|E|)$. That is, it seems to me that the $O(|V|)$ doesn't help you at all. To see why, consider two cases:

  • The graph is connected. Then of course $|E| \ge |V| - 1$.
  • The graph isn't connected. Then you are “stuck” in the connected component you start in. Once again, inside the connected component $A$ you start in, you have $|E_A| \ge |V_A| - 1$. Maybe there are more edges and vertices elsewhere. But once again the $O(|V|)$ seems redundant.

Another way I think about it: Adding a vertex alone doesn’t create any more work since the vertex will just be floating in space. Adding edges can create extra work.
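For reference, here is the standard full-graph traversal wrapper I have in mind; the outer loop over vertices is where the $O(|V|)$ term in the textbook formulation comes from:

```python
# Standard full-graph DFS wrapper (as in CLRS): the outer loop touches
# every vertex, even isolated ones with no incident edges.

def dfs_all(adj):
    """adj: dict mapping each vertex to a list of its neighbours."""
    visited = set()

    def dfs(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                dfs(v)

    for u in adj:          # Theta(|V|) even if the graph has no edges
        if u not in visited:
            dfs(u)
    return visited

# Example: 5 isolated vertices, 0 edges; the loop still does 5 steps.
print(dfs_all({i: [] for i in range(5)}))
```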

So why/when is the $ O(|V|)$ useful?

Examples of Analysis of Branch and Bound Method

I am solving a graph problem, which can be formulated as an integer programme. Based on computer experiments, it seems that the branch-and-bound method works well. I would like to analyse the running time, and I wonder whether there are other problems where the branch-and-bound method was used and theoretical bounds on its running time have been established.

On another note, if anyone knows examples of problems where the range of possible values that a variable in a linear programme can take has been analysed, I'd be interested in that too.

Upper bound on the average number of overlaps for an interval within a set of intervals

Let $\mathcal{I}$ be a set of intervals with cardinality $L$, where each interval $I_i \in \mathcal{I}$ is of the form $[a_i, b_i]$, and the $a_i, b_i$ are pairwise distinct non-negative integers bounded by a constant $C$, i.e. $0 \leq a_i < b_i \leq C$. We say a pair of intervals $I_i, I_j \in \mathcal{I}$ overlap if the length of their overlap is $> 0$.

Define a function $F(i)$ which computes the number of intervals in $\mathcal{I} \setminus \{I_i\}$ that interval $I_i$ overlaps with: \begin{equation} F(i) = \sum_{j=1, j \neq i}^{L} \mathrm{Overlap}(I_i, I_j) \end{equation} where $\mathrm{Overlap}(I_i, I_j)$ is an indicator function which returns $1$ if $I_i, I_j$ overlap, else it returns $0$.

The average number of overlaps for the intervals in $\mathcal{I}$, denoted by $Avg(\mathcal{I})$, is given by $Avg(\mathcal{I}) = \dfrac{\sum_{i=1}^{L} F(i)}{L}$.
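To make the definitions concrete, here is a brute-force sketch (the interval endpoints are chosen arbitrarily for illustration):

```python
# Brute-force computation of F(i) and Avg(I) from the definitions above.

def overlaps(p, q):
    # Closed intervals [a, b]; they overlap if the overlap length is > 0.
    return min(p[1], q[1]) - max(p[0], q[0]) > 0

def avg_overlaps(intervals):
    L = len(intervals)
    F = [sum(overlaps(I, J) for j, J in enumerate(intervals) if j != i)
         for i, I in enumerate(intervals)]
    return sum(F) / L

# Arbitrary example: L = 4 intervals with distinct endpoints, C = 10.
print(avg_overlaps([(0, 3), (2, 5), (4, 9), (6, 10)]))  # prints 1.5
```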

The question is: suppose we are allowed to choose the intervals in the set $\mathcal{I}$ with the following additional conditions:

  1. For any $t \in [0, C]$, we have at most $M$ (with $M < L$) intervals in $\mathcal{I}$ that contain $t$. Stated differently, at most $M$ intervals overlap at any point in time.
  2. Any interval in $\mathcal{I}$ overlaps with at most $K$ other intervals, where $M < K < L$.

then, what is an upper bound on $Avg(\mathcal{I})$ for any choice of the intervals in $\mathcal{I}$ satisfying conditions 1 and 2?

In case you are wondering, this problem interests me because I want to study the running time of a scheduling algorithm.

I am unable to come up with a non-trivial upper bound for $Avg(\mathcal{I})$ (condition 2 immediately gives the trivial bound $Avg(\mathcal{I}) \leq K$, since each $F(i) \leq K$) and would be interested to know whether the problem I stated has been studied. I am also open to ideas on how one might obtain a tight upper bound for $Avg(\mathcal{I})$.

Find expectation with Chernoff bound

We have a group of employees, and their company will assign a prize to as many employees as possible by finding the ones probably better than the rest. The company assigned the same $2$ tasks to every employee and scored their results with $2$ values $x, y$, both in $[0, 1]$. An employee is selected as one of the best if there is no other employee with a better score in both tasks.

Knowing that both scores are uniformly distributed in $[0, 1]$, how can I prove that, with high probability, the number of employees receiving the prize is close to $\log n$, where $n$ is the number of employees?

I need to use a Chernoff bound to bound the probability that the number of winning employees is higher than $\log n$.
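For intuition, here is a quick simulation sketch of the selection rule (my own; it counts undominated employees and compares the average count with $\ln n$, ignoring ties, which have probability $0$ for continuous scores):

```python
# Simulate: count employees not strictly beaten in BOTH tasks by anyone
# else, and compare the count with ln(n). Sketch for intuition only.

import math
import random

def count_winners(scores):
    # Sort by x descending; a point is then undominated iff its y is a
    # new maximum among the points with larger x seen so far.
    wins, best_y = 0, -1.0
    for x, y in sorted(scores, key=lambda s: s[0], reverse=True):
        if y > best_y:
            wins += 1
            best_y = y
    return wins

n = 100_000
counts = [count_winners([(random.random(), random.random()) for _ in range(n)])
          for _ in range(20)]
print(sum(counts) / len(counts), math.log(n))  # both land around ln(n)
```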