What is the exact time complexity of randomized Kuhn’s algorithm?

Please, read the whole question before answering, the exact details of the implementation are important.

Suppose that you want to find a largest-cardinality matching in a bipartite graph with $ V = L + R$ vertices ($ L$ is the number of vertices in the left-hand side and $ R$ is the number of vertices in the right-hand side) and $ E$ edges. You may assume that the graph is connected, therefore $ E \geqslant V - 1$ .

Vertices in the left-hand side are numbered with integers from range $ [0, L)$ . Similarly, vertices in the right-hand side are numbered with integers from range $ [0, R)$ . Then, the classic implementation of Kuhn’s bipartite matching algorithm looks like this:

```
bool dfs_Kuhn (v, neigh, used, left_match, right_match):
    if used[v]
        return false
    used[v] = true

    for dest in neigh[v]
        if right_match[dest] == -1 || dfs_Kuhn(right_match[dest], neigh, used, left_match, right_match)
            left_match[v] = dest
            right_match[dest] = v
            return true

    return false

int bipartite_matching_size (neigh):
    left_match = [-1 repeated L times]
    right_match = [-1 repeated R times]

    for i in [0, L)
        used = [false repeated L times]
        dfs_Kuhn(i, neigh, used, left_match, right_match)

    return L - (number of occurrences of -1 in left_match)
```

This implementation works in $ O(VE)$ time; moreover, the bound is tight more or less independently of the relation between $ V$ and $ E$ . In other words, the bound is tight for sparse graphs ($ E = O(V)$ ), for dense graphs ($ E = \Omega(V^2)$ ), and for everything in between.

There is an implementation that works much faster in practice. The $ \texttt{dfs_Kuhn}$ function does not change, but $ \texttt{bipartite_matching_size}$ changes:

```
int bipartite_matching_size_fast (neigh):
    left_match = [-1 repeated L times]
    right_match = [-1 repeated R times]

    shuffle(neigh)
    for row in neigh
        shuffle(row)

    while true
        used = [false repeated L times]
        found_path = false

        for i in [0, L)
            if left_match[i] == -1
                found_path |= dfs_Kuhn(i, neigh, used, left_match, right_match)

        if !found_path
            break

    return L - (number of occurrences of -1 in left_match)
```
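For concreteness, here is a direct Python transcription of the pseudocode above. The graph representation (`neigh` as a list of adjacency lists of right-vertex indices, plus an explicit `R`) is my own choice and not fixed by the question; this is a sketch, not a tuned implementation:

```python
import random

def dfs_kuhn(v, neigh, used, left_match, right_match):
    # Look for an augmenting path starting at left vertex v.
    if used[v]:
        return False
    used[v] = True
    for dest in neigh[v]:
        # Either dest is free, or the left vertex currently matched
        # to dest can be re-routed to another neighbour.
        if right_match[dest] == -1 or dfs_kuhn(right_match[dest], neigh,
                                               used, left_match, right_match):
            left_match[v] = dest
            right_match[dest] = v
            return True
    return False

def bipartite_matching_size_fast(neigh, R):
    L = len(neigh)
    left_match = [-1] * L
    right_match = [-1] * R

    random.shuffle(neigh)      # randomize the order of the left vertices
    for row in neigh:          # (note: mutates the caller's lists in place)
        random.shuffle(row)    # randomize each adjacency list

    while True:
        used = [False] * L
        found_path = False
        for i in range(L):
            if left_match[i] == -1:
                found_path |= dfs_kuhn(i, neigh, used, left_match, right_match)
        if not found_path:
            break              # no augmenting path in this phase: maximum reached

    return L - left_match.count(-1)
```

Since the loop runs until no augmenting path exists, the answer is the maximum matching size regardless of the random shuffles; e.g. `bipartite_matching_size_fast([[0, 1], [0, 2], [1, 2]], 3)` returns `3`.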

Of course, an upper bound of $ O(VE)$ can be proven for the faster version as well. Lower bounds are a completely different story, though.

We used two optimizations:

  1. The block of code inside $ \texttt{while true}$ works in total $ O(E)$ time, but often finds several disjoint augmenting paths, instead of at most one, as did the block inside $ \texttt{for i in [0, L)}$ in the original code.

  2. The order of vertices in the left-hand side and the order in which the for-loop $ \texttt{for dest in neigh[v]}$ considers their neighbours are now random.

If only the first of these two optimizations is used, there are some relatively well-known degenerate cases in which the code still takes $ \Omega(VE)$ time. However, almost all such cases that I know of abuse a specific ordering of the neighbours of the left-hand-side vertices, so the $ \texttt{dfs_Kuhn}$ function is forced to repeatedly go along some fixed, very long path and “flip” it. Therefore, they fall apart when the second optimization is added.

The only semi-strong test I know is a dense ($ E = \Theta(V^2)$ ) graph on which the fast version of Kuhn’s algorithm takes $ \Omega(V^3 / \log V)$ time. However, all my attempts to generalise that construction to sparse graphs (the case I am most interested in) were unsuccessful.

So, I want to ask the following question. Is anything known about the runtime of this fast version of Kuhn’s algorithm on sparse graphs? Any nontrivial lower bounds (better than $ \Theta(E \cdot \log V)$ )? Maybe some upper bounds (one of my friends believes that this algorithm always runs in $ O(E \sqrt{E})$ time, which seems to be the case on random bipartite graphs)?

No exact match was found when add user to SP2013 site

I am working on a SharePoint 2013 on-premise farm. We need to add a user “Subdomain\peterpan” to site collection A’s permission group, but it returns a “No exact match was found” error.

We checked in Central Admin -> Manage Profile Service: User Profile Service Application -> Manage User Profiles, and the user ID is found there. Also, we can add another user, “Subdomain\alice”, to site collection A. In addition, we can add “Subdomain\peterpan” to another site collection, B.

In the form for assigning user permissions, when I type in “Subdomain\”, the dropdown list shows me “Subdomain\peterpan” (nothing else, only peterpan). When I select it and click the “Share” button, the system returns a “user does not exist or is not unique” error.

In the user profile sync, we have set it up to sync DOMAIN.com. “Subdomain” is a child domain of DOMAIN.com.

What could cause the No exact match was found error?

Exact meaning of $2^{\mathcal{O}(f(n))}$

In Sipser’s Introduction to the Theory of Computation he uses the notation $ 2^{\mathcal{O}(f(n))}$ to denote some asymptotic running time.

For example, he says that the running time of a single-tape deterministic Turing machine simulating a multi-tape non-deterministic Turing machine is

$ \mathcal{O}(t(n)b^{t(n)})=2^{\mathcal{O}(t(n))}$ , where $ b$ is the maximal number of options in the transition function.

I was wondering if someone can clarify the exact definition of this for me:

My assumption is that if $ g(n)=2^{\mathcal{O}(f(n))}$ then there exist $ N \in \mathbb{Z}$ and $ c \in \mathbb{R}$ s.t.

$ g(n) \leq 2^{cf(n)}=\left(2^{f(n)}\right)^c$ for all $ n>N$ .
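For what it’s worth, this reading is consistent with Sipser’s example; here is a short derivation (my own, assuming $ b \geq 1$ and $ t(n) \geq 1$ ) showing that $ t(n)\,b^{t(n)}$ is indeed $ 2^{\mathcal{O}(t(n))}$ :

```latex
t(n)\,b^{t(n)}
  \;\le\; 2^{t(n)} \cdot b^{t(n)}              % since t \le 2^t for all t \ge 0
  \;=\;   2^{t(n)} \cdot 2^{t(n)\log_2 b}
  \;=\;   2^{(1+\log_2 b)\,t(n)},
```

so the constant $ c = 1 + \log_2 b$ witnesses the bound $ t(n)\,b^{t(n)} \leq 2^{c\,t(n)}$ .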

Thank you

Confusing with the exact location of a shellcode in memory?

When we test a shellcode, we add it to a small C program and execute it in order to see that it does the actual job we expect. But most of the time, it crashes with a ‘segmentation fault’. I got the same issue when I executed it on my Linux machine. Here is what I got from an article: it happened because the .code section is in read-only memory, so the program should copy the shellcode to the stack before executing it. Then I compiled it with -fno-stack-protector and -z execstack, and it worked perfectly.
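To see the “memory must be executable” part in isolation, here is a small sketch (my own, assuming an x86-64 Linux box and CPython) that maps an anonymous read-write-execute page, copies a six-byte stand-in for shellcode (`mov eax, 42; ret`) into it, and calls it via ctypes. Writing the same bytes into a non-executable region and jumping there is what produces the segmentation fault:

```python
import ctypes
import mmap

# mov eax, 42 ; ret  -- a harmless stand-in for real shellcode (x86-64 only)
code = bytes.fromhex("b82a000000c3")

# Anonymous page mapped read+write+execute (the prot argument is Unix-specific;
# hardened systems enforcing W^X may refuse this combination).
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Turn the page's address into a callable taking no arguments, returning int.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)

print(func())  # the bytes run as code because the page is executable
```

The `-z execstack` flag achieves the same effect for the whole stack, which is why the crashing test program starts working after recompiling with it.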

So what is actually happening in memory when we execute a shellcode?

How to see tables and column names in exact case which is created and not by default uppercase?

In Toad, whenever I create an object such as a table, if I name the columns in PascalCase, then after the table is created, when I open it, Toad displays all the column names in uppercase.

Is there any option in Toad which prevents this default behavior and lets us see the object names in the exact case in which we created them?