If anything can be verified efficiently, must it be solvable efficiently on a Non-Deterministic machine?

Suppose I wanted to verify the solution to $2^3$, which is $8$.

Powers of $2$ have exactly one 1-bit, at the start of the binary string.

Verify Solution Efficiently

    n = 8
    N = 3
    bits = bin(n)[2:]                # "1000": one 1-bit followed by the 0-bits
    if bits.count("1") == 1:         # only one 1-bit, at the start, so n is a power of 2
        if bits.count("0") == N:     # exactly N 0-bits follow
            print("solution verified, 2^3 == 8")

A solution will always be approximately $2^n$ digits long, where $n$ is the length of the input $N$ in binary: the number $2^N$ needs $N+1$ bits, and an $n$-bit input $N$ can be as large as $2^n$. For example, $N = 1000$ fits in $10$ bits, yet $2^{1000}$ needs $1001$ bits. It's not possible for even a non-deterministic machine to write down a solution with $2^n$ digits in fewer than $2^n$ steps.

Question

Can this problem be solved efficiently in non-deterministic polynomial time? If not, why not, given that the solutions can be verified efficiently?

Sort the given list efficiently

Suppose you have a permutation of $1$ to $n$ in an array $s$. Select three distinct indices $a$, $b$, $c$ (they need not be in sorted order). Let $s_a$, $s_b$ and $s_c$ be the values at those indices; one operation right-shifts them cyclically, that is, $s_a^{\text{new}} = s_b^{\text{old}}$, $s_b^{\text{new}} = s_c^{\text{old}}$ and $s_c^{\text{new}} = s_a^{\text{old}}$. Find the minimum number of operations required to sort the array, or determine that it is impossible.

Example: Consider $s = [2, 3, 1]$ and the indices $(a, b, c) = (1, 3, 2)$; after applying one operation, $s = [1, 2, 3]$.
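
To make the operation concrete, here is a small sketch of one operation (Python, with 1-based indices to match the statement; the function name is mine):

    def right_shift(s, a, b, c):
        # One operation: new s_a = old s_b, new s_b = old s_c, new s_c = old s_a.
        # a, b, c are 1-based, hence the -1 offsets.
        s[a-1], s[b-1], s[c-1] = s[b-1], s[c-1], s[a-1]

    s = [2, 3, 1]
    right_shift(s, 1, 3, 2)
    print(s)                 # [1, 2, 3]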

I am thinking of applying a graph approach and solving it using graph traversal. Am I on the right track? Otherwise, could you explain your approach?

How do I compare these ranges of numbers efficiently?

I’m looking for an efficient way of testing for “eights” values: I need to check whether a value matches one of the all-eights patterns below and, if so, discard it.

The numbers I need to check for are:

    8.8888, 88.888, 888.88, 8888.8, 88888.00                // 5 digits
    88.8888, 888.888, 8888.88, 88888.8, 888888.00           // 6 digits
    888.8888, 8888.888, 88888.88, 888888.8, 8888888.00      // 7 digits
    8888.8888, 88888.888, 888888.88, 8888888.8, 88888888.00 // 8 digits

However, these are actually represented in integer form, which is the number multiplied by 10000.

So 8.8888 is represented as an int64 with the value 88888, and 88888.00 is represented as 888880000.

There are quite a few values here to check. My simple approach was to just compare each one directly against a table. Then I thought I should perhaps do something more efficient, like masking and comparing each digit, but my crude attempt at that did not work. Converting to a string and comparing the eights that way seems cumbersome and potentially a bit slow. This code will run on an embedded system that checks these values many times over, so I do need it to be reasonably performant. Note that I won’t have fewer than 5 digits represented, or more than 8.
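
For what it’s worth, here is a sketch of the digit-pattern idea (Python for brevity, even though the target is embedded). The observation, assuming I’ve read the table correctly, is that every integer form is five to eight 8s followed by zero to four 0s:

    EIGHT_CORES = {88888, 888888, 8888888, 88888888}   # 5..8 eights

    def is_eights(v):
        # Strip at most four trailing zeros (the most any pattern carries),
        # then the remainder must be one of the all-eights cores.
        for _ in range(4):
            if v % 10:
                break
            v //= 10
        return v in EIGHT_CORES

On the embedded target this is at most four integer divisions plus a comparison against four constants; the direct 20-entry table is also only 20 integer compares, so either may well be fast enough.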

Proof that uniform circuit families can efficiently simulate a Turing Machine

Can someone explain (or provide a reference for) how to show that uniform circuit families can efficiently simulate Turing machines? I have only seen them discussed in terms of specific complexity classes (e.g., $\mathbf{P}$ or $\mathbf{NC}$). I would like to see why uniform circuit families are a strong enough model for universal, efficient computation.
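
For reference, the construction I half-remember (and would like to see made precise) is the Cook–Levin-style tableau: for a machine running in time $T(n)$, row $t$ of a $T(n) \times T(n)$ grid encodes the configuration after $t$ steps, and each cell depends on only three cells of the previous row,

$$\mathrm{cell}_{t+1,\,j} = f\big(\mathrm{cell}_{t,\,j-1},\ \mathrm{cell}_{t,\,j},\ \mathrm{cell}_{t,\,j+1}\big),$$

so every cell is a constant-size gadget, the circuit has size $O(T(n)^2)$, and the layout depends only on $n$, which should be what gives uniformity.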

Efficiently finding “unfounded edges” in a strongly connected graph

I’ve encountered a problem I need to solve concerning dependency graphs. I’ve simplified it as follows:

Consider a strongly connected graph G = (V,E).

  • We define a subset of vertices S ⊆ V as source vertices.
  • We call an edge e = (a,b) unfounded if there is no simple path from any source vertex to b that includes e. In other words, all paths from a source vertex that include edge e must include vertex b at least twice.

The problem:

  • Find all unfounded edges in G.

There are some obvious ways to solve this inefficiently (e.g. a depth-first traversal for each edge in G), but I was wondering whether there is an O(|E|) approach. I’ve been struggling with this for a while and I keep “almost” solving it in linear time. Perhaps that’s impossible? I have no idea what the lower bound on efficiency is, and I was hoping some readers here could help me discover some efficient approaches.
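
For concreteness, here is how I would phrase the naive baseline (Python sketch; the names are mine). It rests on the observation that a simple path from a source to b through e = (a,b) exists iff some source reaches a once b is deleted, since a simple path ending at b must use e as its last edge:

    from collections import deque

    def unfounded_edges(n, edges, sources):
        adj = [[] for _ in range(n)]
        for a, b in edges:
            adj[a].append(b)

        def source_reaches(target, banned):
            # BFS from all sources in the graph with `banned` deleted.
            seen = set(sources) - {banned}
            queue = deque(seen)
            while queue:
                u = queue.popleft()
                if u == target:
                    return True
                for v in adj[u]:
                    if v != banned and v not in seen:
                        seen.add(v)
                        queue.append(v)
            return False

        # (a, b) is unfounded iff no source reaches a once b is removed.
        return [(a, b) for a, b in edges if not source_reaches(a, b)]

That is one traversal per edge, O(|E|·(|V|+|E|)) overall, which is exactly the inefficiency I would like to beat.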

Efficiently remove nodes from a connected graph

Suppose you have a connected graph and want to remove k nodes such that the result is still connected. How could you do this efficiently?

It occurs to me that you could find any spanning tree, say by a tree search of any kind. Identify all leaves in the spanning tree; all of these can be removed without disconnecting the remaining vertices. If you have at least k leaves then you’re done, but in general a tree is only guaranteed to have 2 leaves, so you may need to repeat the process until you’ve removed k vertices.

That implies O(k) runs of a tree search. Does a more efficient algorithm exist? I don’t think you can just look for articulation points or bridge edges, because removing a single vertex may turn other vertices that weren’t articulation points into articulation points.
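
If it helps, I believe the repeated searches can be avoided with a single DFS: the first k vertices of a post-order of any spanning tree can all be deleted (assuming k < |V|), because every ancestor of a surviving vertex also survives, so the survivors form a connected subtree containing the root. A sketch (adjacency lists, 0-indexed, graph assumed connected; naming is mine):

    def removable_vertices(adj, k, root=0):
        order, seen = [], {root}
        stack = [(root, iter(adj[root]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    break
            else:                # all neighbours explored: u is finished
                order.append(u)  # post-order position
                stack.pop()
        return order[:k]         # safe to delete, one by one or all at once

If the reasoning is right, this is O(|V|+|E|) in total rather than O(k) tree searches.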

Calculating number of intersections of a horizontal line with line segments efficiently

I’m given an array $A = [a_1, a_2, \ldots, a_n]$, from which I construct $n-1$ contiguous line segments by drawing a line from $(i, a_i)$ to $(i+1, a_{i+1})$. Now I’m given $q$ queries of the form $x_1, x_2, y, l, r$, where $l$ and $r$ delimit a range of $A$ and the rest describe a horizontal line segment $L$ from $(x_1, y)$ to $(x_2, y)$. For each query, I want to count the intersections of $L$ with the segments in the range $[l, r]$, in $O(n+q)$ or $O(n + q\log{n})$ time.
I was able to arrive at a solution that works in $O(nq)$, which simply traverses each range and checks whether $L$ intersects each segment (sketched below).
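
Here is that baseline for reference (Python; coordinates are 1-based as in the problem, so point $i$ is $(i, A[i-1])$; counting only segments that lie fully inside $[x_1, x_2]$, and integer $x_1, x_2$, are my own assumptions):

    def count_crossings(A, queries):
        results = []
        for x1, x2, y, l, r in queries:
            lo, hi = max(l, x1), min(r - 1, x2 - 1)   # segment i spans [i, i+1]
            hits = 0
            for i in range(lo, hi + 1):
                # segment i crosses height y iff y lies between its endpoints
                if min(A[i-1], A[i]) <= y <= max(A[i-1], A[i]):
                    hits += 1
            results.append(hits)
        return results
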
I believe some pre-processing can be done on $A$ to reduce the complexity.
Any leads will be appreciated!

An algorithm which efficiently generates random samples without replacement, from a large range [0-N], N ~ 10^12?

I want an algorithm which generates random integers, without replacement, from a large range [0-N], N~10^12.

However, the whole range should not be stored in memory. The memory footprint should be O(1) relative to N. The algorithm can (probably must) retain state after every sample request.

The randomness should be “strong” in the cryptographic sense.
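
One standard construction for this (not something from the question; it is the format-preserving-encryption trick) is to encrypt an incrementing counter with a keyed permutation of a power-of-two domain just above N, and "cycle-walk" out-of-range values back into range. A rough sketch, assuming N ≤ 2^40 ≈ 1.1·10^12; the round count, key size and SHA-256-based round function are illustrative choices, not vetted crypto:

    import hashlib, secrets

    KEY = secrets.token_bytes(16)    # permutation key; the state is key + counter
    HALF = 20                        # two 20-bit halves: domain [0, 2^40)
    MASK = (1 << HALF) - 1

    def _round(r, half):
        # PRF round function: truncated SHA-256 of (key, round index, right half).
        h = hashlib.sha256(KEY + bytes([r]) + half.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:4], "big") & MASK

    def _feistel(x):
        # A Feistel network is a bijection on [0, 2^40) for any round function.
        left, right = x >> HALF, x & MASK
        for r in range(4):
            left, right = right, left ^ _round(r, right)
        return (left << HALF) | right

    def sample(i, n):
        # i-th draw (i = 0, 1, 2, ...) from a pseudorandom permutation of [0, n):
        # encrypt the counter, then re-encrypt until the value lands in range.
        x = _feistel(i)
        while x >= n:
            x = _feistel(x)
        return x

Memory is O(1) in N (just the key and the counter), there are no repeats because a permutation is being enumerated, and the strength reduces to that of the round function (a 4-round Feistel over a PRF is, by the Luby–Rackoff result, a strong pseudorandom permutation), though I would want a cryptographer to vet any real deployment.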

Efficiently finding the min-cost path of an AVL tree

It seems that in a full AVL tree, the leftmost path is always the minimum-cost path, where the cost of a path is the sum of its keys. For example, take the following full AVL tree:

[figure: a full AVL tree]

The min-cost path would be 8-6-5. However, this is not the case with other AVL trees. Take the previous tree with an additional 4 inserted:

[figure: the same AVL tree after inserting 4]

Here the min-cost path would be 8-6-7 rather than 8-6-5-4.

What is the most efficient algorithm to find the min-cost path in any AVL tree? Given the characteristics of AVL trees, is this algorithm faster than it would be for a standard BST?
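
Since the images did not survive here, for reference this is the plain $O(n)$ traversal I would compare against, taking the cost of a path to be the sum of its keys (my reading of the examples); the tree literal is only my guess at the pictured one:

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def min_cost_path(root):
        # Returns (cost, path) of the cheapest root-to-leaf path.
        if root.left is None and root.right is None:
            return root.key, [root.key]
        best = None
        for child in (root.left, root.right):
            if child is not None:
                cost, path = min_cost_path(child)
                if best is None or cost < best[0]:
                    best = (cost, path)
        return root.key + best[0], [root.key] + best[1]

    tree = Node(8, Node(6, Node(5), Node(7)), Node(10, Node(9), Node(11)))
    print(min_cost_path(tree))   # (19, [8, 6, 5]) -- the 8-6-5 path

As far as I can tell this visits every node in the worst case even in an AVL tree, which is part of what I am asking: does balance admit anything better?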

Efficiently computing lower bounds over partially ordered sets

I have a list of sets that I would like to sort into a partial order based on the subset relation.

In fact, I do not require the complete ordering, only the lower bounds.

If I am not mistaken, each lower bound should define one separate component of the respective graph – and this component should be a meet-semilattice.

What would be the most convenient, space- and time-efficient way to solve this problem? Perhaps there is a way that does not require building the entire graph? Perhaps there is a known algorithm under better terminology than what I have naively described above?

I am aware that the time and space requirements are underspecified above, but I would be happy about any suggestions, whether they are proven to be optimal or not…

Background: I am currently building an entire graph database that holds all the edges between the sets and then looking for the nodes that have no generalizations, but this is quite complicated, slow, and requires a lot of (disk) space. The list mentioned above contains roughly 100 million sets.
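
In case it helps frame suggestions, this is the graph-free pass I would measure against, if I understand "lower bounds" as the sets with no proper subset in the list (Python sketch; names mine). It sorts by cardinality and keeps a set only if no already-kept set is contained in it:

    def minimal_sets(sets_list):
        # Scanning in order of increasing size guarantees that any subset of s
        # has already been kept by the time s is examined, so one pass suffices.
        minimal = []
        for s in sorted(sets_list, key=len):
            if not any(m <= s for m in minimal):   # m <= s is the subset test
                minimal.append(s)
        return minimal

    print(minimal_sets([{1, 2}, {1}, {2, 3}, {1, 2, 4}]))   # [{1}, {2, 3}]

The pairwise subset tests are clearly too slow at 100 million sets; the usual refinement I know of is an inverted index from elements to kept sets, so each candidate is checked only against sets sharing an element, but I do not know whether that is state of the art.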