Greedy Probabilistic Algorithm for Exact Three Cover

I have a probabilistic greedy algorithm for Exact Three Cover. I doubt it will work on all inputs in polynomial time, because the algorithm does not take $2^n$ time; so I will assume that it works for some but not all inputs.

Our inputs are $S$ and $B$.

$S$ is a set of integers.

$B$ is a list of 3-element sets.

Algorithm

  1. Input validation functions are used to ensure that the 3-element sets contain only elements of $S$.

  2. A simple if-statement makes sure that $|S| \bmod 3 = 0$.

  3. I treat the sets like lists in my algorithm, so I sort each set from smallest to largest magnitude (e.g. {3,2,1} becomes {1,2,3}).

  4. I will also sort my list of sets $B$ lexicographically, so that all sets of the form {1,2,x} are grouped together (e.g. the sorted list {1,2,3}, {1,2,4}, {4,5,6}, {4,5,9}, {5,6,7}).

  5. I will also generate a new list, new_list, of the sets whose {1,2,x} pattern occurs only once in $B$, i.e. sets whose leading pair {1,2} appears in no other set.

  6. Use brute force on small inputs and on both ends of the list $B$, up to $|S|/3 \cdot 2$ sets (e.g. brute-force check for exact covers in B[0 : len(S)//3 * 2] and in the same slice of the reversed B). A sketch of steps 1–5 follows this list.
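Here is a minimal Python sketch of the preprocessing in steps 1–5. It is my own reading of the write-up; the helper name preprocess and the use of a Counter over leading pairs for step 5 are assumptions, not code from the original:

```python
from collections import Counter

def preprocess(S, B):
    # Step 2: |S| must be divisible by 3.
    if len(S) % 3 != 0:
        raise ValueError("|S| must be a multiple of 3")

    # Step 1: every triple must consist of 3 distinct elements of S.
    for t in B:
        if len(set(t)) != 3 or not set(t) <= S:
            raise ValueError("invalid triple: %r" % (t,))

    # Step 3: sort each triple; step 4: sort B lexicographically,
    # which groups all {a,b,x} triples sharing the prefix {a,b}.
    B = sorted(sorted(t) for t in B)

    # Step 5: triples whose leading pair {a,b} occurs exactly once in B.
    counts = Counter((t[0], t[1]) for t in B)
    new_list = [t for t in B if counts[(t[0], t[1])] == 1]
    return B, new_list

B, new_list = preprocess(set(range(1, 10)),
                         [[3, 2, 1], [1, 2, 4], [4, 5, 6], [4, 5, 9], [7, 6, 5]])
print(B)         # [[1, 2, 3], [1, 2, 4], [4, 5, 6], [4, 5, 9], [5, 6, 7]]
print(new_list)  # [[5, 6, 7]] -- its leading pair (5, 6) is unique
```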

Seed the PRNG with a Quantum Random Number Generator

```python
import random
import quantumrandom  # third-party package used for quantum randomness

# Seed the PRNG with a quantum random number.
# (The original re-seeds len(B) times in a loop; only the last seed
# takes effect, so a single seed suffices.)
o = quantumrandom.randint(0, len(B))
random.seed(int(o))

# Fisher-Yates shuffle used to permute B inside the main loop.
# Re-seeding from the OS inside the loop would discard the quantum
# seed, so it is omitted here.
def shuff(B, n):
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)  # uniform in 0..i
        B[i], B[j] = B[j], B[i]
```

Define the number of times the while loop will run

```python
n = len(S)

# Iteration budget for the while loop below: a large cubic bound,
# equal to C(241*n, 3), so that no instance runs out of attempts.
while_loop_steps = n * 241 * ((n * 241) - 1) * ((n * 241) - 2) // 6
```

While loop

```python
stop = 0
Potential_solution = []
opps = 0
failed_lists = 0
ss = S        # keep a reference to the input set; s is reused below
s = set()     # elements covered by the current partial solution

while stop <= while_loop_steps:

    opps = opps + 1
    stop = stop + 1

    shuff(B, len(B))

    if len(Potential_solution) == len(ss) // 3:
        # Halt if an exact three cover has been found.
        print("YES")
        failed_lists = failed_lists + 1
        break

    # opps helps me see if I miss a correct list.
    if opps > len(B):
        if failed_lists < 1:
            s = set()
            opps = 0

    # Keep B[0] by appending it to the end of the list,
    # then delete B[0] to rotate the list.
    B.append(B[0])
    del B[0]
    Potential_solution = []
    s = set()

    # Greedy pass: take each set that is disjoint from what is covered so far.
    for l in B:
        if not any(v in s for v in l):
            Potential_solution.append(l)
            s.update(l)
```

Run a second while loop over new_list if step 5 finds a {1,2,x} pattern that occurs only once in $B$ (e.g. {5,6,7} in the sorted list shown in step 4).

Two Questions

How expensive would my algorithm be as an approximation for Three Cover?

And, what is the probability that my algorithm fails to find an Exact Three Cover when one exists?

Probabilistic Turing machine – Probability that the head has moved k steps to the right on the work tape

I have a PTM with the following transitions:

$\delta(Z_0, \square, 0) = (Z_0, \square, L, R)$,

$\delta(Z_0, \square, 1) = (Z_0, \square, R, R)$

Suppose that this PTM executes $n$ steps. What is the probability that the head has moved $k$ steps to the right on the work tape (in total, i.e., $k$ is the difference between moves to the right and moves to the left)?
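Since each random bit moves the work-tape head one step left or right with probability 1/2, the net displacement after $n$ steps is a simple symmetric random walk, suggesting $\Pr[\text{displacement} = k] = \binom{n}{(n+k)/2} 2^{-n}$ when $n+k$ is even (and 0 otherwise). A small Monte Carlo sketch (my own code, not part of the question) to sanity-check that formula:

```python
import random
from math import comb

def displacement(n):
    # Each coin flip moves the work-tape head right (+1) or left (-1).
    return sum(1 if random.random() < 0.5 else -1 for _ in range(n))

n, k, trials = 20, 4, 200_000
empirical = sum(displacement(n) == k for _ in range(trials)) / trials
# Closed form: k is reachable only if n + k is even and |k| <= n.
exact = comb(n, (n + k) // 2) / 2**n
print(f"empirical {empirical:.4f} vs exact {exact:.4f}")
```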

Probability of terminating in a state in a probabilistic algorithm

Suppose I have a circular array of $n$ elements. At time $t=0$ I am at position 0 of the array. The algorithm moves left or right with probability $p=1/2$ (since the array is circular, moving left from 0 goes to position $n-1$). Once every position has been visited at least once, the algorithm returns the position it terminated in.
Running the algorithm many times shows that it stops with the same probability at any of the $n$ positions (except zero, obviously). How do I demonstrate that the probability of ending at each position is uniform?
My understanding is that this process is described by a doubly stochastic Markov chain, but is there a theorem I can cite for this specific case?
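For what it's worth, this is the classical "last new vertex" fact for simple random walk on a cycle: every vertex $v \neq 0$ is the terminating position with probability exactly $1/(n-1)$. The usual proof conditions on the first time the walk reaches a neighbor of $v$; from there, $v$ is last iff the walk travels all the way around to $v$'s other neighbor first, and by the gambler's-ruin argument that probability is the same for every $v$ (see, e.g., Lovász's survey "Random walks on graphs: a survey"). A quick simulation sketch (my own code, not from the question):

```python
import random

def last_new_vertex(n):
    # Simple random walk on the cycle Z_n starting at 0; return the
    # last vertex to be visited for the first time.
    pos, visited = 0, {0}
    while len(visited) < n:
        pos = (pos + random.choice((-1, 1))) % n
        if pos not in visited:
            visited.add(pos)
            last = pos
    return last

n, trials = 7, 100_000
counts = [0] * n
for _ in range(trials):
    counts[last_new_vertex(n)] += 1
print([c / trials for c in counts])  # roughly 1/(n-1) for positions 1..n-1
```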

Space-bounded Probabilistic Turing Machines Always Halt

For example, in the definition of BPL, we require that the probabilistic Turing machine has to halt for every input and every randomness. What is the reason for us to define them this way? What would happen if they don’t halt? On the other hand, we don’t require space-bounded non-deterministic Turing machines to halt.

Grover’s algorithm on probabilistic classical machines

My understanding is that polynomial-time quantum algorithms can be simulated by classical polynomial-time probabilistic algorithms, by simulating the quantum gates individually.

However, implementing an algorithm such as Grover’s with that construction is in some sense unsatisfyingly “unnatural” – we have to set up a radically different computation environment and do the calculations inside it, rather than directly expressing what we want in classical operations.

Has anyone constructed an algorithm with the same characteristics as Grover’s (i.e. search a list of $N$ items for a marked item in $O(\sqrt N)$ time with success probability $\geq 0.5$) more directly in classical probabilistic manipulations? Obviously it being “more natural” does not imply anything must exist, but it seems counterintuitive for this to only be possible by the roundabout method of building a quantum environment to do the calculation in.
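For intuition (my own sketch, not part of the question): a classical probabilistic search that probes $\sqrt N$ uniformly random positions succeeds with probability only about $1/\sqrt N$, nowhere near the $\geq 0.5$ Grover achieves; indeed the classical query lower bound for unstructured search is $\Omega(N)$:

```python
import random
from math import isqrt

def classical_probe(N, probes):
    # One marked item at a uniformly random index; probe `probes` random cells.
    marked = random.randrange(N)
    return any(random.randrange(N) == marked for _ in range(probes))

N, trials = 10_000, 20_000
hits = sum(classical_probe(N, isqrt(N)) for _ in range(trials))
print(hits / trials)  # ~ 1/sqrt(N) = 0.01 here, far below 0.5
```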

Modeling a set of probabilistic concurrent processes

I’m looking into discrete-time Markov chains (DTMCs) for use in analyzing a probabilistic consensus protocol. One basic thing I haven’t been able to figure out is how to model a set of independent processes: consider $N$ processes that concurrently execute a series of identical instructions labeled $0, 1, 2, 3,$ etc., all starting at instruction $0$. When probability is not involved, modeling this is simple: it’s a state machine which branches nondeterministically off the start state to $N$ different states, where in each of those $N$ states a different process was the first to execute instruction $0$. What do we do when probability is involved? Do we do the same thing, with $N$ states branching from the start state and probability $\frac{1}{N}$ of transitioning to each, i.e. it is uniformly random which process executes instruction $0$ first?

Is this like taking the product of the state machines of each process?

I’m using a DTMC here, would I gain anything by moving to a CTMC if I don’t care about anything beyond the global order of execution?

Bonus question: assigning probabilities to whichever action (process executing an instruction) is taken first seems like a generalization of the non-probabilistic notion of fairness; if it is, what is the formal definition of this generalized notion of probabilistic fairness?
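A minimal sketch of the uniform-scheduler construction described above, assuming the global state is just the tuple of per-process program counters (the names N, STEPS, and transitions are mine, not from any particular tool):

```python
from itertools import product
from fractions import Fraction

N = 3        # number of processes (assumed)
STEPS = 4    # each process runs instructions 0..STEPS-1

# Global states of the product chain: tuples of per-process program counters.
states = list(product(range(STEPS + 1), repeat=N))

def transitions(state):
    # Uniform scheduler: each process is picked to take the next step
    # with probability 1/N; a finished process just self-loops.
    out = {}
    for i in range(N):
        nxt = list(state)
        if nxt[i] < STEPS:
            nxt[i] += 1
        key = tuple(nxt)
        out[key] = out.get(key, Fraction(0)) + Fraction(1, N)
    return out

start = (0,) * N
print(len(states))         # (STEPS+1)^N global states in the product
print(transitions(start))  # each single-process advance has probability 1/3
```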

Perfect Probabilistic Encryption still requires key length about as long as message

Let $(E,D)$ be a probabilistic encryption scheme with $n$-length keys (given a key $k$, we denote the corresponding encryption function by $E_k$) and $(n+10)$-length messages. Then, show that there exist two messages $x_0, x_1 \in \{0,1\}^{n+10}$ and a function $A$ such that

$\Pr_{b \in \{0,1\},\, k \in \{0,1\}^n}[A(E_k(x_b)) = b] \geq 9/10$

(This is problem 9.4 from Arora/Barak Computational Complexity)

My gut intuition says that the same idea from the proof in the deterministic case should carry over. WLOG let $x_0 = 0^{n+10}$, and denote by $S$ the support of $E_{U_n}(0^{n+10})$. We will take $A$ to output $0$ if the input is in $S$. Then, assuming the condition stated in the problem fails to hold for all $x \in \{0,1\}^{n+10}$, we conclude that $\Pr[E_{U_n}(x) \in S] \geq 2/10$ for all $x$. This implies that there exists some key such that $E_k$ maps at least $2/10$ of the $x$ into $S$ (the analogue of this statement in the deterministic case suffices to derive a contradiction), but now I don’t really see how to continue. Is my choice of $A$ here correct, or should I be using a different approach?

Are there efficient probabilistic multiplication algorithms that use O(n log n) gates?

Recently Harvey and van der Hoeven published a paper proving that integer multiplication can be performed using at most O(n log n) operations. This algorithm is theoretically interesting, but in practice completely silly, because you only start to see advantages on numbers with an absurd number of digits.

But suppose that we only wanted a probabilistic multiplication circuit, which returned the wrong result with probability at most epsilon. Then maybe certain shortcuts could be taken, in order to avoid the most inconvenient parts of multiplying two numbers.

For a fixed acceptable failure rate epsilon, is there an O(n log n) multiplication algorithm that achieves this failure rate without being horrendously inefficient in practice?
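Not an answer to the question, but for flavor: one well-known probabilistic shortcut is cheap randomized verification of a product, comparing $ab$ and $c$ modulo random primes in $O(n)$ time per round instead of redoing the full multiplication. A hedged sketch (assumes SymPy is available for randprime):

```python
import random
from sympy import randprime  # random prime in a given range

def probably_equal_product(a, b, c, trials=10, bits=64):
    # Randomized check that a * b == c without performing the full
    # n-digit multiplication: compare residues modulo random primes.
    # If a * b != c, the nonzero difference has at most roughly
    # bitlen(a*b)/bits prime factors of this size, so each round
    # catches a mismatch with overwhelming probability.
    for _ in range(trials):
        p = randprime(2**(bits - 1), 2**bits)
        if (a % p) * (b % p) % p != c % p:
            return False
    return True
```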

Probabilistic halting problem

I’m a physics and math student working through Nielsen & Chuang’s text on quantum computation and information. I don’t have much experience in CS theory, so some of these exercises are confusing to me:

Suppose we number the probabilistic Turing machines and define the probabilistic halting function $h_p(x)$ to be 1 if machine $x$ halts on input of $x$ with probability at least 1/2, and 0 if machine $x$ halts on input of $x$ with probability less than 1/2. Show that there is no probabilistic Turing machine which can output $h_p(x)$ with probability of correctness strictly greater than 1/2 for all $x$.

I’m sure we’re supposed to assume that there is a probabilistic Turing machine that does this, and then show that we can solve the deterministic halting problem using this machine, thus obtaining a contradiction. I don’t have a clue how to go from this probabilistic Turing machine to a deterministic one, though.