Distributional error probability of deterministic algorithm implies error probability of randomized algorithm?

Consider some problem $ P$ and let’s assume we sample the problem instance u.a.r. from some set $ I$ . Let $ p$ be a lower bound on the distributional error of a deterministic algorithm on $ I$ , i.e., every deterministic algorithm fails on at least a $ p$ -fraction of $ I$ .

Does this also imply that every randomized algorithm $ \mathcal{R}$ must fail with probability at least $ p$ if, again, we sample the inputs u.a.r. from $ I$ ?

My reasoning is as follows: Let $ R$ be the random variable representing the random bits used by the algorithm. \begin{align} \Pr[ \text{$ \mathcal{R}$ fails}] &= \sum_\rho \Pr[ \text{$ \mathcal{R}$ fails and $ R=\rho$ }] \\ &= \sum_\rho \Pr[ \text{$ \mathcal{R}$ fails} \mid R=\rho] \Pr[ R=\rho ] \\ &\ge p \sum_\rho \Pr[ R=\rho ] = p. \end{align} For the inequality, I used the fact that once we have fixed $ R = \rho$ , we effectively have a deterministic algorithm, which by assumption fails on at least a $ p$ -fraction of the inputs.
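As a sanity check of the averaging identity, here is a toy numeric version in Python (the failure predicate and all numbers are made up purely for illustration):

from fractions import Fraction
from itertools import product

inputs = range(6)                            # problem instances, sampled u.a.r.
rhos = range(4)                              # random bit strings, sampled u.a.r.
fails = lambda x, rho: (x * rho) % 3 == 1    # arbitrary made-up failure predicate

# Pr[R fails]: uniform over inputs and random bits jointly
lhs = Fraction(sum(fails(x, r) for x, r in product(inputs, rhos)),
               len(inputs) * len(rhos))

# sum over rho of Pr[R fails | R = rho] * Pr[R = rho]
per_rho = [Fraction(sum(fails(x, r) for x in inputs), len(inputs))
           for r in rhos]
rhs = sum(per_rho) * Fraction(1, len(rhos))

assert lhs == rhs             # the identity in the first two lines
assert lhs >= min(per_rho)    # so Pr[R fails] >= p whenever every
                              # deterministic A_rho fails on >= p of the inputs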

I can’t find the flaw in my reasoning, but I would be quite surprised if this implication were indeed true.

Doesn’t Linearizability imply Serializability?

Serializability is a concurrency scheme in which a concurrent execution of transactions is equivalent to one that executes the transactions serially.

In Linearizability, once a write completes, all later reads should return the value of that write or the value of a later write; once a read returns a particular value, all later reads should return that value or the value of a later write.
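To make this concrete, here is a toy single-register history and a brute-force linearizability check (the encoding is mine, purely illustrative):

from itertools import permutations

# (name, kind, value, invoke, respond); the times are made up
history = [
    ('w1', 'write', 1, 0, 3),
    ('r1', 'read',  1, 1, 4),   # overlaps w1, may be ordered either way
    ('r2', 'read',  1, 5, 6),   # starts after w1 responded: must see 1 (or newer)
]

def legal(seq, initial=0):
    # sequential register semantics: a read returns the latest written value
    val = initial
    for _, kind, v, _, _ in seq:
        if kind == 'write':
            val = v
        elif v != val:
            return False
    return True

def linearizable(hist):
    # try every total order that respects real time
    # (if a responded before b invoked, a must precede b)
    return any(
        legal(seq) and all(seq.index(a) < seq.index(b)
                           for a in seq for b in seq if a[4] < b[3])
        for seq in permutations(hist))

print(linearizable(history))    # True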

What is the main difference between Linearizability and Serializability?

NFA/DFA proof that for any regular language $ L \in \operatorname{REG}$ it holds that $ \varphi_{1}(L) \in \operatorname{REG}$

$$\varphi_{1}(L)=\left\{w \in \Sigma^{*} \mid \text{there exists an } \alpha \in \Sigma^{*} \text{ with } |\alpha|=|w| \text{ and } \alpha w \in L\right\}$$ Proof. We want to show that $ \varphi_{1}(L)$ is regular for all languages $ L \in \operatorname{REG}$ . Let $ L \in \operatorname{REG}$ be arbitrary. Because $ L$ is regular, there exists a DFA $ M$ with $ L(M)=L$ . From $ M$ we now construct a $ \lambda$ -NFA $ M'$ with $ L(M')=\varphi_{1}(L)$ . For that we describe how $ M'$ works informally; the state set of $ M'$ is (conceptually) that of $ M$ . The computation of $ M'$ works (conceptually) as follows: instead of placing one stone on $ q_{0}$ at the beginning of the computation, we place 3 stones (1 white, 1 red and 1 blue) on the states of $ M$ : the blue one on $ q_{0}$ , and the white and the red one on a nondeterministically guessed state $ q_{i}$ (both stones on the same state). The computation of $ M'$ on input $ w \in \Sigma^{*}$ proceeds (conceptually) as follows: <describe here how the 3 stones are moved per read symbol (they may stay on a state), and define additionally when $ M'$ accepts a word.>

Could someone please help me with that? I have researched regular expressions, DFAs, NFAs and converting between them, but I still don’t know how to solve this.
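Here is how far I got: a brute-force Python sketch of the three-stone idea as I understand it (the DFA encoding as nested dicts and all names are my own, not from the exercise):

def phi1_accepts(w, states, alphabet, delta, q0, accepting):
    # NFA state = (blue, red, white):
    #   blue starts on q0 and reads one nondeterministically guessed symbol
    #        of alpha per step (so |alpha| = |w| automatically),
    #   red marks the guessed intermediate state q_i and never moves,
    #   white starts on q_i and reads the actual input w.
    current = {(q0, qi, qi) for qi in states}     # guess q_i
    for a in w:
        current = {(delta[blue][b], red, delta[white][a])
                   for (blue, red, white) in current
                   for b in alphabet}             # blue guesses a symbol of alpha
    # accept iff blue reached the guess (some alpha leads q0 to q_i)
    # and white ended in an accepting state (q_i leads to F on w)
    return any(blue == red and white in accepting
               for (blue, red, white) in current)

# Example: L = words over {0,1} ending in 1
states, alphabet = {0, 1}, {'0', '1'}
delta = {0: {'0': 0, '1': 1}, 1: {'0': 0, '1': 1}}
print(phi1_accepts('01', states, alphabet, delta, q0=0, accepting={1}))  # True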

Equivalence from multi-tape to single-tape implies limited write space?

Suppose I have the following subroutine, part of a more complex program, that uses additional space to the right on the tape:

$ A$ : “adds a $ \$$ at the beginning of the tape.”

So we have: $$\begin{array}{ll} \text{Before}: & 0101\sqcup\sqcup\sqcup\ldots \\ \text{After } A: & \$0101\sqcup\sqcup\ldots \end{array}$$

And it is running on a multi-tape Turing machine. The equivalence theorem from Sipser’s book proposes that we can simulate any multi-tape TM with a single-tape TM by applying the following “algorithm”: append a $ \#$ after the contents of every tape, concatenate all tapes onto the single tape, then put a dot on a symbol to simulate each head of the original machine, etc.

With $ a$ and $ b$ being the contents of the other tapes, we have: $$a\#\dot{0}101\#b$$ If we want to apply $ A$ , or another subroutine that uses more space, the “algorithm” described in Sipser is not enough. I can intuitively shift $ \#b$ to the right, but how can I describe it more formally for an arbitrary subroutine that uses more space on its tape? I can’t figure out a more general “algorithm” to apply in these cases.
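For concreteness, here is the kind of shifting step I have in mind, with the single tape as a Python list of symbols (a toy encoding of mine, not Sipser’s formal construction):

def shift_right(tape, pos, blank='_'):
    # make room at `pos` by copying tape[pos:] one cell to the right;
    # this is what the simulator does whenever a virtual tape must grow
    tape.append(blank)                    # the single tape grows at its right end
    for i in range(len(tape) - 1, pos, -1):
        tape[i] = tape[i - 1]
    tape[pos] = blank                     # the freed cell
    return tape

# Subroutine A applied to the virtual tape that starts after the first '#':
tape = list('a#0101#b')
shift_right(tape, 2)                      # '#b' (and everything else) moves right
tape[2] = '$'
print(''.join(tape))                      # a#$0101#b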

BosonSampling: $\# P \subseteq FBPP^{{NP}^{\mathcal{O}}}$ implies $P^{\#P}\subseteq BPP^{{NP}^{\mathcal{O}}}$

I am a complexity beginner, actually a quantum physicist.

In their famous BosonSampling paper, Aaronson and Arkhipov show, amongst other things, that a polynomial-time machine solving the problem of BosonSampling exactly would result in the collapse of the Polynomial Hierarchy.

If you are not familiar with BosonSampling or any quantum physics at all, do not bother; it won’t be necessary for this question. I am interested in one particular aspect of the argument, which I do not understand because of my lack of background: it seems to have to do with the relation between function/search problems and the corresponding(?) decision problems.

Specifically, on page 33, in the proof of Theorem 1, they show that a $ \#P$ -hard problem is contained in $ FBPP^{{NP}^{\mathcal{O}}}$ , where $ \mathcal{O}$ is an oracle for some problem called BosonSampling. From there, they immediately seem to get that $$P^{\#P}\subseteq BPP^{{NP}^{\mathcal{O}}}$$

My question is: how? I understand that there is a difference between counting/function complexity classes like $ \#P$ or $ FBPP$ and classes for decision problems. But how does one relate the two results? In particular, why is it $ P^{\#P}$ on the left?
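For what it’s worth, my best reconstruction of the step is the following sketch (I am assuming that the $ \#P$ -hard function in question is $ \#P$ -complete, e.g. a permanent estimate, as the paper seems to use): $$P^{\#P} = P^{\mathrm{Per}} \subseteq BPP^{{NP}^{\mathcal{O}}},$$ where the equality holds because the permanent is $ \#P$ -complete, and the inclusion follows by replacing each of the polynomially many oracle queries of the $ P^{\#P}$ machine with a run of the $ FBPP^{{NP}^{\mathcal{O}}}$ algorithm, amplified so that all queries are answered correctly with high probability. Please correct me if this is not the intended argument.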

Bounded treewidth implies bounded clique-width

We have a graph $ G$ of treewidth $ \operatorname{tw}(G)\leq k$ , for some $ k\in\mathbb{N}$ . I’ve seen a claim that this implies that the clique-width of the same graph is at most $ k \cdot 2^k$ . This means that given a tree decomposition of the graph we can construct a proper expression tree using at most $ k \cdot 2^k$ labels.

I’m guessing that the width $ k$ of the decomposition allows us to “construct” a node of the tree decomposition, i.e., regardless of how the vertices in one bag are connected, we can easily connect them, since we have roughly $ k+1$ labels at our disposal. However, I don’t quite see how we can use that fact to construct the complete expression tree (and therefore the $ k\cdot 2^k$ -expression).
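For reference, here is a toy Python encoding of the four clique-width operations (the encoding is my own, just to fix notation), building the path $ a$ – $ b$ – $ c$ as a 3-expression:

# graphs are (vertices, edges, labels) with labels: vertex -> int

def vertex(v, i):
    # create a single vertex v with label i
    return ({v}, set(), {v: i})

def union(g, h):
    # disjoint union of two labeled graphs (vertex sets assumed disjoint)
    return (g[0] | h[0], g[1] | h[1], {**g[2], **h[2]})

def join(g, i, j):
    # add all edges between label-i vertices and label-j vertices
    vs, es, lab = g
    es = es | {frozenset((u, v)) for u in vs for v in vs
               if u != v and lab[u] == i and lab[v] == j}
    return (vs, es, lab)

def relabel(g, i, j):
    # turn every label i into label j
    vs, es, lab = g
    return (vs, es, {v: (j if l == i else l) for v, l in lab.items()})

# the path a-b-c, using labels {1, 2, 3}:
p3 = join(union(relabel(join(union(vertex('a', 1), vertex('b', 2)), 1, 2), 1, 3),
                vertex('c', 1)), 1, 2)
print(p3[1])    # {frozenset({'a','b'}), frozenset({'b','c'})}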

How can we construct the proper expression, and thereby prove that graphs of bounded treewidth are a subclass of graphs of bounded clique-width?

Proving that the failure of algorithm W implies that the program is not typable

How does one prove that if algorithm W fails for a given program $ e$ and context $ \Gamma$ , then there is no substitution $ S$ and type $ \tau$ such that $ S\Gamma \vdash e : \tau$ ?

The original paper states that from the completeness proof one can derive that “it is decidable whether $ e$ has any type at all under the assumptions $ \Gamma$ “. However, I haven’t found this proof in the literature.

Algorithm W has several failure cases: the unification algorithm fails, an identifier is not found in the context, a recursive call fails, etc.

I am more interested in the hard cases; the easy ones I can do myself.

One hard case seems to be the failure of the unification. In this case we know the soundness and completeness of both recursive calls and also the non-existence of a unifier for $ S_2\tau_1$ and $ \tau_2\rightarrow \alpha$ . How can this information be used to prove $ \neg \exists \tau \: S,\ S\Gamma \vdash e_1 \: e_2 : \tau$ ?

This part of algorithm W may be relevant here:

$ W(\Gamma, e_1\: e_2)$ =

$ (\tau_1,S_1) \leftarrow W(\Gamma, e_1)$

$ (\tau_2,S_2) \leftarrow W(S_1\Gamma, e_2)$

$ S \leftarrow unify(S_2\tau_1, \tau_2\rightarrow \alpha)$ where $ \alpha$ is fresh

return $ (S\alpha,\ S \circ S_2 \circ S_1)$
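For reference, here is a generic sketch of the unify above (my own encoding, not from the paper); a type is a string (a type variable) or a tuple whose head is a constructor, e.g. ('int',) or ('->', dom, cod):

def apply_subst(s, t):
    # apply substitution s (dict: var -> type) to type t
    if isinstance(t, str):
        return apply_subst(s, s[t]) if t in s else t
    return (t[0],) + tuple(apply_subst(s, a) for a in t[1:])

def occurs(v, t):
    # occurs check: does variable v appear in type t?
    return v == t if isinstance(t, str) else any(occurs(v, a) for a in t[1:])

def unify(t1, t2, s=None):
    # return a most general unifier extending s, or None -- the failure case
    s = {} if s is None else s
    t1, t2 = apply_subst(s, t1), apply_subst(s, t2)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return None if occurs(t1, t2) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                  # constructor clash: no unifier exists
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# unifying S2*tau1 with tau2 -> alpha can fail by clash or by occurs check:
print(unify(('int',), ('->', 'a', 'b')))    # None (clash)
print(unify('a', ('->', 'a', 'b')))         # None (occurs check)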

There are other hard cases, but I will be accepting an answer if it is about at least this one.

What are the sets on which norm-closedness implies weak closedness?

Let $ X$ be a Banach space. Let $ C$ be a convex and norm-closed subset of $ X$ . It is well known that $ C$ is then a weakly closed subset of $ X$ . I want to know: is there any well-known class of non-convex sets which has this property?

I.e., a class of sets in $ X$ , not necessarily convex, on which norm-closedness implies weak closedness.
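For contrast, the standard example I know showing that convexity cannot simply be dropped: in an infinite-dimensional Hilbert space the unit sphere $$S=\{x\in X : \|x\|=1\}$$ is norm-closed but not weakly closed, since any orthonormal sequence $ (e_n)$ converges weakly to $ 0\notin S$ . So the class I am looking for would have to exclude such spheres.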