## Doesn’t Linearizability imply Serializability?

Serializability is a concurrency-control guarantee: a concurrent execution of transactions is equivalent to one that executes the transactions serially.

In Linearizability, once a write completes, all later reads should return the value of that write or the value of a later write. Once a read returns a particular value, all later reads should return that value or the value of a later write.

What is the main difference between Linearizability and Serializability?
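To make the read/write rules above concrete, here is a minimal brute-force sketch (my own encoding of a hypothetical single-register history, not from any particular paper): it searches for a linearization that respects both real-time order and register semantics, and rejects a history containing a stale read.

```python
# Brute-force linearizability check for a single register, following the
# rule above: once a write completes, later reads must see that value
# (or the value of a later write).
from itertools import permutations

# Each operation: (name, kind, value, invoke_time, response_time)
history = [
    ("w1", "write", 1, 0, 1),   # write 1, completes at t=1
    ("r1", "read",  1, 2, 3),   # read returns 1
    ("r2", "read",  0, 4, 5),   # stale read: returns the initial value 0
]

def linearizable(history, initial=0):
    for order in permutations(history):
        # Real-time constraint: if b responded before a was invoked,
        # b must precede a in the linearization, so skip orders where
        # some a precedes some b that finished before a started.
        ok = all(not (b[4] < a[3]) for i, a in enumerate(order)
                 for b in order[i + 1:])
        if not ok:
            continue
        # Register semantics: each read returns the latest written value.
        val = initial
        valid = True
        for _name, kind, v, _inv, _resp in order:
            if kind == "write":
                val = v
            elif v != val:
                valid = False
                break
        if valid:
            return True
    return False

print(linearizable(history))  # False: r2 reads 0 after w1 completed
```

Dropping the real-time constraint in this sketch is roughly what serializability does (only the equivalence to *some* serial order matters), which is why the stale read would then be acceptable.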

## NFA/DFA proof that $L \in \operatorname{REG}$ implies $\varphi_1(L) \in \operatorname{REG}$

$$\varphi_{1}(L)=\left\{w \in \Sigma^{*} \mid \text{there exists an } \alpha \in \Sigma^{*} \text{ with } |\alpha|=|w| \text{ and } \alpha w \in L\right\}$$

Proof. We want to show that $$\varphi_{1}(L)$$ is regular for every language $$L \in \operatorname{REG}$$. Let $$L \in \operatorname{REG}$$ be arbitrary. Because $$L$$ is regular, there exists a DFA $$M$$ with $$L(M)=L$$. From $$M$$ we now construct a $$\lambda$$-NFA $$M'$$ with $$L(M')=\varphi_{1}(L)$$. To do so we describe how $$M'$$ works informally; the state space of $$M'$$ is (conceptually) that of $$M$$. Instead of placing one token on $$q_0$$ at the beginning of the computation, we place three tokens (one white, one red and one blue) on the states of $$M$$. We place the blue one on $$q_0$$, and the white and red ones on a nondeterministically guessed state $$q_i$$ (both tokens on the same state). The computation of $$M'$$ on input $$w \in \Sigma^{*}$$ then works (conceptually) as follows:

Could someone please help me with this? I have researched regular expressions, DFAs, NFAs and the conversions between them, but I still don’t know how to solve it.
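For what it’s worth, here is a small executable sketch of the two-token idea (my own encoding of the construction, with an example DFA I made up): for each guessed state $$q_i$$, one token tracks where *some* string of the same length as the input read so far could take $$q_0$$, while the other runs $$M$$ from $$q_i$$ on the actual input; accept if the first token can land exactly on $$q_i$$ while the second token is in an accepting state.

```python
# Sketch of the construction for
#   phi_1(L) = { w : exists alpha with |alpha| = |w| and alpha*w in L },
# simulated directly in Python over a DFA given as a transition dict.

def in_phi1(delta, q0, accepting, alphabet, w):
    states = delta.keys()
    for qi in states:                       # nondeterministic guess of q_i
        # Pairs (p, q): p is reachable from q0 by SOME string of the same
        # length as the input read so far; q = delta*(qi, input so far).
        pairs = {(q0, qi)}
        for c in w:
            pairs = {(delta[p][a], delta[q][c])
                     for (p, q) in pairs for a in alphabet}
        # alpha must actually lead q0 to the guessed qi, and alpha*w in L.
        if any(p == qi and q in accepting for (p, q) in pairs):
            return True
    return False

# Example DFA (my own): L = binary strings ending in '1'.
delta = {0: {"0": 0, "1": 1}, 1: {"0": 0, "1": 1}}
print(in_phi1(delta, 0, {1}, "01", "01"))  # True: e.g. alpha = "00"
print(in_phi1(delta, 0, {1}, "01", "10"))  # False: alpha + "10" ends in '0'
```

The set of pairs is finite, so this directly describes the (reachable part of the) state space of an NFA for $$\varphi_1(L)$$.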

## Equivalence from multi-tape to single-tape implies limited write space?

Suppose I have the following subroutine, part of a more complex program, that uses space to the right on the tape:

$$A$$: “adds a $ at the beginning of the tape.” So we have:

$$\begin{array}{lc} \text{Before}: & 0101\sqcup\sqcup\sqcup\ldots \\ \text{After } A: & \$0101\sqcup\sqcup\ldots \end{array}$$

And it is running on a multi-tape Turing machine. The equivalence theorem from Sipser’s book shows that we can simulate any multi-tape TM with a single-tape one by the following “algorithm”: append a $$\#$$ to every tape, concatenate all the tapes onto a single tape, put a dot on a symbol to simulate each head of the original machine, and so on. With $$a$$ and $$b$$ being the contents of the other tapes, we have:

$$a\#\dot{0}101\#b$$

If we want to apply $$A$$, or another subroutine that uses more space, the “algorithm” described in Sipser is not enough. I can intuitively shift $$\#b$$ to the right, but how can I describe that more formally for an arbitrary subroutine that uses more space on the tape? I can’t figure out a more general “algorithm” to apply in these cases.

## BosonSampling: $\#P \subseteq FBPP^{NP^{\mathcal{O}}}$ implies $P^{\#P}\subseteq BPP^{NP^{\mathcal{O}}}$

I am a complexity beginner, actually a quantum physicist. In their famous BosonSampling paper, Aaronson and Arkhipov show, amongst other things, that a polynomial-time machine solving the problem of BosonSampling exactly would result in the collapse of the polynomial hierarchy. If you are not familiar with BosonSampling or with quantum physics at all, do not worry; it won’t be necessary for this question. I am interested in one particular aspect of the argument that I do not understand because of my lack of background. It seems to have to do with the relation between function/search problems and corresponding(?) decision problems. Specifically, on page 33, in the proof of Theorem 1, they show that a $$\#P$$-hard problem is contained in $$FBPP^{NP^{\mathcal{O}}}$$, where $$\mathcal{O}$$ is an oracle for a problem called BosonSampling.
From there, they immediately seem to get that $$P^{\#P}\subseteq BPP^{NP^{\mathcal{O}}}$$. My question is: how? I understand that there is a difference between counting/function complexity classes like $$\#P$$ or $$FBPP$$ and classes for decision problems. But how are the results related? In particular, why is it $$P^{\#P}$$ on the left?

## Bounded treewidth implies bounded clique-width

We have a graph $$G$$ of treewidth $$\operatorname{tw}(G)\leq k$$, for some $$k\in\mathbb{N}$$. I’ve seen a claim that this implies that the clique-width of the same graph is at most $$k \cdot 2^k$$. This means that, given a tree decomposition of the graph, we can construct a proper expression tree using at most $$k \cdot 2^k$$ labels. I’m guessing that the $$k$$ in the decomposition allows us to “construct” a node of the tree decomposition, i.e. regardless of how the vertices in one bag are connected, we can easily connect them, since we have $$k+1$$-ish labels at our disposal. However, I don’t quite see how we can use that fact to construct the complete expression tree (and therefore the $$k\cdot 2^k$$-expression). How can we construct the proper expression, and thereby prove that graphs of bounded treewidth are a subclass of graphs of bounded clique-width?

## PH=PSPACE implies collapse of PH?

If PH=PSPACE, does this imply that PH collapses to one of its levels?

## Proving that the failure of algorithm W implies that the program is not typable

How does one prove that if algorithm W fails for a given program $$e$$ and context $$\Gamma$$, then there is no substitution $$S$$ and type $$\tau$$ such that $$S\Gamma \vdash e : \tau$$? The original paper states that from the completeness proof one can derive that “it is decidable whether $$e$$ has any type at all under the assumptions $$\Gamma$$”. However, I didn’t find this proof in the literature.
Algorithm W has several failure cases: the unification algorithm failed, an identifier was not found in the context, a recursive call failed, etc. I am more interested in the hard cases; the easy ones I can do myself. One hard case seems to be the failure of unification. In this case we know about the soundness and completeness of both recursive calls and, also, the non-existence of a unifier for $$S_2\tau_1$$ and $$\tau_2\rightarrow \alpha$$. How can this information be used to prove $$\neg \exists \tau\: S,\; S\Gamma \vdash e_1 \: e_2 : \tau$$? This part of algorithm W may be relevant here:

$$W(\Gamma, e_1\: e_2) =$$
$$\quad (\tau_1,S_1) \leftarrow W(\Gamma, e_1)$$
$$\quad (\tau_2,S_2) \leftarrow W(S_1\Gamma, e_2)$$
$$\quad S \leftarrow \operatorname{unify}(S_2\tau_1,\; \tau_2\rightarrow \alpha) \text{ where } \alpha \text{ is fresh}$$
$$\quad \text{return } (S\alpha,\; S \circ S_2 \circ S_1)$$

There are other hard cases, but I will accept an answer if it covers at least this one.

## What are the sets on which norm-closedness implies weak closedness?

Let $$X$$ be a Banach space and let $$C$$ be a convex, norm-closed subset of $$X$$. It is well known that $$C$$ is then a weakly closed subset of $$X$$. I want to know: is there any well-known class of non-convex sets which has this property? I.e., a class of sets in $$X$$, not necessarily convex, on which norm-closed implies weakly closed.

## $NTime(2^n) \subseteq NSpace(n^k)$ implies $EXP = PSPACE$?

Assume $$NTime(2^n) \subseteq NSpace(n^k)$$ for some fixed $$k$$. Is it possible to conclude that $$EXP = PSPACE$$? And what about $$NEXP = PSPACE$$? It seems the answer might be yes, because this question seems to be equivalent to the open question $$EXP=PSPACE$$.
Can it be shown using some padding argument?
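For what it is worth, here is a hedged sketch (my own attempt at the standard padding technique, not a verified answer) of the direction $$EXP \subseteq PSPACE$$ under the hypothesis:

```latex
% Sketch, assuming NTime(2^n) \subseteq NSpace(n^k).
% Let L \in EXP, say L \in DTime(2^{n^c}) for some c.
% Pad each input: L' = \{\, x10^{|x|^c - |x| - 1} : x \in L \,\}.
\begin{align*}
L' &\in DTime(2^n) \subseteq NTime(2^n) \subseteq NSpace(n^k)
     && \text{(hypothesis)} \\
L  &\in NSpace(n^{ck})
     && \text{(simulate on the padded input, reusing space)} \\
L  &\in DSpace(n^{2ck}) \subseteq PSPACE
     && \text{(Savitch's theorem)}
\end{align*}
% Since PSPACE \subseteq EXP holds unconditionally,
% EXP = PSPACE would follow.
```

The $$NEXP$$ case would presumably pad a nondeterministic machine instead, but I have not checked the details.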

## Finding the cocycle condition: fppf descent implies Galois descent

I tried to give a proof that fppf (faithfully flat) descent implies Galois descent purely at the level of modules, and I am stuck obtaining the Galois cocycle condition. I’m interested in some questions about twisted sheaves with a Galois-cohomological description, and understanding how to obtain the former would be useful to me.

I obtained the following conditions: given a finite Galois extension $$L/K$$ with Galois group $$G$$ and an $$L$$-vector space $$M$$, we have for each $$\sigma \in G$$ an isomorphism of $$L$$-vector spaces $$\psi_\sigma : M \to M^\sigma$$ satisfying $$\psi_\sigma(am) = \sigma(a) \psi_\sigma(m)$$, where $$a \in L$$ and $$m \in M$$, and such that for every pair $$(\sigma, \tau) \in G \times G$$ we have

$$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma } \circ \psi_{ ( \sigma, \tau), (\sigma, \tau), \tau } = \psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma \tau }$$

as isomorphisms of $$L$$-modules. The $$L$$-module structure of $$M^\sigma$$ is twisted by $$\sigma$$, i.e. given by $$a \cdot m := \sigma(a)m$$.

My issue is that, because of what one obtains for sheaves of modules (https://stacks.math.columbia.edu/tag/0CDQ), I would expect the cocycle condition for modules to also have a twist in the formula.

Translating what is done in the Stacks Project into modules is certainly possible, but I wasn’t able to do so. Instead I will present a (somewhat long) sketch of what I did. I can provide more details upon request, and I apologize if there are too many. My difficulty is in Step 5.

The context: let $$L/K$$ be a finite Galois extension and let $$M$$ be an $$L$$-module together with an isomorphism of $$L \otimes_K L$$-modules $$\phi: M \otimes_K L \to L \otimes_K M$$ satisfying the cocycle condition $$p_{13}^* \phi = p_{23}^* \phi \circ p_{12}^* \phi$$ as isomorphisms of $$L \otimes_K L \otimes_K L$$-modules.

What I did:

Step 1: Describing some isomorphisms

We have an isomorphism of $$K$$-algebras $$L \otimes_K L \to \prod_{\sigma \in G} L$$ given by $$a \otimes 1 \mapsto ( a )_{\sigma \in G}$$ and $$1 \otimes a \mapsto ( \sigma(a) )_{\sigma \in G}$$

and another one $$L \otimes_K L \otimes_K L \to \prod_{\sigma \in G} \Big( \prod_{\tau \in G} L \Big)$$ given by

$$a \otimes 1 \otimes 1 \mapsto \Big( (a)_{\tau \in G} \Big)_{\sigma \in G},$$

$$1 \otimes a \otimes 1 \mapsto \Big( (\tau(a))_{\tau \in G} \Big)_{\sigma \in G},$$

$$1 \otimes 1 \otimes a \mapsto \Big( (\tau\sigma(a))_{\tau \in G} \Big)_{\sigma \in G}.$$

We can then describe the above isomorphisms as

$$(1_L \coprod \sigma) : L \otimes_K L \to \prod_{\sigma \in G} L$$

and

$$(1_L \coprod \tau \coprod \tau \sigma): L \otimes_K L \otimes_K L \to \prod_{\sigma \in G} \Big( \prod_{\tau \in G} L \Big).$$
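A toy sanity check that I find helpful (my own example, not part of the question): take $$K = \mathbb{Q}$$, $$L = \mathbb{Q}(i)$$, $$G = \{1, c\}$$ with $$c$$ complex conjugation. The first isomorphism sends $$a \otimes b \mapsto (a\sigma(b))_{\sigma \in G}$$, so

```latex
1 \otimes 1 \mapsto (1, 1), \qquad
i \otimes i \mapsto \big( i \cdot i,\; i \cdot c(i) \big) = (-1, 1),
```

and the orthogonal idempotents $$(1,0)$$ and $$(0,1)$$ of $$\prod_{\sigma \in G} L$$ pull back to $$\tfrac{1}{2}(1 \otimes 1 - i \otimes i)$$ and $$\tfrac{1}{2}(1 \otimes 1 + i \otimes i)$$ in $$L \otimes_K L$$.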

Step 2: Obtain some $$\prod_{\sigma \in G} L$$-module structures

We then have a commutative diagram of modules.

$$\begin{array}{ccccc} M \otimes_K L & \xrightarrow{} & \prod_{\sigma \in G} \Big( M \otimes_K L \Big) & \xrightarrow{} & \prod_{\sigma \in G} M \\ \downarrow & & \downarrow & & \downarrow \\ L \otimes_K M & \xrightarrow{} & \prod_{\sigma \in G} \Big( L\otimes_K M \Big) & \xrightarrow{} & \prod_{\sigma \in G} M^\sigma \end{array}$$

where the leftmost vertical arrow is $$\phi$$, and we denote by $$\psi$$ the induced rightmost vertical arrow.

Using Step 1 we have ring morphisms $$L \xrightarrow{1_L} L$$ and $$L \xrightarrow{\sigma} L$$ for each $$\sigma \in G$$. Tensoring $$M$$ with these morphisms gives $$L$$-module structures on $$M \otimes_K L$$ by $$a \cdot ( m \otimes c ) = m \otimes ac$$ and on $$L \otimes_{L,\sigma} M$$ by $$a \cdot (c \otimes m) = \sigma(a)c \otimes m$$. Now the isomorphism of $$L$$-modules $$\mu: M \otimes_L L \to M: m \otimes c \mapsto cm$$ gives $$M$$ the $$L$$-module structure $$a \cdot m = am$$. We also have the composite

$$L \otimes_{L,\sigma} M \xrightarrow{ \sigma^{-1} \otimes 1_M } L \otimes_L M \xrightarrow{\mu'} M : c \otimes m \mapsto \sigma^{-1}(c) \otimes m \mapsto \sigma^{-1}(c)m.$$

Then this $$M$$ has $$L$$-module structure given by $$a \cdot m := \sigma(a)m$$ and we denote it by $$M^\sigma$$ (We can relabel to get $$M^\sigma$$ instead of $$M^{\sigma^{-1}}$$.).

Now a structure of $$\prod_{\sigma \in G} L$$-module on $$\prod_{\sigma \in G} M$$ (resp. on $$\prod_{\sigma \in G} M^\sigma$$) is determined by an $$L$$-module structure on $$M$$ (resp. on $$M^\sigma$$) for each $$\sigma \in G$$. Therefore, if $$(a_\sigma)_{\sigma \in G} \in \prod_{\sigma \in G} L$$ and $$(m_\sigma)_{\sigma \in G} \in \prod_{\sigma \in G} M$$ (resp. in $$\prod_{\sigma \in G} M^\sigma$$), then

$$(a_\sigma)_{\sigma \in G} \bullet (m_\sigma)_{\sigma \in G} = (a_\sigma m_\sigma)_{\sigma \in G} \quad \Big( \text{resp. } (a_\sigma)_{\sigma \in G} \circ (m_\sigma)_{\sigma \in G} = ( \sigma(a_\sigma) m_\sigma)_{\sigma \in G} \Big).$$

Step 3: Determine for each $$\sigma \in G$$ the isomorphisms of $$L$$-modules $$\psi_\sigma$$.

The isomorphism $$\psi$$ induced by $$\phi$$ must then satisfy

$$\psi \big( (a_\sigma m_\sigma)_{\sigma \in G} \big) = (a_\sigma)_{\sigma \in G} \circ \psi\big( ( m_\sigma)_{\sigma \in G} \big).$$

For each $$\sigma \in G$$ we have an isomorphism of $$L$$-modules

$$M \xrightarrow{\iota_\sigma} \prod_{\sigma \in G} M \xrightarrow{\psi} \prod_{\sigma \in G} M^\sigma \xrightarrow{\pi_\sigma} M^\sigma$$

given by

$$am \mapsto (0, \cdots, 0, am, 0, \cdots, 0) \mapsto \Big( \sigma(a)\, \pi_\sigma \big( \psi( \iota_\sigma(m) ) \big) \Big)_{\sigma \in G} \mapsto \sigma(a)\,\pi_\sigma \big( \psi(m) \big).$$

So for each $$\sigma \in G$$ we have an isomorphism $$\psi_\sigma : M \to M^\sigma$$ defined by $$\psi_\sigma(m):=\pi_\sigma( \psi(m) )$$ and such that $$\psi_\sigma(am) = \sigma(a)\psi_\sigma(m)$$.

Step 4: Determine some $$\prod_{\sigma \in G} \prod_{\tau \in G} L$$-module structures

I will skip some details, which I can provide upon request. I use the cocycle condition to determine three $$\prod_{\sigma \in G} \prod_{\tau \in G} L$$-module structures.

Consider $$p_{12}^* p_1^* M$$ (or equivalently $$p_{13}^*p_1^*M$$).

The $$\prod_{(\sigma,\tau) \in G \times G } L$$-module structure on $$\prod_{(\sigma,\tau) \in G \times G } M$$ is

$$(a_{\sigma,\tau}) \cdot ( m_{\sigma,\tau} ) = ( a_{\sigma,\tau} m_{\sigma,\tau} ).$$

Consider $$p_{12}^* p_2^* M$$ (or equivalently $$p_{23}^*p_1^*M$$).

The $$\prod_{(\sigma,\tau) \in G \times G } L$$-module structure on $$\prod_{(\sigma,\tau) \in G \times G } M^\tau$$ is

$$(a_{\sigma,\tau}) \cdot ( m_{\sigma,\tau} ) = ( \tau(a_{\sigma,\tau}) m_{\sigma,\tau} ).$$

Consider $$p_{13}^* p_2^* M$$ (or equivalently $$p_{23}^*p_2^*M$$).

The $$\prod_{(\sigma,\tau) \in G \times G } L$$-module structure on $$\prod_{(\sigma,\tau) \in G \times G } M^{\sigma \tau}$$ is

$$(a_{\sigma,\tau}) \cdot ( m_{\sigma,\tau} ) = ( (\sigma \circ \tau)(a_{\sigma,\tau}) m_{\sigma,\tau} ).$$

Step 5: Determine the cocycle condition

Finally, for each pair $$(\sigma, \tau) \in G \times G$$ we have three composite maps given as follows:

$$M_{(\sigma, \tau)} \xrightarrow{ \iota_{(\sigma,\tau)}} \prod_{(\sigma,\tau) \in G \times G} M \xrightarrow{ p_{12}^* \psi } \prod_{(\sigma,\tau) \in G \times G} M^\tau \xrightarrow{ \pi_{\sigma,\tau}} M_{(\sigma, \tau)}^\tau$$

defining an $$L$$-module isomorphism $$\psi_{ ( \sigma, \tau), (\sigma, \tau), \tau } : M_{(\sigma,\tau)} \to M_{(\sigma,\tau)}^\tau$$ satisfying

$$\psi_{ ( \sigma, \tau), (\sigma, \tau), \tau }(am) = \tau(a)\psi_{ ( \sigma, \tau), (\sigma, \tau), \tau }(m)$$

$$M_{(\sigma, \tau)} \xrightarrow{ \iota_{(\sigma,\tau)}} \prod_{(\sigma,\tau) \in G \times G} M \xrightarrow{ p_{13}^* \psi } \prod_{(\sigma,\tau) \in G \times G} M^{\sigma \tau} \xrightarrow{ \pi_{\sigma,\tau}} M_{(\sigma, \tau)}^{\sigma\tau}$$

defining an $$L$$-module isomorphism $$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma \tau } : M_{(\sigma,\tau)} \to M_{(\sigma,\tau)}^{\sigma \tau}$$ satisfying

$$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma \tau }(am) = (\sigma \circ \tau)(a)\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma \tau }(m)$$

and

$$M_{(\sigma, \tau)}^\tau \xrightarrow{ \iota_{(\sigma,\tau)}} \prod_{(\sigma,\tau) \in G \times G} M^\tau \xrightarrow{ p_{23}^* \psi } \prod_{(\sigma,\tau) \in G \times G} M^{\sigma \tau} \xrightarrow{ \pi_{\sigma,\tau}} M_{(\sigma, \tau)}^{\sigma \tau}$$

defining an $$L$$-module isomorphism $$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma } : M_{(\sigma,\tau)}^\tau \to M_{(\sigma,\tau)}^{\sigma \tau}$$ satisfying

$$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma }(am) = \sigma(a)\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma }(m).$$

Indeed, $$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma }$$ sends $$\tau(a)m$$ to $$(\sigma \circ \tau)(a)m$$ and so sends $$am = \tau(\tau^{-1}(a))m$$ to $$\sigma( am )$$.

Since $$p_{23}^* \phi \circ p_{12}^*\phi = p_{13}^* \phi$$, we have $$p_{23}^* \psi \circ p_{12}^*\psi = p_{13}^* \psi$$ and therefore for each pair $$(\sigma, \tau) \in G \times G$$ we have

$$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma } \circ \psi_{ ( \sigma, \tau), (\sigma, \tau), \tau } = \psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma \tau }$$

as isomorphisms of $$L$$-modules.
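As a sanity check on the shape of this identity, here is a toy executable example (entirely my own: $$L = \mathbb{Q}(i)$$, $$G = \{1, c\}$$, $$M = L$$ with the trivial descent datum $$\psi_\sigma = \sigma$$), verifying the semilinearity from Step 3 and the composition law above:

```python
# Toy check: L = Q(i) modeled by Python complex numbers, G = {id, conj},
# M = L, psi_sigma = sigma itself (the trivial descent datum).

def ident(z):
    return z

def conj(z):
    return z.conjugate()

G = {"id": ident, "c": conj}
# Group table of G, a cyclic group of order 2.
mul = {("id", "id"): "id", ("id", "c"): "c",
       ("c", "id"): "c", ("c", "c"): "id"}

a, m = 2 + 3j, 1 - 4j
for s_name, s in G.items():
    # Semilinearity: psi_s(a * m) = s(a) * psi_s(m).
    assert s(a * m) == s(a) * s(m)
    for t_name, t in G.items():
        # Composition law: psi_s o psi_t = psi_{s t}.
        assert s(t(m)) == G[mul[(s_name, t_name)]](m)

print("semilinearity and composition hold in the toy example")
```

Of course this only exercises the untwisted case; the interesting point of the question is how the twists by $$\tau$$ should enter for a general descent datum.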

Remarks: the map $$\psi_{ ( \sigma, \tau), (\sigma, \tau), \sigma }$$ goes from $$M^\tau$$ to $$M^{\sigma \tau}$$, and it is not clear to me how to re-express it as starting from $$M$$ and twisting by $$\tau$$. Another problem is that the maps I found in Step 3 are not related in an obvious way to those of Step 5, and there might be a need to tweak something here as well.