Difference between regular grammar and CFG in generating computation histories and $\Sigma^*$

I would like to ask for intuition about the difference between the way a CFG generates $\Sigma^*$ and the way a regular grammar generates $\Sigma^*$. The examples here are from Sipser. Let $ALL_{CFG}$ refer to the language of CFGs that generate $\Sigma^*$, and let $ALL_{REX}$ refer to the analogous language for regular expressions (each equivalent to a regular grammar) that generate $\Sigma^*$.

From what I got, we have:

  • $ALL_{CFG}$ is not decidable; it is not even Turing-recognizable. Let $\overline{A_{TM}}$ refer to the language of pairs $\langle M, w \rangle$ such that the TM $M$ does not accept the input word $w$. We can reduce $\overline{A_{TM}}$ to $ALL_{CFG}$ in polynomial time using computation histories (the correctness condition is restated after this list). The reduction constructs a CFG which generates every word that fails to be an accepting computation history, that is, every word where: 1) the first characters do not match the starting configuration on $w$, 2) the last characters do not match an accepting configuration, or 3) some consecutive characters do not match a valid transition between $M$'s configurations. Since the reduction maps $\overline{A_{TM}}$ to $ALL_{CFG}$, and $\overline{A_{TM}}$ is not Turing-recognizable, $ALL_{CFG}$ is not Turing-recognizable.

  • $ALL_{REX}$ is decidable, since it is decidable whether a finite automaton accepts $\Sigma^*$. However, any regular language $R$ can be mapped to the language $ALL_{REX} \cap f(R_M)$, where $R_M$ is a TM that decides $R$ and $f(R_M)$ is a computation-history reduction similar to the one outlined above. In more detail, $f(R_M)$ is the regular grammar that generates every word where: 1) the first characters do not match $w$, 2) the last characters do not match a rejecting configuration, or 3) some consecutive characters do not match a valid transition between $R_M$'s configurations. The decider for $ALL_{REX} \cap f(R_M)$ checks whether $f(R_M)$ generates $\Sigma^*$.
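For reference, the correctness condition behind the first reduction can be stated as a single equivalence (writing $G_{M,w}$ for the constructed grammar, my own shorthand):

$$\langle M, w \rangle \in \overline{A_{TM}} \iff L(G_{M,w}) = \Sigma^*$$

Here $G_{M,w}$ generates exactly the words failing at least one of conditions 1), 2), or 3), and every word fails one of them precisely when no accepting computation history of $M$ on $w$ exists.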

So, I would like to ask:

From the above, both regular grammars and CFGs can generate computation histories of a TM. But what is it about the CFG's grammar structure that makes it valid to reduce $\overline{A_{TM}}$ to $ALL_{CFG}$, while it is not possible to reduce $\overline{A_{TM}}$ to $ALL_{REX} \cap f(A_{TM})$? I know that we cannot reduce $\overline{A_{TM}}$ to $ALL_{REX} \cap f(A_{TM})$, since $ALL_{REX} \cap f(A_{TM})$ is decidable while $\overline{A_{TM}}$ is not even Turing-recognizable. But I would like to understand this in terms of the difference in grammar rules between CFGs and regular grammars.

Is this correct: whether or not a type 3 grammar generates $\Sigma^*$ is not c.e.?

An example from Sipser's book, Introduction to the Theory of Computation, shows that no TM can decide whether a CFG (a type 2 grammar) generates $\Sigma^*$, where $\Sigma$ is the alphabet. Call this language $CFG_{all}$.

But the above language is also not computably enumerable: there is a reduction from $\overline{A_{TM}}$ to $CFG_{all}$, where $\overline{A_{TM}}$ is the language of pairs $\langle M, w \rangle$ such that the TM $M$ does not accept the input $w$, and $\overline{A_{TM}}$ is not computably enumerable.

But could we say that whether or not a type 3 grammar generates $\Sigma^*$ is also not c.e., since type 3 grammars are a subset of context-free grammars? While it is true that it is decidable whether a finite automaton accepts $\Sigma^*$, that language is different from the language of type 3 grammars that generate $\Sigma^*$, right?

Just to clarify the source of my confusion, in summary: it is decidable whether a pushdown automaton accepts a given input, but it is not decidable, or even computably enumerable, whether a CFG generates $\Sigma^*$. Similarly, it is decidable whether a finite automaton accepts $\Sigma^*$, but it may not be decidable whether a type 3 grammar generates $\Sigma^*$. It is somehow the difference between recognizing and generating.

What is the density of a regular language $L$ over an alphabet $\Sigma$ in $\Sigma^n$?

In other words, what is the likelihood that a recognizer of a given regular language will accept a random string of length $ n$ ?

 

If there is only a single non-terminal $S$, then there are only two kinds of rules:

  1. Intermediate rules of the form $ S \to \sigma S $ .
  2. Terminating rules of the form $ S \to \sigma $ .

Such a grammar can then be rewritten in shorthand with exactly two rules, as follows:

$$\left\{\begin{aligned} &S \enspace \to \enspace \{\sigma, \tau, \dots\} S = ΤS \\ &S \enspace \to \enspace \{\sigma, \tau, \dots\} = Τ' \end{aligned}\right. \qquad (Τ, Τ' \subset \Sigma)$$

So, we simply choose one of the $ Τ$ (this is Tau) symbols at every position, except for the last one, which we choose from $ Τ’$ .

$$d = \frac{\lvert Τ\rvert^{n-1} \lvert Τ' \rvert}{\lvert\Sigma\rvert^n}$$

I will call an instance of such a language $L_1$.
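As a quick sanity check of this formula, here is a small enumeration sketch (my own illustration; the concrete $\Sigma$, $Τ$, $Τ'$ are example choices, not from the post):

```python
# Enumerate all strings of length n over Sigma and count those of the form
# "n-1 symbols from T followed by one symbol from T'", then compare the
# resulting density with |T|^(n-1) * |T'| / |Sigma|^n.
from itertools import product

Sigma = "abc"
T, Tp = set("ab"), set("bc")    # playing the roles of T and T'
n = 4

accepted = sum(
    all(c in T for c in w[:-1]) and w[-1] in Tp
    for w in product(Sigma, repeat=n)
)
d_enumerated = accepted / len(Sigma) ** n
d_formula = len(T) ** (n - 1) * len(Tp) / len(Sigma) ** n
assert d_enumerated == d_formula    # here: (2**3 * 2) / 3**4 = 16/81
```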

 

If there are two non-terminals, the palette widens:

  1. Looping rules of the form $ S \to \sigma S $ .
  2. Alternating rules of the form $ S \to \sigma A $ .
  3. Terminating rules of the form $ S \to \sigma $ .
  4. Looping rules of the form $ A \to \sigma A $ .
  5. Alternating rules of the form $ A \to \sigma S $ .
  6. Terminating rules of the form $ A \to \sigma $ .

In shorthand:

$$\left\{\begin{aligned} &S \enspace \to \enspace Τ_{SS} S \\ &S \enspace \to \enspace Τ_{SA} A \\ &S \enspace \to \enspace Τ_{S\epsilon} \\ &A \enspace \to \enspace Τ_{AA} A \\ &A \enspace \to \enspace Τ_{AS} S \\ &A \enspace \to \enspace Τ_{A\epsilon} \end{aligned}\right. \qquad (Τ_{SS}, Τ_{SA}, Τ_{S\epsilon}, Τ_{AA}, Τ_{AS}, Τ_{A\epsilon} \subset \Sigma)$$

Happily, we may deconstruct this complicated language into words of simpler $L_1$-style languages by taking only a looping rule together with either an alternating or a terminating shorthand rule. This gives us four languages, which I will suggestively denote $L_{1S}, L_{1S\epsilon}, L_{1A}, L_{1A\epsilon}$. I will also write $L^n$ for the set of sentences of $L$ that are exactly $n$ symbols long.

So, the sentences of this language (let us call it $L_2$) consist of $k$ alternating words from $L_{1S}$ and $L_{1A}$, of lengths $m_1, \dots, m_k$ with $\sum_{i=1}^{k} m_i = n$, starting with $L_{1S}^{m_1}$ and ending on $L_{1S\epsilon}^{m_k}$ if $k$ is odd, or on $L_{1A\epsilon}^{m_k}$ otherwise.

To compute the number of such sentences, we may start with the set $ \{P\}$ of integer partitions of $ n$ , then from each partition $ P = \langle m_1\dots m_k \rangle$ compute the following numbers:

  1. The number $p$ of distinct permutations $\binom{k}{Q}$ of the constituent words, where $Q = \langle q_1, \dots \rangle$ counts how many times each distinct integer occurs in $P$. For instance, for $n = 5$ and $P = \langle 2, 2, 1 \rangle$, $Q = \langle 1, 2 \rangle$ and $p = \frac{3!}{2! \times 1!} = 3$.

  2. The product $r$ of the numbers of words of the lengths $m_i \in P$, given that the first word comes from $L_{1S}$, the second from $L_{1A}$, and so on (and accounting for the last word being of a slightly different form):

    $$r = \prod_{\substack{i < k \\ i \text{ odd}}} \lvert L_{1S}^{m_i} \rvert \times \prod_{\substack{i < k \\ i \text{ even}}} \lvert L_{1A}^{m_i} \rvert \times \begin{cases} \lvert L_{1S\epsilon}^{m_k} \rvert & \text{if } k \text{ is odd} \\ \lvert L_{1A\epsilon}^{m_k} \rvert & \text{if } k \text{ is even} \end{cases}$$

If my thinking is right, the sum of $ p \times r$ over the partitions of $ n$ is the number of sentences of $ L_2$ of length $ n$ , but this is a bit difficult for me.
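For what it is worth, the count can be cross-checked by brute force (my own sketch; the concrete symbol sets are example choices, not from the post): the shorthand grammar can be read as an NFA over the states $\{S, A\}$, and the accepted strings of length $n$ counted directly.

```python
# Interpret the shorthand rules as an NFA: at each non-final symbol we may
# follow a looping or alternating rule; the final symbol must use a
# terminating rule. Count the accepted strings of length n.
from itertools import product

Sigma = "ab"
T = {("S", "S"): {"a"}, ("S", "A"): {"b"},   # looping / alternating sets
     ("A", "A"): {"b"}, ("A", "S"): {"a"}}
T_end = {"S": {"a"}, "A": {"b"}}             # terminating sets

def in_L2(word):
    states = {"S"}                           # every derivation starts from S
    for i, c in enumerate(word):
        if i == len(word) - 1:               # last symbol: terminating rule
            return any(c in T_end[q] for q in states)
        states = {q2 for q in states for q2 in "SA" if c in T[(q, q2)]}
    return False

n = 6
count = sum(in_L2("".join(w)) for w in product(Sigma, repeat=n))
print(count)   # to be compared with the sum of p * r over partitions of n
```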

 

My questions:

  • Is this the right way of thinking?
  • Can it be carried onwards to regular grammars of any complexity?
  • Is there a simpler way?
  • Is there prior art on this topic?

Prove $L=\{M \mid M \text{ is a TM and } \forall x\in \Sigma^* \text{ with } |x|>2,\ M \text{ on } x \text{ runs at most } 4|x|^2 \text{ steps}\}\notin R$

I am trying to prove that the language

$$L=\{M \mid M \text{ is a TM and } \forall x\in \Sigma^* \text{ with } |x|>2,\ M \text{ on } x \text{ runs at most } 4|x|^2 \text{ steps}\}$$

belongs to $\text{co-RE}$ but not to $R$.

Showing $\bar{L}\in \text{RE}$ is pretty straightforward, but I also want to show that $L\notin R$.
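For concreteness, here is a minimal sketch of that straightforward direction (my own stand-in model, not a real TM simulator: a "machine" is any function returning an iterator that yields once per computation step):

```python
# Recognize the complement of L by searching for a witness x on which the
# machine runs more than 4|x|^2 steps. Each simulation is capped, so the
# search only diverges when no witness exists: a recognizer, not a decider.
from itertools import product

def steps_capped(machine, x, cap):
    steps = 0
    for _ in machine(x):
        steps += 1
        if steps > cap:                       # budget exceeded, stop simulating
            break
    return steps

def recognize_L_complement(machine, Sigma="ab"):
    n = 3
    while True:                               # enumerate all x with |x| > 2
        for tup in product(Sigma, repeat=n):
            x = "".join(tup)
            budget = 4 * len(x) ** 2
            if steps_capped(machine, x, budget) > budget:
                return True                   # witness found: accept
        n += 1                                # loops forever if none exists

# Example: a machine taking |x|^3 steps violates the bound once |x| >= 5.
slow = lambda x: iter(range(len(x) ** 3))
print(recognize_L_complement(slow))           # prints True
```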

My idea was a reduction $\overline{H_{TM}} \le_m L$, but I am struggling to figure out how to do it.

Any help/guidance will be much appreciated.

If $\lambda \notin \sigma(A)$ then $\lambda \in \sigma (B)$ iff $\lambda \in \sigma_p(B)$ or $\overline{\lambda} \in \sigma_p(B^*)$

Let $A$, $B$ and $C$ be the differential operators defined by
$$D(A)=H^{2n}(\mathbb{R}), \quad D(B)=H^{2n}(0, \infty)\cap H^{n}_0(0, \infty), \quad D(C)=H^{2n}(-\infty,0)\cap H^{n}_0(-\infty,0),$$
where $H^{k}(I)$ is the Sobolev space $W^{k,2}(I)$, and
$$Af:=Tf, \quad Bg:=Tg, \quad Ch:=Th \quad \text{for } f \in D(A),\ g \in D(B),\ h \in D(C),$$
where
$$T:=\sum_{r=0}^{2n}a_r\frac{d^r}{dx^r}$$
with the $a_r$ complex constants.

I know that $ \sigma(A)=\sigma_{ess}(A)$ , i.e., $ \lambda \in \sigma(A)$ iff $ A-\lambda I$ is not a Fredholm operator (an operator is Fredholm if its range is closed and both its kernel and its cokernel are finite-dimensional).

I need to show that if $ \lambda \notin \sigma(A)$ then $ \lambda \in \sigma (B)$ iff $ \lambda$ is an eigenvalue of $ B$ or $ \overline{\lambda}$ is an eigenvalue of $ B^{*}$ . I’m reading a proof of that assertion, but I don’t understand the argument. The proof is the following:

Since $B\oplus C$ differs from $A$ only by imposing homogeneous Dirichlet boundary conditions at $0$ (I think I can say that $A$ is a finite-dimensional extension of $B\oplus C$), the difference of the resolvents is of finite rank: if $A$ is a finite-dimensional extension of $B\oplus C$, the difference of the resolvents of these operators is of finite rank (Lemma 6.32 on page 188 of Kato's book). So if $\lambda \notin \sigma(A)$, then it follows that $B-\lambda I$ and $C-\lambda I$ are Fredholm operators (why?). Thus $\lambda \in \sigma (B)$ iff $\lambda$ is an eigenvalue of $B$ or $\overline{\lambda}$ is an eigenvalue of $B^{*}$ (why?).

Thank you for reading my question. Can you help me to understand the proof?

Showing $\min_{\lambda \in \sigma(A)} |\lambda| \le \vert y^\ast \Sigma y \vert \le \max_{\lambda \in \sigma(A)} |\lambda|$ $\forall y$


Problem

Suppose $A$ is an $n \times n$ Hermitian matrix, unitarily diagonalized as follows:

$$A = U\Sigma U^\ast$$

where $\Sigma$ is the diagonal matrix of eigenvalues and $U$ is the matrix of orthonormal eigenvectors.

Show that, for every $y$ such that $\Vert y \Vert_2 = 1$,

$$\min_{\lambda \in \sigma(A)} |\lambda| \le \vert y^\ast \Sigma y \vert \le \max_{\lambda \in \sigma(A)} |\lambda|$$

where $ \sigma(A)$ denotes the set of all eigenvalues of $ A$ .


Try

Let $ u_1, \cdots, u_n$ be the columns of $ U$ . Then

$$y = \sum_{i=1}^n a_i u_i$$

with $\sum_i |a_i|^2 = 1$. Thus

$$\vert y^\ast \Sigma y \vert = \Big\vert \sum_{i=1}^n |a_i|^2 \lambda_i \Big\vert$$

but I’m not sure we can say

$$\min_{\lambda \in \sigma(A)} |\lambda| \le \Big\vert \sum_{i=1}^n |a_i|^2 \lambda_i \Big\vert \le \max_{\lambda \in \sigma(A)} |\lambda|$$
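In the meantime, here is a quick numerical experiment (my own sketch, not part of the problem) for probing the two bounds, and in particular the doubted lower one, on random examples:

```python
# Sample random unit vectors y for a random Hermitian A and check whether
# min |lambda| <= |y* Sigma y| <= max |lambda| holds empirically.
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2                 # a random Hermitian matrix
lam = np.linalg.eigvalsh(A)              # real eigenvalues of A
Sigma = np.diag(lam)

for _ in range(5):
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y /= np.linalg.norm(y)               # enforce ||y||_2 = 1
    q = abs(y.conj() @ Sigma @ y)
    print(np.abs(lam).min() <= q, q <= np.abs(lam).max())
```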

Any help will be appreciated.

Why is $\sigma(g) = \phi(g) g$ for some $\phi \in \mathrm{Hom}(G,Z)$?

Let $ G $ be a group.

Let $Z$ be the center of $G$; suppose $Z$ is not finite and $G/Z$ is a finite group.

If $\sigma \in \mathrm{Aut}(G)$, then $\sigma(Z) = Z$; hence we obtain the following two morphisms: $\mathrm{Aut}(G) \to \mathrm{Aut}(Z)$ and $\mathrm{Aut}(G) \to \mathrm{Aut}(G/Z)$.

Question:

Why is

$\sigma \in \ker(\mathrm{Aut}(G) \to \mathrm{Aut}(G/Z))$ if and only if $\sigma(g) = \phi(g) g$ for all $g \in G$, for some $\phi \in \mathrm{Hom}(G,Z)$?

Thanks in advance for your help.

Proving that equations are not derivable from an equational theory over a signature Σ

Let Σ be the signature made up of the following symbols:

  • e: a 0-argument function (constant symbol)
  • f: a 2-argument function
  • g: a 1-argument function

The variable set Var consists of x, y, z.

Let E be the set of the following equations

  • f(x, f(y, z)) = f(f(x, y), z)
  • f(x, e) = x
  • f(g(x), x) = e

How can I prove that the following equations are not derivable from E?

  • f(e, x) = x
  • f(x, g(x)) = e
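In case it helps to see the standard route concretely: by soundness of equational logic, an equation is not derivable from E if some Σ-algebra satisfies every equation in E yet falsifies it. Below is a sketch of one candidate interpretation on the carrier {0, 1} (my own illustration, not from the question):

```python
# Interpret f as the left projection and g as the constant map to e.
# The three axioms of E hold, while both target equations fail.
carrier = [0, 1]
e = 0
f = lambda x, y: x          # f(x, y) = x
g = lambda x: e             # g(x) = e

# E holds under this interpretation:
assert all(f(x, f(y, z)) == f(f(x, y), z)
           for x in carrier for y in carrier for z in carrier)
assert all(f(x, e) == x for x in carrier)
assert all(f(g(x), x) == e for x in carrier)

# ...but the target equations fail (witness x = 1):
assert f(e, 1) != 1         # so f(e, x) = x is not derivable from E
assert f(1, g(1)) != e      # so f(x, g(x)) = e is not derivable from E
```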