determining decidability with the intersection of context-free languages

I am trying to solve this problem: given context-free languages $ A$ , $ B$ , and $ C$ , determine whether the language $ (A\cap B)\cup (B\cap C)$ is empty. Is this problem decidable?

I know that context-free languages are not closed under intersection, so we cannot assume that $ A\cap B$ is also context-free. If it were context-free, I would know how to prove decidability. Since we cannot say anything definite about $ A\cap B$ , is there a way to prove whether or not this problem is decidable?
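For a single context-free grammar, emptiness is decidable by the standard "productive nonterminals" marking algorithm, which is presumably the proof the question alludes to for the case where the intersection is known to be context-free. A minimal sketch (the grammar encoding is my own assumption):

```python
# Decide emptiness of a CFG by marking "productive" nonterminals:
# a nonterminal is productive if it derives some terminal string.
# Grammar encoding (assumed): dict mapping each nonterminal to a list
# of right-hand sides, each a list of symbols; any symbol that is not
# a key of the dict is treated as a terminal.

def is_empty(grammar, start):
    productive = set()
    changed = True
    while changed:
        changed = False
        for nt, rhss in grammar.items():
            if nt in productive:
                continue
            for rhs in rhss:
                # A rule makes nt productive if every symbol on its
                # right-hand side is a terminal or already productive.
                if all(sym not in grammar or sym in productive for sym in rhs):
                    productive.add(nt)
                    changed = True
                    break
    # L(G) is empty iff the start symbol is not productive.
    return start not in productive

# S -> aSb | ab generates a nonempty language; A -> A generates nothing.
print(is_empty({"S": [["a", "S", "b"], ["a", "b"]]}, "S"))  # False
print(is_empty({"A": [["A"]]}, "A"))  # True
```

The marking loop runs at most once per nonterminal, so the whole check is polynomial in the grammar size.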

GATE CSE 2018: Which of the following languages are context-free?


A] $ \{a^m b^n c^p d^q \mid m+p = n+q\}$ , where $ m, n, p, q \geq 0$

B] $ \{a^m b^n c^p d^q \mid m = n \text{ and } p = q\}$ , where $ m, n, p, q \geq 0$

C] $ \{a^m b^n c^p d^q \mid m = n = p \text{ and } p \neq q\}$ , where $ m, n, p, q \geq 0$

D] $ \{a^m b^n c^p d^q \mid mn = p+q\}$ , where $ m, n, p, q \geq 0$

Rather than formal proofs, answers that point out telltale signs of a language being a CFL or not would be most appreciated, since this question is taken from a competitive exam. Formal proofs are welcome too.

Here is my approach:

A] I don't have any idea how to construct a PDA for this language, so it is probably not a CFL. Also, by the pumping lemma, splitting the word as $ uvwxy$ , for any choice of $ vwx$ (lying inside $ a^k b^k$ and similar combinations) the equality won't hold after pumping. Thus not a CFL.

B] It is a CFL: we can push a's and pop on b's, and do the same for c's and d's.

C] Not a CFL, as we would need two stacks: one to keep track of $ |a| = |b|$ and a second for $ |b| = |c|$ .

D] Not a CFL, because we can't construct a PDA: such a PDA would need the ability to move back and forth on the input string (to keep track of $ m \cdot n$ ), which is not possible.
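For option B, the pushdown idea sketched above corresponds to a grammar like $ S \to AC$ , $ A \to aAb \mid \varepsilon$ , $ C \to cCd \mid \varepsilon$ . As a sanity check, the membership condition itself can be tested directly (this checks the counting condition, it is not a PDA):

```python
import re

# Membership test for {a^m b^n c^p d^q | m = n and p = q} (option B):
# a's must be balanced by b's, and c's balanced by d's, mirroring the
# "push a's, pop on b's; push c's, pop on d's" PDA idea.
def in_B(w):
    m = re.fullmatch(r"(a*)(b*)(c*)(d*)", w)
    if not m:
        return False  # letters out of order (e.g. "ba") never qualify
    a, b, c, d = (len(g) for g in m.groups())
    return a == b and c == d

print(in_B("aabbcd"))  # True  (m = n = 2, p = q = 1)
print(in_B("aabcd"))   # False (m = 2, n = 1)
```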

examples for languages of natural numbers

I need to find examples of languages $ L_i$ , $ i\in\{1,2,3\}$ , of natural numbers such that:

  1. $ L_1\in$ $ RE \backslash R$

  2. $ L_2\in$ $ coRE\backslash R$

  3. $ L_3\in$ $ \overline{ R \cup RE}$

My idea was, in each case, to take a language in the desired class and find some sort of function from the language to the natural numbers. But I do not know whether this is the right way to approach the problem, or how exactly to find such a function.
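On the "function to the natural numbers" part: if the example languages are first found over binary strings, a standard bijection between $ \{0,1\}^*$ and $ \mathbb{N}$ transfers them to languages of naturals while preserving membership in RE, coRE, and R, since the bijection is computable in both directions. A sketch of one such bijection (length-lexicographic order):

```python
# A standard computable bijection between {0,1}* and N: enumerate
# strings in length-lexicographic order ("", "0", "1", "00", ...),
# which is the same as reading "1" + s as a binary numeral minus 1.

def encode(s):            # {0,1}* -> N
    return int("1" + s, 2) - 1

def decode(n):            # N -> {0,1}*
    return bin(n + 1)[3:]  # strip "0b1" prefix

print([decode(n) for n in range(7)])  # ['', '0', '1', '00', '01', '10', '11']
print(encode("10"))                   # 5
```

Given a language $ L \subseteq \{0,1\}^*$ , the image $ \{\,\text{encode}(s) \mid s \in L\,\}$ is then the corresponding language of naturals.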

MOST COMPLETE GSA Keyword Scraping List • 10+ Major Languages • CREATE Your Own GSA Site Lists

Introductory Video


RESULTS FROM USERS


Review from Matt Borden (Scrapebox Support)
Wanted to post back and say that I have found great value in the list and am just now getting started. I have found myself using the list regularly; my only issue is that sometimes I have to stop and focus, because this list provides so many options that I get overwhelmed, lol.

I have mostly used English, but I have merged in some of the other languages in tests. I love the idea of so many languages; for me, scraping with such diversity is huge. I'm testing Scrapebox 2.0 at the moment, and it's 64-bit (or it has a 64-bit version), so I have been able to merge more footprints with the large keyword lists and come up with millions of combinations. I plan on taking some of my good footprints and just working through the various languages.

It's been weeks now and I am still only tapping less than 5% of the potential of this list. It's good stuff.

VIDEO RUN-THROUGH BY MATT HERE


Video Review from Devin s4ntos (GSA Support)

Using this list to build up my global site lists with SER. Loaded up the included English keyword list and some article footprints. So far I've scraped 7+ million URLs and haven't even gotten through 1% yet.


Looks like I have enough keywords that I should be able to scrape for the next year or so.
Great list.

Review from kvmcable (BHW Reputable VIP)

I was fortunate enough to receive a review copy of this list, and to be honest I thought, yeah yeah yeah, another keyword list, big deal. Well, I downloaded it today, and at first glance I saw the download was 73 MB and immediately thought WTF. I have a lot of keyword lists, but nothing more than a few MBs. What kind of list could possibly be 73 MB in size?

So, still a bit skeptical, I downloaded and unzipped the list; yes, 73 MB is the compressed size. I unzipped the file and was impressed once again: after extraction, the keyword lists totaled 312 MB!

Holy crap, I couldn't wait to check the lists. I have a lot of tools that use keyword lists, but currently I'm on a Tumblr scraping kick, so I grabbed a 55K list and dropped it into Gscraper to see what it dug up in unique Tumblr blogs. After about an hour I had the results: 208,807 unique Tumblr blogs.


I went about my usual routine and did an HTTP test to see how many dead blogs were in my 208K list. It matched my usual 1-2% expectation, with 2,479 dead Tumblr blogs scraped from this small sample of FuryKyle's keyword lists.


Now came the real test: how many of my dead Tumblr blogs had valid PR still available? I was quite impressed to see this small (55K) keyword list discover 523 dead Tumblr blogs with PR1-4 available. All in about an hour's worth of work!


I'm really impressed, and I haven't even put a dent in the lists containing over 1 billion keywords (there really are that many). I'm now considering another license of Gscraper to work through these keyword lists. You could scrape 24/7 for weeks and not run out of keywords. I'm truly impressed and can't wait to toss some of these keywords into GSA.

For $20 this is a no-brainer for guys using tools like Gscraper and GSA for scraping and back-linking. The only thing that surprises me is why this isn't priced much higher. Honestly, it's that good!

Relations between deciding languages and computing functions in advice machines

I'm trying to understand the implications of translating between functions and languages for P/Poly complexity. I'm not sure whether the following all makes sense; I'm giving it my best shot given my current understanding of the concepts. (I have a project in which I want to discuss Hava Siegelmann's analog recurrent neural nets, which recognize languages in P/Poly, but I'd like to understand, and be able to explain to others, the implications this has for computing functions.)

Suppose I want to use an advice Turing machine $ T_1$ to compute a function from binary strings to binary strings, $ f: \{0,1\}^* \rightarrow \{0,1\}^*$ . $ T_1$ will be a machine that can compute $ f$ in polynomial time given advice whose size is polynomial in the length of the argument $ s$ to $ f$ , i.e. $ f$ is in P/Poly. (Can I say this? I have seen P/Poly defined only for languages, but not for functions with arbitrary (natural number) values.)

Next suppose I want to treat $ f$ as defining a language $ L(f)$ , by encoding its arguments and corresponding values into strings, where $ L(f) = \{\langle s,f(s)\rangle\}$ and $ \langle\cdot,\cdot\rangle$ encodes $ s$ and $ f(s)$ into a single string.

For an advice machine $ T_2$ that decides this language, the inputs are of length $ n = |\langle s,f(s)\rangle|$ , so the relevant advice for such an input will be the advice for $ n$ .


Question 1: If $ T_1$ can return the result $ f(s)$ in polynomial time, must there be a machine $ T_2$ that decides $ \{\langle s,f(s)\rangle\}$ in polynomial time? I think the answer is yes: $ T_2$ can extract $ s$ from $ \langle s,f(s)\rangle$ , use $ T_1$ to calculate $ f(s)$ , encode $ s$ together with $ f(s)$ , and compare the result with the original encoded string. Is that correct?


Question 2 (my real question): If we are given a machine $ T_2$ that can decide $ \{\langle s,f(s)\rangle\}$ in polynomial time, must there be a way to embed $ T_2$ in a machine $ T_3$ so that $ T_3$ can return $ f(s)$ in polynomial time?

I suppose that if $ T_2$ must include $ T_1$ , then the answer is of course yes: $ T_3$ just uses the capabilities of $ T_1$ embedded in $ T_2$ to calculate $ f(s)$ . But what if $ T_2$ decides $ L(f)$ some other way? Is that possible?

If we are given $ s$ , we know its length, but not the length of $ f(s)$ . So in order to use $ T_2$ to find $ f(s)$ , it seems there must be a sequential search through all strings $ s_f = \langle s,r\rangle$ for arbitrary $ r$ . (I'm assuming that $ f(s)$ is unbounded, but that $ f$ has a value for every $ s$ , so the search can take an arbitrary length of time, but $ f(s)$ will ultimately be found.)
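That sequential search can be made concrete. In the sketch below, the choice of $ f$ (string reversal) and a decider built directly from it are toy assumptions purely for illustration; the point is the enumeration strategy:

```python
from itertools import count, product

# Toy setup (assumed for illustration only): f reverses its input, and
# T2 is treated as a black-box decider for L(f) = { <s, f(s)> }.
def f(s):
    return s[::-1]

def T2(s, r):
    # Decides whether the pair <s, r> belongs to L(f).
    return r == f(s)

def compute_f_via_T2(s):
    # Enumerate candidate results r in length-lexicographic order and
    # ask T2 about each pair <s, r>. Since f is total, the search
    # terminates, but its running time grows with |f(s)|, not |s|.
    for length in count(0):
        for bits in product("01", repeat=length):
            r = "".join(bits)
            if T2(s, r):
                return r

print(compute_f_via_T2("011"))  # 110
```

Each candidate costs one (polynomial-time) call to the decider, but the number of candidates tried before reaching $ f(s)$ is exponential in $ |f(s)|$ , which is exactly why the search does not stay polynomial in $ |s|$ .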

One thought I have is that the search for a string $ s_f$ that encodes $ s$ with $ f(s)$ has time complexity that depends on the length of the result $ f(s)$ (plus $ |s|$ , but that is swamped when $ f(s)$ is long).

So now the time complexity is not tied to the length of the input, but only to the length of $ f(s)$ . Maybe $ L(f)$ is in P/Poly if $ f$ is in P? (Still confused here.)

Thinking about these questions in terms of Boolean circuits has not helped.

Context-free languages invariant under "shuffling" the right-hand side

Given a grammar for a context-free language $ L$ , we can augment it by "shuffling" the right-hand side of each production, e.g.:

$ A \to BCD$ is expanded to $ A \to BCD \;|\; BDC \;|\; CBD \;|\; CDB \;|\; DBC \;|\; DCB$

It may happen that the resulting language $ L'$ is equal to $ L$ .

For example:

    Source                  Shuffled
    S -> XA | YB            S -> XA | AX | YB | BY
    A -> YS | SY            A -> YS | SY
    B -> XS | SX            B -> XS | SX
    X -> 1                  X -> 1
    Y -> 0                  Y -> 0
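The augmentation step itself is mechanical; a sketch of the shuffling transformation (the grammar encoding is an assumption), adding every permutation of each right-hand side:

```python
from itertools import permutations

# Augment a CFG by adding every permutation ("shuffle") of each
# production's right-hand side. Grammar encoding (assumed):
# dict mapping nonterminal -> set of right-hand sides (symbol tuples).
def shuffle_grammar(grammar):
    return {
        nt: {perm for rhs in rhss for perm in permutations(rhs)}
        for nt, rhss in grammar.items()
    }

g = {"A": {("B", "C", "D")}}
print(sorted(shuffle_grammar(g)["A"]))  # all 6 orderings of B, C, D
```

Note that rules whose right-hand sides already come in both orders (like A -> YS | SY above) are left unchanged as sets, which is why shuffling can leave the language fixed.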

Is there a name for this class of CF languages ($ L = \text{shuffled}(L)$ )?

Confusion about definition of languages accepted by Turing Machine, very basic question

I’m studying for an upcoming exam and my book gives the following definition:

Let $ M$ be a Turing machine, then the accepted language $ T(M)$ of $ M$ is defined as $ T(M) = \{x \in \Sigma^* \mid z_0 x \vdash^* \alpha z \beta; \alpha, \beta \in \Gamma^*; z \in E\}$ .

As a side note, $ \vdash$ denotes the transition from one configuration of the TM to the next, and the $ ^*$ denotes an arbitrary number of applications of this relation.

What I'm confused about is that, under this definition of acceptance, I only have to enter an end state once: even if I leave it afterwards, the word is accepted, and I could also loop in that end state. In pushdown or finite automata we do not have this problem, as we move through the word sequentially from beginning to end, especially in pushdown automata, where the stack is separate from the input word.

Now, in most other definitions I have read, in addition to ending up in an end state, the Turing machine must also halt, meaning it must end in a state that has no outgoing transitions. I'm not sure what this would mean for deterministic Turing machines, though, as they must have transitions for all configurations of the machine.

To wrap it up:

Question 1: Is halting required? Is it a useful property for accepting languages or is there a reason the definition was given as is?

Question 2: How would you define "halting" for deterministic Turing machines?

Is checking if regular languages are equivalent decidable?

Is this problem algorithmically decidable?

$ L_1$ and $ L_2$ are both regular languages over alphabet $ \Sigma$ . Does $ L_1 = L_2$ ?

I think it is decidable, because you can write regular expressions for each language and check whether they are the same. But I'm not sure how to prove it, since you usually prove that something is decidable by exhibiting a Turing machine.
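One standard decision procedure avoids comparing regular expressions syntactically (which is not sound, since different expressions can denote the same language): convert both languages to complete DFAs and run the product construction, checking that no reachable product state accepts in one machine but not the other. A sketch over a transition-table encoding (the encoding itself is an assumption):

```python
from collections import deque

# Equivalence of two complete DFAs via the product construction:
# L1 = L2 iff no reachable product state has exactly one side accepting.
# DFA encoding (assumed): (start, accepting_set, delta), where
# delta[(state, symbol)] -> state, over a shared alphabet.
def equivalent(d1, d2, alphabet):
    (s1, acc1, t1), (s2, acc2, t2) = d1, d2
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if (p in acc1) != (q in acc2):
            return False  # some word reaching (p, q) separates L1 and L2
        for a in alphabet:
            nxt = (t1[(p, a)], t2[(q, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two DFAs for "even number of 1s", with different state names.
d1 = (0, {0}, {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0})
d2 = ("e", {"e"}, {("e", "0"): "e", ("e", "1"): "o",
                   ("o", "0"): "o", ("o", "1"): "e"})
print(equivalent(d1, d2, "01"))  # True
```

Since the product automaton has finitely many states, the BFS terminates, which is exactly what makes the problem decidable.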

Why is the Halting problem decidable for Goto languages limited in the highest value of constants and variables?

This is taken from an old exam at my university that I am using to prepare for the upcoming exam:

Given is a language $ \text{Goto}_{17}^c \subseteq \text{Goto}$ . This language contains exactly those $ \text{Goto}$ programs in which no constant ever exceeds $ 17$ and no variable ever exceeds $ c$ .

$ \text{Goto}$ here describes the set of all programs written in the $ \text{Goto}$ language, built from the following elements, with variables $ x_i \in \mathbb{N}$ and constants $ c \in \mathbb{N}$ :

Assignment: $ x_i := c$ , $ x_i := x_i \pm c$
Conditional jump: if (comparison) goto $ L_i$
Halt command: halt

I am currently struggling with the formalization of a proof, but here is what I have so far, phrased very casually: any given program in this set is finite. A finite program contains a finite number of variables and a finite number of states (lines to be in), and each variable is bounded by $ c$ . As such, there is only a finite number of configurations this process can be in. If we let the program run, we can keep a list of all configurations we have seen, that is, the combination of all variable values and the current line of the program. Eventually, one of two things must happen:

  1. The program halts. In this case, we return YES and have decided that it halts.
  2. The program reaches a configuration that has been recorded before. As the language is deterministic, this means we have gone through a full loop, which will now repeat exactly, so the program never halts.

No other case can exist, as that would mean the program runs forever on finite code without ever repeating a configuration: after every step of this infinite run there would be a new configuration, which would mean there are infinitely many configurations, a contradiction.

Is this correct? Furthermore, how would a more formal proof look if it is? If not, what would a correct proof look like?
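The configuration-counting argument can be turned into an explicit decider for a toy version of such a language. The instruction encoding below, and the saturation of variables at the cap, are assumptions made just for the sketch:

```python
# Decide halting for a toy bounded goto language: the variables are
# capped at `cap`, so the configuration space (pc, variable values) is
# finite, and we can answer NO as soon as a configuration repeats.
def halts(program, num_vars, cap):
    # program: list of instructions (encoding assumed):
    #   ("set", i, k)            x_i := k
    #   ("add", i, k)            x_i := x_i + k   (saturating at cap)
    #   ("ifgoto", i, k, L)      if x_i == k: goto L
    #   ("halt",)
    state = (0, (0,) * num_vars)          # (program counter, variables)
    seen = set()
    while state not in seen:
        seen.add(state)
        pc, xs = state
        instr = program[pc]
        if instr[0] == "halt":
            return True
        if instr[0] == "set":
            _, i, k = instr
            xs = xs[:i] + (min(k, cap),) + xs[i + 1:]
            pc += 1
        elif instr[0] == "add":
            _, i, k = instr
            xs = xs[:i] + (min(xs[i] + k, cap),) + xs[i + 1:]
            pc += 1
        elif instr[0] == "ifgoto":
            _, i, k, target = instr
            pc = target if xs[i] == k else pc + 1
        state = (pc, xs)
    return False  # repeated configuration: the program loops forever

looping = [("ifgoto", 0, 0, 0)]           # line 0 jumps to itself forever
counting = [("add", 0, 1),                # count x_0 up to 3, then halt
            ("ifgoto", 0, 3, 4),
            ("ifgoto", 0, 1, 0),
            ("ifgoto", 0, 2, 0),
            ("halt",)]
print(halts(looping, 1, 17))   # False
print(halts(counting, 1, 17))  # True
```

Because `seen` can contain at most (number of lines) × (cap + 1)^(number of variables) configurations, the while loop always terminates, which is the formal core of the argument.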

Effects of limitations of the variables and constants of Goto languages

I am currently looking at a set of problems which all deal with attributes of a certain subset of Goto programs which have certain limitations. They are as follows:

  1. $ \text{Goto}_{17}$ describes the set of all Goto-programs in which no constant is greater than $ 17$ . Show that every Goto-program can be emulated by a $ \text{Goto}_{17}$ -program.
  2. $ \text{Goto}_{17}^{c}$ describes the set of all Goto-programs in which variables can be no higher than $ c$ and constants no higher than $ 17$ . Why is the Halting problem decidable for this set of programs?

Following are my thoughts so far on these problems:

  1. It is rather easy to see that any given program can be trivially converted into one not using constants higher than 17, by repeating any operation that would exceed this bound as often as necessary to achieve the same result. Even comparisons can work, by using a dummy variable to store the variable's value, comparing against 17, reducing the variable, and so on, until we have compared it against what we want to compare it to. And there will always be a variable easily chosen for this if we just spread out the variables so that in our new $ \text{Goto}_{17}$ -program only every second variable is used for normal calculation. This way we can always use a variable's "dummy-partner" variable for calculations like this without losing its value. This all feels very unpolished, though, and I struggle with formulating it in a way that makes it an actual proof. Am I on the right track, and how can this be explained better if yes? How at all, if no?
  2. In this case I am even less confident in my basic idea. We almost have a situation in which I am confident to say that we can just go through every state of the program that is even theoretically possible and decide whether it can halt in that state. But how do I know whether a state is also practically possible, or in other words, whether the program actually ever reaches this combination of variable values and position in the code? We can't just simulate the program, as infinite loops are still possible here, in contrast to simpler languages like loop-languages. Why can the Halting problem be solved in this case? What is the method to achieve this? Can we perhaps guarantee that, with a finite number of variables (which is given, as the code must be finite), each with a finite number of possible values, we must at some point reach a situation where we either halt or where our state exactly matches a prior state?
  2. In this case I am even less confident in my basic idea. We almost have a situation in which I am confident to say that we can just go through every state of the program that is even theoretically possible and decide whether it can hold in that state. But how do I know of a state whether it is also practically possible, or in other words, does the program actually ever reach this constellation of variable values and position in code? We can’t just simulate the program, as infinite loops are still possible here in contrast to more simple languages like loop-languages. Why can the Halting-Problem be solved in this case? What is the method to achieve this? Can we maybe guarantee that on a set of finite amounts of variables (which is given, as the code must be finite) we must at some point reach a situation where we either halt or where our state exactly matches a prior state, as all these variables have a finite amount of states they can be in?