How to convert a recursive language grammar tree into an automaton for optimal parsing?

So I have a definition of a sort of grammar for a programming language. Some aspects are recursive (like nested function definitions), while other parts are just simple non-recursive trees.

Originally I treated this roughly like a Parsing Expression Grammar (PEG) and parsed it using recursive descent. But this is somewhat inefficient, as it sometimes builds up objects as it parses, only to find ten calls down the road that the match fails. Then it backs up and has to try the next pattern.

I’m wondering what the general solution to this is. What can be done to create the most efficient algorithm, starting from nothing but a tree-like data structure encoding the grammar?

Can you process the tree-like, recursive BNF/PEG data structure in a while loop rather than using recursive descent? Better still, can you convert the data structure into a sort of automaton which you can transition through in some simple way, parsing the string without going down a bunch of wrong paths and generating objects you only have to throw away on a mismatch? And if not, is there anything else automata can offer here?

Sorry for my loose terminology (BNF vs. PEG); I am not using it exactly. I just mean I have a grammar, which is context-sensitive and so falls outside the scope of either formalism. I am picking one or the other to simplify this question for the purpose of this site, even though I’m not 100% sure what the difference between BNF and PEG is at this point.
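To make the inefficiency concrete: one standard remedy is packrat parsing, which memoizes each (rule, position) pair so that a failed alternative is never re-tried at the same input position. A minimal sketch, with a grammar encoding of my own invention (not from any particular library):

```python
FAIL = object()  # sentinel distinct from any input position

def make_packrat(rules, start):
    """rules maps a rule name to an ordered list of alternatives
    (PEG ordered choice: first match wins); each alternative is a
    list whose items are rule names or literal strings."""
    def parse(text):
        memo = {}

        def apply(rule, pos):
            key = (rule, pos)
            if key in memo:              # never re-derive at the same spot
                return memo[key]
            for alt in rules[rule]:
                p = pos
                for item in alt:
                    if item in rules:    # nonterminal: recurse
                        p = apply(item, p)
                        if p is FAIL:
                            break
                    elif text.startswith(item, p):  # literal: consume it
                        p += len(item)
                    else:
                        p = FAIL
                        break
                if p is not FAIL:
                    memo[key] = p
                    return p
            memo[key] = FAIL
            return FAIL

        end = apply(start, 0)
        return end is not FAIL and end == len(text)
    return parse
```

For example, with a balanced-parentheses grammar S -> "(" S ")" S | (empty), `make_packrat({"S": [["(", "S", ")", "S"], []]}, "S")` parses in time linear in the input, whereas the same backtracking search without the memo table can be exponential in the worst case.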

Looking to learn more about counter automata?

I have to write a paper for my class about counter automata: their usage, their impact, and more, and I’m stuck finding good resources to learn about these strange anomalies :). I was wondering if anyone here could help me or guide me towards writing a good paper. Any suggestions for good articles or books?
PS: we have learned about pushdown automata and the basics in our class.

Computing an automaton for $L(A) / L(B)$ given ones for $A,B$

I’m trying to figure out whether an infinite language changes the answer.

Show that the following language is decidable: $$L=\{\langle A,B \rangle : A,B \text{ are DFAs, } L(B) \text{ is finite, and } L(A)/L(B)=L(0^*1^*)\}.$$

(I am talking about right division.)

We know how to check whether the language of a DFA is finite, and given two DFAs, we know how to check whether their languages are equal. The algorithms I know for these problems use the DFAs themselves, so it is necessary to have the DFAs in order to decide them.

I’m trying to figure out whether $|L(B)|=\infty$ changes the answer. To the best of my understanding, when $|L(B)|<\infty$ we can explicitly construct a DFA that accepts $L(A)/L(B)$, whereas if $|L(B)|=\infty$ all we know is that a DFA accepting $L(A)/L(B)$ exists.
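To illustrate the finite case: a state $q$ of $A$ becomes accepting in the quotient automaton exactly when some $w \in L(B)$ drives $q$ into an accepting state of $A$; the transitions are left unchanged. A small sketch (the DFA representation is my own choice):

```python
def quotient_dfa(delta, start, accepting, b_strings):
    """Right quotient L(A)/B for a finite set of strings B: keep A's
    transitions, and make a state q accepting iff some w in B drives
    q into an accepting state of A.
    delta: dict (state, symbol) -> state; missing entries = dead state."""
    states = {start} | {q for (q, _) in delta} | set(delta.values())

    def run(q, w):
        for c in w:
            q = delta.get((q, c))
            if q is None:            # fell into the dead state
                return None
        return q

    new_accepting = {q for q in states
                     if any(run(q, w) in accepting for w in b_strings)}
    return delta, start, new_accepting

def dfa_accepts(delta, start, accepting, w):
    """Run a (partial) DFA on w."""
    q = start
    for c in w:
        q = delta.get((q, c))
        if q is None:
            return False
    return q in accepting
```

For instance, if $A$ accepts $0^*1$ and $B=\{1\}$, the quotient automaton accepts $0^*$. Checking whether $L(A)/L(B)=L(0^*1^*)$ then reduces to the standard DFA equivalence test mentioned above.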

However, even if $L(B)$ is an infinite language, $L(A)/L(B)$ is still regular, so some DFA accepts it; hence I can certainly conclude that there exists a Turing machine that decides the language $L$. Right?

When can a non-deterministic finite automaton with epsilon transitions be considered to be in an accepting state?

A non-deterministic finite automaton is considered to have halted either when the whole input string has been consumed or when it reaches a state where no available transition (if any) matches the current character being read.

If the machine halts in an accepting state and, at the same time, the whole input has been consumed, the input string is considered accepted.

Now, when we introduce $\epsilon$ transitions, the machine doesn’t necessarily halt when the whole input string has been consumed, for it is possible that there are still $\epsilon$ transitions available.

Suppose we have an NFA that is in an accepting state and the whole input has been consumed, but there are still $\epsilon$ transitions available in this state. Can we consider the input string accepted, or do we need to "follow the trail" of $\epsilon$ transitions until we reach a state where no other transition is available?
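For reference, the standard subset-style simulation sidesteps the issue: at every step it works with the whole $\epsilon$-closure of the reached state set, and acceptance is tested against that closure, so a configuration accepts if any state reachable by $\epsilon$-moves alone is accepting. A minimal sketch (the data layout is my own):

```python
def epsilon_closure(states, eps):
    """All states reachable from `states` by zero or more eps-moves.
    eps: dict state -> set of epsilon successors."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def nfa_accepts(delta, eps, start, accepting, w):
    """delta: dict (state, symbol) -> set of successor states.
    Accept iff the eps-closure of the final state set meets the
    accepting set -- the 'current' state itself need not be accepting."""
    current = epsilon_closure({start}, eps)
    for c in w:
        moved = set()
        for q in current:
            moved |= delta.get((q, c), set())
        current = epsilon_closure(moved, eps)
    return bool(current & accepting)
```

Under this (standard) definition, being in an accepting state with the input consumed already accepts, and so does having an $\epsilon$-path from the current state into an accepting state; acceptance is existential over all runs.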

Good cellular automaton for evolving life?

How has life in our universe arisen? I think basically there was a box of sand, and it was shaken until robust replicating structures emerged.

One could try to do the same thing with Conway’s Game of Life, but most of its structures are extremely fragile. A different automaton might be better suited, maybe one where objects can emerge, travel, and persist more easily. Suggestions?

I would want to randomly initialize a large starting state and run it (with noise injection) for a long time and see what kinds of structures emerge.
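For concreteness, here is the kind of experiment loop I have in mind, using plain Game of Life as a placeholder rule (torus size and noise rate are arbitrary choices of mine; the rule itself is what I’d want to swap out):

```python
import random

def step(alive, size=64, noise=0.0, rng=random):
    """One Game of Life step on a size-by-size torus; afterwards each
    cell is independently flipped with probability `noise`."""
    counts = {}
    for (x, y) in alive:                      # count live neighbours
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    n = ((x + dx) % size, (y + dy) % size)
                    counts[n] = counts.get(n, 0) + 1
    nxt = {c for c, k in counts.items()
           if k == 3 or (k == 2 and c in alive)}
    if noise:
        for x in range(size):                 # inject noise
            for y in range(size):
                if rng.random() < noise:
                    nxt ^= {(x, y)}           # flip this one cell
    return nxt
```

A run would then look like: randomly fill the grid (`alive = {(x, y) for x in range(64) for y in range(64) if random.random() < 0.3}`) and iterate `step(alive, noise=1e-4)` for a long time, inspecting what persists.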

Is this push-down automaton non-deterministic, as JFLAP states?

There is a tool called JFLAP, which, among other things, can analyze push-down automata, and find non-determinism.

In this example it is detecting non-determinism in state q0: Non-deterministic push-down automaton

The first symbol in the transition represents the symbol read as input; the second symbol represents the symbol extracted from the stack; and the third symbol is the symbol pushed to the stack. λ represents the empty symbol, so this is an empty transition without checking the stack or pushing anything to it.

I am surprised, as that state seems to fulfill the conditions for determinism in push-down automata (if only because it contains a single transition!). I would expect the next state to be q1 under any circumstances.

In comparison, JFLAP doesn’t find any non-determinism here: Deterministic push-down automaton

Mind you, the transition is the same; the only difference is that this one pushes something onto the stack. Am I missing something, or is JFLAP wrong in the first instance?
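For reference, this is the pairwise criterion for determinism I had in mind, expressed as code (a common textbook formulation; JFLAP’s internal definition may well be stricter, which could explain the discrepancy). Treating λ as a wildcard, two transitions out of the same state conflict if both their input symbols and their stack-pop symbols are compatible:

```python
LAMBDA = ""  # the empty symbol

def pda_conflicts(transitions):
    """transitions: list of (state, input_sym, pop_sym, push_sym, target).
    LAMBDA acts as a wildcard: a lambda-input move can fire whenever a
    symbol-reading move can, and likewise for the stack. Returns the
    pairs of transitions that can apply to the same configuration,
    i.e. sources of non-determinism under this pairwise criterion."""
    def compatible(a, b):
        return a == b or a == LAMBDA or b == LAMBDA

    out = []
    for i, t in enumerate(transitions):
        for u in transitions[i + 1:]:
            if (t[0] == u[0]                    # same source state
                    and compatible(t[1], u[1])  # inputs overlap
                    and compatible(t[2], u[2])):  # stack pops overlap
                out.append((t, u))
    return out
```

By this criterion a state with a single outgoing transition is always deterministic, which is exactly why the JFLAP result surprised me.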

How to determine whether the language produced by a grammar is recognizable by a deterministic pushdown automaton (DPDA)?

I have the following grammar: S -> aSa | bSb | lambda.

And I have to figure out whether the language produced by this grammar is recognizable by a DPDA. I can’t find any theorems about it. Obviously, it’s a context-free language and can be recognized by a PDA, but what about a DPDA?
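To pin down the language first: as far as I can tell, the grammar generates exactly the even-length palindromes over {a, b}, which I checked mechanically for small lengths (the helper names below are my own):

```python
def is_even_palindrome(w):
    """Claimed characterization of the language, verified below."""
    return len(w) % 2 == 0 and set(w) <= {"a", "b"} and w == w[::-1]

def generated(max_len):
    """All strings of length <= max_len derivable from
    S -> aSa | bSb | lambda, built outside-in from the empty string."""
    out, frontier = {""}, {""}
    while frontier:
        frontier = {c + w + c for w in frontier for c in "ab"
                    if len(w) + 2 <= max_len} - out
        out |= frontier
    return out
```

So the question is really whether the even-length palindromes are DPDA-recognizable, i.e. whether the midpoint can be found without guessing.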

Is there a difference between the equivalent automaton of a grammar and an automaton which accepts the language produced by the grammar?

I have been assigned some homework at uni related to push-down automata (evaluated via final state, not empty stack) and context-free grammars. I have noticed that questions about generating push-down automata from context-free grammars are not always phrased the same way, with mostly two variations:

  1. For the given grammar G, define a push-down automaton M such that L(M) = L(G)
  2. For the given grammar G, define a push-down automaton M equivalent to grammar G.

In Introduction to Automata Theory, Languages, and Computation by John Hopcroft, section 6.3.1 defines a method for obtaining the equivalent push-down automaton for a context-free grammar. Since the book is widely available and very well known, I will not copy the method here.
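The flavor of that construction, as I understand it: a single-state PDA (accepting by empty stack in this standard version) that expands the topmost nonterminal on the stack by one of its productions and matches terminals against the input. A runnable sketch, simulating the PDA’s nondeterminism by breadth-first search (the step bound is my own addition so the search always terminates):

```python
from collections import deque

def cfg_pda_accepts(productions, start, w, max_steps=10_000):
    """Simulate the standard single-state CFG->PDA construction,
    accepting by empty stack: a nonterminal on top of the stack is
    popped and replaced by the right-hand side of one of its
    productions; a terminal on top must match the next input symbol.
    productions: dict nonterminal -> list of right-hand sides
    (each a list of symbols)."""
    queue = deque([(0, (start,))])        # (input position, stack tuple)
    seen = set()
    steps = 0
    while queue and steps < max_steps:
        steps += 1
        pos, stack = queue.popleft()
        if (pos, stack) in seen:
            continue
        seen.add((pos, stack))
        if not stack:                      # empty stack: accept iff done
            if pos == len(w):
                return True
            continue
        top, rest = stack[0], stack[1:]
        if top in productions:             # nonterminal: expand
            for rhs in productions[top]:
                queue.append((pos, tuple(rhs) + rest))
        elif pos < len(w) and w[pos] == top:  # terminal: match
            queue.append((pos + 1, rest))
    return False
```

Running this on, say, S -> aSa | bSb | lambda accepts exactly what the grammar derives, which is the sense in which the constructed automaton is "equivalent" to the grammar.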

Of course, it is also possible to figure out the language produced by the grammar and work from there to define an automaton which accepts that language (this would be similar to the first phrasing).

These two methods would produce equivalent automata (i.e., automata that accept the same language), but I am not sure whether there is something that tells them apart.

Is there a theoretical difference between “the push-down automaton equivalent to a context-free grammar” and “the push-down automaton which accepts the language defined by a context-free grammar”?