# Is there any good method to find if a grammar is optimal for a problem?

I’ve been thinking about grammatical evolution problems and how the grammar influences the algorithm’s performance. It struck me how big an impact the grammar you use has on the time the algorithm takes to reach an optimal solution.

The simplest example would be a problem that doesn’t involve trigonometric operations. If you’re trying to find `f(x) = 3x - 1/2`, including sines, tangents or square roots in your grammar will almost certainly slow your algorithm down, as the population’s complexity will grow. Other, less evident grammar simplifications come from trigonometric identities:

`tan(x) = sin(x) / cos(x)`
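To make the trade-off concrete, here is a minimal sketch of the standard grammatical-evolution genotype-to-phenotype mapping (leftmost nonterminal first, production chosen by `codon % number_of_productions`). The grammar, symbols and function names are hypothetical, just enough to show how adding or removing `tan` changes what a genotype can express:

```python
# Hypothetical toy grammar for illustration; not any specific GE library.
GRAMMAR = {
    "<expr>": [
        ["<expr>", "<op>", "<expr>"],
        ["<func>", "(", "<expr>", ")"],
        ["x"], ["1"], ["2"], ["3"],
    ],
    "<op>": [["+"], ["-"], ["*"], ["/"]],
    "<func>": [["sin"], ["cos"], ["tan"]],  # drop "tan" to shrink the grammar
}

def map_genotype(codons, grammar=GRAMMAR, start="<expr>", max_expansions=50):
    """Map a list of integer codons to an expression string.

    Repeatedly expands the leftmost nonterminal, choosing the production
    with codon % number_of_productions; codons wrap around if exhausted.
    """
    seq = [start]
    i = 0
    while any(s in grammar for s in seq):
        if i >= max_expansions:
            return None  # mapping did not terminate
        idx = next(j for j, s in enumerate(seq) if s in grammar)
        prods = grammar[seq[idx]]
        seq[idx:idx + 1] = prods[codons[i % len(codons)] % len(prods)]
        i += 1
    return "".join(seq)
```

With `tan` in the grammar, a three-codon genotype like `[1, 2, 2]` maps to `tan(x)`; without it, the same phenotype has to be reached indirectly as `sin(x)/cos(x)`, which needs a longer genotype and therefore a harder search target.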

Regarding this last example, I don’t know how to determine the impact of including `tan(x)` among the grammar rules used to produce valid solutions. In other words, how can I tell whether adding `tan(x)` performs better than leaving it out and forcing the evolution to combine two or more operators and terminals to express that operation, at the cost of making the grammar ambiguous?
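One rough, brute-force way to quantify that trade-off is to count the fewest production choices (one codon each in GE) a leftmost derivation of the target needs under each grammar variant. This is a hypothetical sketch over a toy grammar, not an established metric:

```python
from collections import deque

def min_choices(target, funcs):
    """Fewest production choices that derive `target` by leftmost
    derivation in a toy expression grammar (BFS over sentential forms)."""
    grammar = {
        "<expr>": [("<expr>", "<op>", "<expr>"),
                   ("<func>", "(", "<expr>", ")"),
                   ("x",)],
        "<op>": [("+",), ("-",), ("*",), ("/",)],
        "<func>": [(f,) for f in funcs],  # pass funcs with or without "tan"
    }
    queue = deque([(("<expr>",), 0)])
    seen = {("<expr>",)}
    while queue:
        form, depth = queue.popleft()
        idx = next((j for j, t in enumerate(form) if t in grammar), None)
        if idx is None:                       # all terminals: candidate string
            if "".join(form) == target:
                return depth
            continue
        prefix = "".join(form[:idx])          # terminals left of idx are fixed
        if not target.startswith(prefix) or len(form) > len(target):
            continue                          # prune dead branches
        for prod in grammar[form[idx]]:
            new = form[:idx] + prod + form[idx + 1:]
            if new not in seen:
                seen.add(new)
                queue.append((new, depth + 1))
    return None

print(min_choices("tan(x)", ("sin", "cos", "tan")))  # 3 choices with tan
print(min_choices("sin(x)/cos(x)", ("sin", "cos")))  # 8 choices without it
```

Under these assumptions, the rule `tan` shrinks the minimal derivation from 8 choices to 3, which suggests a measurably easier search; whether that outweighs the enlarged per-step choice set is exactly the question.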

So these are my two questions:

1. Is there any way of knowing if a grammar is optimal for finding a solution?
2. Which evolutionary algorithm or machine learning method would you use to find optimal or sub-optimal grammars? (I’m almost a layman in this discipline, so some explanation is welcome.)

Thanks 