In computer science, a formal grammar is a way to describe a formal language, i.e., a set of finite-length strings over a certain finite alphabet. Such grammars are called "formal" by analogy with the concept of grammar for human languages.
The basic idea behind these grammars is that we generate strings by beginning with a special start symbol and then applying rules that indicate how certain combinations of symbols may be replaced with other combinations of symbols. For example, assume the alphabet consists of 'a' and 'b', the start symbol is 'S', and we have the following rules:

1. S → aSb
2. S → ba

We then start with S and choose a rule to apply to it. If we choose rule 1, we obtain the string aSb. Choosing rule 1 again replaces the S and yields aaSbb, and choosing rule 2 then yields aababb; since the string now contains no S, no more rules can be applied, and we are done.
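This rewriting process can be sketched in a few lines of Python. The helper name `rewrite_once` is illustrative only, and the rules S → aSb and S → ba are assumed example rules:

```python
# A minimal string-rewriting sketch: a rule (lhs, rhs) is applied by
# replacing one occurrence of lhs in the current string with rhs.

def rewrite_once(s, lhs, rhs, pos):
    """Replace the occurrence of lhs starting at index pos with rhs."""
    assert s[pos:pos + len(lhs)] == lhs, "rule does not apply at this position"
    return s[:pos] + rhs + s[pos + len(lhs):]

s = "S"
s = rewrite_once(s, "S", "aSb", 0)  # S     -> aSb   (rule 1)
s = rewrite_once(s, "S", "aSb", 1)  # aSb   -> aaSbb (rule 1)
s = rewrite_once(s, "S", "ba", 2)   # aaSbb -> aababb (rule 2)
print(s)  # aababb
```

Each step picks one rule and one position where its left-hand side occurs; a full derivation is simply a sequence of such replacements.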
A formal grammar G consists of the following components:

- a finite set N of nonterminal symbols;
- a finite set Σ of terminal symbols, disjoint from N;
- a finite set P of production rules, each of the form (string over Σ ∪ N containing at least one nonterminal) → (string over Σ ∪ N);
- a start symbol S, which is an element of N.
The language of a formal grammar G = (N, Σ, P, S), denoted as L(G), is defined as all those strings over Σ that can be generated by starting with the start symbol S and then applying the production rules in P until no more nonterminal symbols are present.
For example, consider the grammar with N = {S, B}, Σ = {a, b, c}, S the start symbol, and the following production rules:

1. S → aBSc
2. S → abc
3. Ba → aB
4. Bb → bb

It is clear that this grammar defines the language { a^{n}b^{n}c^{n} | n > 0 }, where a^{n} denotes a string of n a's.
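The definition of L(G) can be made concrete by enumerating sentential forms breadth-first; the sketch below assumes the classic grammar S → aBSc, S → abc, Ba → aB, Bb → bb for this language:

```python
from collections import deque

# Breadth-first enumeration of L(G) for the unrestricted grammar
#   S -> aBSc | abc,  Ba -> aB,  Bb -> bb
# which generates { a^n b^n c^n | n > 0 }.
RULES = [("S", "aBSc"), ("S", "abc"), ("Ba", "aB"), ("Bb", "bb")]
NONTERMINALS = set("SB")

def generate(max_len):
    """All terminal strings of length <= max_len derivable from S."""
    seen = {"S"}
    queue = deque(["S"])
    terminal = set()
    while queue:
        form = queue.popleft()
        for lhs, rhs in RULES:
            i = form.find(lhs)
            while i != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                # no rule shortens the string, so long forms can be pruned
                if new not in seen and len(new) <= max_len:
                    seen.add(new)
                    if NONTERMINALS & set(new):
                        queue.append(new)   # still contains nonterminals
                    else:
                        terminal.add(new)   # a string of L(G)
                i = form.find(lhs, i + 1)
    return terminal

print(sorted(generate(9)))  # ['aaabbbccc', 'aabbcc', 'abc']
```

Because no rule in this grammar shortens the string, pruning forms longer than the target length is safe, and the search terminates.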
Formal grammars are closely related to Lindenmayer systems (L-systems), but differ in several respects: L-systems make no distinction between terminals and nonterminals, L-systems impose restrictions on the order in which the rules are applied, and L-systems can run forever, generating an infinite sequence of strings. Typically, each string is associated with a set of points in space, and the "output" of the L-system is defined to be the limit of those sets.
Some restricted classes of grammars, and the languages that can be derived with them, have special names and are studied separately. One common classification system for grammars is the Chomsky hierarchy, a set of four types of grammars developed by Noam Chomsky in the 1950s. The difference between these types is that they have increasingly strict production rules and can express fewer formal languages. Two important types are context-free grammars and regular grammars. The languages that can be described with such a grammar are called context-free languages and regular languages, respectively. Although much less powerful than unrestricted grammars, which can in fact express any language that can be accepted by a Turing machine, these two types of grammars are most often used because parsers for them can be efficiently implemented. For example, for context-free grammars there are well-known algorithms to generate efficient LL parsers and LR parsers.
In context-free grammars, the left-hand side of a production rule may consist only of a single nonterminal symbol. The language defined above is not a context-free language, but, for example, the language { a^{n}b^{n} | n > 0 } is, as it can be defined by the grammar G2 with N = {S}, Σ = {a, b}, S the start symbol, and the following production rules:

1. S → aSb
2. S → ab
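A context-free grammar like this can be turned directly into a recursive-descent recognizer; the sketch below assumes the rules S → aSb and S → ab:

```python
# A tiny recognizer for { a^n b^n | n > 0 }, following the grammar
#   S -> aSb | ab
# directly as recursive descent: each alternative is tried in turn.

def match_S(s, i):
    """Try to match S starting at index i; return end index, or -1."""
    if i + 1 < len(s) and s[i] == "a":
        # alternative S -> aSb
        j = match_S(s, i + 1)
        if j != -1 and j < len(s) and s[j] == "b":
            return j + 1
        # alternative S -> ab
        if s[i + 1] == "b":
            return i + 2
    return -1

def in_language(s):
    """True iff S derives exactly the whole string s."""
    return match_S(s, 0) == len(s)

print([w for w in ["ab", "aabb", "aab", "ba", ""] if in_language(w)])
# ['ab', 'aabb']
```

Each nonterminal becomes one function, and each production becomes one branch of that function, which is the essence of LL-style parsing mentioned above.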
In regular grammars, the left-hand side is again only a single nonterminal symbol, but now the right-hand side is also restricted: it may be the empty string, a single terminal symbol, or a single terminal symbol followed by a nonterminal symbol, but nothing else. (Sometimes a broader definition is used: one can allow longer strings of terminals, or a single nonterminal by itself, while still defining the same class of languages.)
The language defined above is not regular, but the language { a^{n}b^{m} | m,n > 0 } is, as it can be defined by the grammar G3 with N = {S, A, B}, Σ = {a, b}, S the start symbol, and the following production rules, where ε denotes the empty string:

1. S → aA
2. A → aA
3. A → bB
4. B → bB
5. B → ε
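A regular grammar of this form can be read off directly as a finite automaton: each nonterminal is a state, a rule X → tY is a transition on terminal t, and a rule X → ε makes X accepting. The sketch below assumes the rules S → aA, A → aA, A → bB, B → bB, B → ε:

```python
# Reading a right-linear grammar as a nondeterministic finite automaton.
RULES = {  # state -> list of (terminal, next_state), one per rule X -> tY
    "S": [("a", "A")],
    "A": [("a", "A"), ("b", "B")],
    "B": [("b", "B")],
}
ACCEPTING = {"B"}  # B -> epsilon makes B an accepting state

def accepts(w):
    """True iff the automaton (hence the grammar) accepts the string w."""
    states = {"S"}
    for ch in w:
        states = {nxt for st in states
                      for t, nxt in RULES.get(st, [])
                      if t == ch}
    return bool(states & ACCEPTING)

print([w for w in ["ab", "aaabb", "ba", "a", "b"] if accepts(w)])
# ['ab', 'aaabb']
```

This correspondence between regular grammars and finite automata is exactly why regular languages can be recognized in linear time with constant memory.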