## 1. Introduction

In this tutorial, we’ll talk about context-free languages. In particular, we’ll cover some techniques for proving that a language is (not) context-free.

## 2. What’s a Context-Free Language?

Before answering that question, let’s first remember what formal languages and grammars are.

### 2.1. Languages and Grammars

**A formal language is a set of words we form using a non-empty alphabet of symbols**. For example, $L = \{a^n b^n : n \geq 0\}$ is a language over the alphabet $\Sigma = \{a, b\}$.

Every language has its grammar. Informally, a **language’s grammar describes how to produce the words of that language**. Formally, it’s a tuple $G = (V, \Sigma, S, P)$ where:

- $V$ is the set of all symbols the grammar uses
- $\Sigma \subseteq V$ is the alphabet; we call its symbols terminals, while the symbols in $V \setminus \Sigma$ are known as non-terminals or variables
- $S \in V \setminus \Sigma$ is a unique variable that starts the derivation of all the words
- $P$ is the set of production rules

**Production rules derive the words of the language at hand.** Therefore, we can think of them as the rules guiding the transformation of the word from $S$ over its transitional forms to its final shape.

For example, these rules generate $L = \{a^n b^n : n \geq 0\}$:

(1) $\quad S \rightarrow aSb \mid \varepsilon$

where $\varepsilon$ is the empty word. Consequently, the derivation of $aabb$ is:

$$S \Rightarrow aSb \Rightarrow aaSbb \Rightarrow aabb$$
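Such a derivation can also be carried out mechanically. The short Python sketch below applies the two rules in order; the function name and string representation are our own illustrative choices:

```python
# Sketch: deriving a word of {a^n b^n} with the rules S -> aSb | ε.

def derive(n: int) -> str:
    """Apply S -> aSb exactly n times, then finish with S -> ε."""
    form = "S"
    for _ in range(n):
        form = form.replace("S", "aSb")  # rule S -> aSb
    return form.replace("S", "")         # rule S -> ε erases the variable

print(derive(2))  # aabb
```

Each loop iteration rewrites the single occurrence of $S$, so the intermediate forms are exactly $S, aSb, aaSbb, \ldots$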

**Depending on the form of the production rules, we differentiate between several grammar and language types.**

### 2.2. Context-Free Languages

If the production rules are of the form:

(2) $\quad A \rightarrow \alpha, \qquad A \in V \setminus \Sigma, \; \alpha \in V^*$

we say that the grammar is context-free (CFG). So, **a language is context-free if there’s a CFG generating it**.

The language $L = \{a^n b^n : n \geq 0\}$ from the previous section is context-free. Its grammar is:

$$G = (\{S, a, b\}, \{a, b\}, S, P)$$

where $P$ is the set of rules (1). Both rules are of the CFG form (2), so $G$ is context-free, and so is the language it generates.
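For concreteness, the tuple $G = (V, \Sigma, S, P)$ can be written down as plain data. The Python representation below is one illustrative choice, not a standard encoding:

```python
# The grammar G = (V, Σ, S, P) for {a^n b^n}, written as plain data.
V = {"S", "a", "b"}            # all symbols the grammar uses
Sigma = {"a", "b"}             # the alphabet (terminals)
S = "S"                        # the start variable
P = {"S": ["aSb", ""]}         # the rules (1): S -> aSb | ε

# Sanity checks matching the definition of a grammar.
assert Sigma <= V and S in V - Sigma
assert all(left in V - Sigma for left in P)
```

The checks mirror the definition: the alphabet is a subset of $V$, the start symbol is a variable, and every rule’s left-hand side is a single non-terminal, which is exactly the CFG form (2).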

## 3. How to Prove That a Language Is Context-Free

A language can be stated implicitly as a general pattern of its words (such as the pattern $a^n b^n$ for the language $L$ above). Alternatively, we can get an explicit corpus containing the whole language.

### 3.1. Constructing a Grammar

No matter how our language is stated, **we can prove it’s context-free by constructing a CFG that generates its words.**

That’s sometimes easier said than done. If the language isn’t complex, we’ll be able to derive the rules by hand (provided it’s context-free). That was the case in the above example.

But, if the language has a complicated structure, we may struggle with designing the corresponding CFG. And it’s not just the rules that will be hard to come by. We’ll also have to decide which non-terminal symbols to use. In such cases, we can apply techniques for automated grammar learning. They can handle even partially stated languages, where only a sample of the words is available.

### 3.2. Proving That the Grammar Is Correct

**Devising a CFG isn’t sufficient. We also have to prove two statements: that every word the grammar generates belongs to the language, and that every word of the language can be generated by the grammar.** Proving only the latter shows that the grammar’s language contains the target language, but not that the two are equal.

Let’s first prove that by applying the rules of $G$ from Section 2.2 $n \geq 1$ times, we either get a word from $L$ or a combination $a^k S b^k$. The latter isn’t a word because it contains the non-terminal symbol $S$. We’ll prove this hypothesis by induction on the number of applied rules.

**In the base case** ($n = 1$), we can apply $S \rightarrow \varepsilon$ or $S \rightarrow aSb$. Since $\varepsilon \in L$ and $aSb$ is of the form $a^k S b^k$ (with $k = 1$), we’ve covered the induction base.

**In the step case**, we assume that the induction hypothesis holds for some $n \geq 1$, so after applying the rules $n$ times, we have either a word $a^k b^k \in L$ or the form $a^k S b^k$. Does the hypothesis hold for $n + 1$?

If we already have a word, no rule can be applied further, so the claim holds trivially. Otherwise, applying $S \rightarrow aSb$ to $a^k S b^k$, we get $a^{k+1} S b^{k+1}$, which is of the same form. Similarly, applying the other rule, $S \rightarrow \varepsilon$, we get $a^k b^k \in L$.

Thus, we showed that $G$ generates only the words in $L$.

Proving the converse is easier. A word of $L$ is of the form $a^n b^n$. If we apply $S \rightarrow aSb$ $n$ times and $S \rightarrow \varepsilon$ afterwards, we’ll get $a^n b^n$ for any $n \geq 0$.
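The inductive proof can be complemented with an empirical sanity check: enumerate every word the grammar derives up to some length bound and compare the result with $\{a^n b^n\}$. The Python sketch below does exactly that; it illustrates the claim but, of course, doesn’t replace the proof:

```python
from collections import deque

def words_up_to(max_len: int) -> set:
    """All terminal words of the CFG S -> aSb | ε with length <= max_len."""
    rules = ["aSb", ""]
    words, queue = set(), deque(["S"])
    while queue:
        form = queue.popleft()
        if "S" not in form:
            words.add(form)              # no variables left: a finished word
            continue
        for rhs in rules:
            successor = form.replace("S", rhs, 1)
            # prune forms whose terminal part already exceeds the bound
            if len(successor.replace("S", "")) <= max_len:
                queue.append(successor)
    return words

expected = {"a" * n + "b" * n for n in range(4)}  # ε, ab, aabb, aaabbb
print(words_up_to(6) == expected)  # True
```

The pruning step guarantees termination: every application of $S \rightarrow aSb$ adds two terminals, so forms longer than the bound are discarded.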

### 3.3. Pushdown Automata

**A pushdown automaton (PDA) is a finite state machine extended with a stack, and it’s equivalent in power to context-free grammars.** For any language generated by a CFG, there’s a PDA that recognizes it and vice versa.

Sometimes, it’s easier to construct an automaton than a grammar. Informally, **a PDA is a finite state machine that transitions between states while reading a word symbol by symbol from its input and writing symbols onto its stack**. If it reads the whole word and ends in an accepting state with an empty stack, we say that it has accepted the word it read. **The language a PDA recognizes consists of the words it accepts.**

We won’t cover PDAs in this article, but they’re certainly an alternative to consider.
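Still, to make the idea concrete, here’s a minimal Python simulation of one possible PDA for $\{a^n b^n : n \geq 0\}$. The two-phase design and all the names are our own illustrative choices, not a construction from a textbook:

```python
def pda_accepts(word: str) -> bool:
    """Simulate a PDA for {a^n b^n}: push per 'a', pop per 'b'."""
    stack = []
    state = "reading_a"                # phase 1: consume a's
    for symbol in word:
        if state == "reading_a" and symbol == "a":
            stack.append("A")          # push a marker for each 'a'
        elif symbol == "b" and stack:
            state = "reading_b"        # phase 2: consume b's
            stack.pop()                # pop a marker for each 'b'
        else:
            return False               # wrong symbol order or extra 'b'
    return not stack                   # accept iff the stack is empty

print(pda_accepts("aabb"), pda_accepts("aab"))  # True False
```

The stack does the counting that a plain finite state machine can’t: it records how many $a$s are still waiting for a matching $b$.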

### 3.4. The Question of Proof

**Failing to construct a CFG or a PDA by hand doesn’t mean that the language isn’t context-free.** The absence of proof isn’t the same as the proof of absence. **A CFG generating our language may exist, but we couldn’t find it.**

If we suspect that we failed at constructing an appropriate grammar because the language isn’t context-free, we can try to prove precisely that: that the language isn’t context-free.

## 4. How to Prove That a Language Isn’t Context-Free

Ogden’s lemma describes a property of all context-free languages. So, if we show that the language at hand doesn’t have that property, we’ll prove that it isn’t context-free.

### 4.1. Ogden’s Lemma

Let’s state the lemma. Let $L$ be a context-free language. Then, there exists a natural number $p$ such that every word $w \in L$ in which at least $p$ positions (indexed symbols) are marked can be split into five parts:

(3) $\quad w = uvxyz$

so that the following holds:

- $vy$ contains at least one marked position.
- $vxy$ contains at most $p$ marked positions.
- $uv^n x y^n z \in L$ for all $n \geq 0$.

**The lemma shows a regularity in generating the words of a context-free language. Pumping the selected sub-words ($v$ and $y$) produces new words of the language.** **If any word with at least $p$ marked symbols breaks this regularity, the language isn’t context-free.**

### 4.2. The Pumping Lemma for Context-Free Languages

**Usually, we apply a special case of Ogden’s lemma, where all the positions are marked. This case is known as the pumping lemma for context-free languages:**

For every context-free language $L$, there exists a natural number $p$ such that every word $w \in L$ with $|w| \geq p$ can be written as

$$w = uvxyz$$

so that the following holds:

- $|vy| \geq 1$
- $|vxy| \leq p$
- $uv^n x y^n z \in L$ for all $n \geq 0$

### 4.3. Example

Let’s prove that the language $L = \{a^n b^n c^n : n \geq 0\}$ isn’t context-free.

So, **we first suppose the opposite: that $L$ is context-free**. Then, by the pumping lemma, there must exist a natural number $p$ that fulfills the lemma’s conditions about splitting all the words in $L$ not shorter than $p$.

Let’s focus on the word $w = a^p b^p c^p$. Applying the lemma, we can write it as $w = uvxyz$ so that we have:

- $|vy| \geq 1$
- $|vxy| \leq p$
- $uv^n x y^n z \in L$ for all $n \geq 0$

Let’s analyze the last condition. Since $|vxy| \leq p$, the sub-word $vxy$ can’t span all three symbol blocks. There are two options.

- The sub-word $vxy$ contains only one type of symbol, say only $b$s. So, pumping $v$ and $y$ increases the number of $b$s in the word, while the numbers of $a$s and $c$s stay the same. Therefore, the new pumped words won’t be in $L$.
- Alternatively, $vxy$ can contain both $a$s and $b$s. By pumping the sub-words $v$ and $y$, we increase the number of $a$s, the number of $b$s, or both. At the same time, the number of $c$s stays the same. So, the pumped words don’t belong to $L$ in this case either. (The case where $vxy$ contains both $b$s and $c$s is analogous: the number of $a$s stays the same.)

**That’s a contradiction with our assumption that $L$ was context-free. Hence, the assumption was wrong, and $L$ cannot be a context-free language.**
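For a small, fixed $p$, the case analysis can even be checked exhaustively by machine: try every split $w = uvxyz$ of $a^p b^p c^p$ that meets the length conditions, and confirm that pumping always leaves the language. The Python sketch below checks one concrete $p$; it illustrates the argument but doesn’t replace the proof, which must hold for every $p$:

```python
from itertools import combinations_with_replacement

def in_anbncn(word: str) -> bool:
    """Membership test for {a^n b^n c^n : n >= 0}."""
    n = len(word) // 3
    return word == "a" * n + "b" * n + "c" * n

def some_split_pumps(word: str, p: int) -> bool:
    """Is there a split uvxyz with |vy| >= 1, |vxy| <= p that pumps within L?"""
    cuts = combinations_with_replacement(range(len(word) + 1), 4)
    for i, j, k, l in cuts:                 # 0 <= i <= j <= k <= l <= |w|
        u, v, x, y, z = word[:i], word[i:j], word[j:k], word[k:l], word[l:]
        if len(v + y) >= 1 and len(v + x + y) <= p:
            if all(in_anbncn(u + v * n + x + y * n + z) for n in (0, 2, 3)):
                return True
    return False

p = 3
print(some_split_pumps("a" * p + "b" * p + "c" * p, p))  # False
```

No admissible split survives pumping, which is exactly the contradiction the proof derives.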

### 4.4. The Question of Proof, Revisited

Just as was the case with the proofs by constructing the corresponding CFGs and PDAs, **failing to prove that a language isn’t context-free doesn’t mean that it is**. If we couldn’t find a word that breaks the pumping property, that doesn’t imply that no such word exists.

That opens the question of what to do if both strategies fail. The best course of action would be to use software tools such as automated learners and theorem provers.

## 5. Conclusion

**In this article, we talked about context-free languages.** We can prove that a language is context-free if we construct a context-free grammar that generates it. Alternatively, we can create a pushdown automaton that recognizes the language. On the other hand, we use Ogden’s lemma and the pumping lemma for context-free languages to prove that a language isn’t context-free.