Last time we established the concept of an inductive structure. Before we get to unique readability, we’ll have to talk a little more about inductive structures.

We will use $\text{dom } f$ to refer to the domain of $f$ in these notes.

Also, from now on, when we say “the closure of $(X, F)$”, we mean the minimal set $Y \supseteq X$ such that $Y$ is closed under $F$.

## Arity

Before, we were dealing with functions $f$ that possibly had tuples of different sizes in their domain. For instance, maybe $f$ can take $2$ arguments sometimes, and $3$ arguments at other times. For the purposes of an inductive structure, we can decompose such a function into $f_2$ and $f_3$, the restrictions of $f$ that take $2$ and $3$ arguments respectively. This would not alter the closure whatsoever, so for the remainder of these notes we will make that simplification.

If a function $f$ takes $n$ inputs, then we say it is $n$-ary; alternatively, we say that $n$ is its arity. The two terms are synonymous.
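To make the decomposition by arity concrete, here is a small Python sketch. It represents a function extensionally as a dictionary from input tuples to outputs; the representation and all names here are my own, not anything from the notes.

```python
# A function represented extensionally as {input_tuple: output}.
# This f accepts both 2-tuples and 3-tuples in its domain.
f = {
    (1, 2): 3,
    (1, 2, 3): 6,
    (4, 5): 9,
}

def split_by_arity(f):
    """Split f into pieces f_n, one per arity n (input-tuple length)."""
    pieces = {}
    for args, out in f.items():
        pieces.setdefault(len(args), {})[args] = out
    return pieces

pieces = split_by_arity(f)
f2, f3 = pieces[2], pieces[3]  # the 2-ary and 3-ary restrictions of f
```

Applying $f_2$ and $f_3$ to everything they accept produces exactly the outputs $f$ would, which is why the closure is unchanged.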

## Induction

Theorem. Let $(X, F)$ be an inductive structure and let $P$ be a property such that

• all $a \in X$ satisfy $P$
• for all $f \in F$ and all tuples $(a_0, \ldots, a_{n-1}) \in \text{dom}(f)$, if all $a_i$ satisfy $P$ then $f(a_0, \ldots, a_{n-1})$ does too.

then all elements in the closure of $(X, F)$ satisfy $P$ as well.

This is what a proof by induction looks like. First we prove the base case: all $a \in X$ satisfy $P$. Then we prove the inductive step: applying a function $f$ to a tuple of stuff that satisfies $P$ spits out something that also satisfies $P$.

Consider the inductive structure $(\{0\}, \{+1\})$ that generates the natural numbers. Whenever we do a standard proof by induction, this is the closure we are implicitly using. But hopefully you see just how powerful this theorem is.

Outline of proof. Recall how we constructed sets $X_n$ to show that $(X, F)$ has a closure. Now we can inductively prove that every $a \in X_n$ satisfies $P$.
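To make the stages $X_n$ concrete, here is a Python sketch of the construction. It assumes every $f \in F$ is total with a known arity (a simplification; the notes allow partial domains), and all names are mine.

```python
from itertools import product

def closure_stages(start, functions, steps):
    """Build the chain X_0 ⊆ X_1 ⊆ ... ⊆ X_steps, where X_0 = start
    and X_{n+1} adds every output of some f applied to a tuple of
    elements of X_n. `functions` is a list of (f, arity) pairs."""
    stages = [set(start)]
    for _ in range(steps):
        prev = stages[-1]
        nxt = set(prev)
        for fn, arity in functions:
            for args in product(prev, repeat=arity):
                nxt.add(fn(*args))
        stages.append(nxt)
    return stages

# The structure ({0}, {+1}) from the text: X_n = {0, 1, ..., n}.
stages = closure_stages({0}, [(lambda x: x + 1, 1)], steps=5)
```

Inducting on $n$ over these stages is exactly the shape of the proof outline: if every element of $X_n$ satisfies $P$, the inductive step gives it for every element of $X_{n+1}$.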

## Motivating example: propositional logic

These notes will assume a little bit of CS knowledge and no knowledge of propositional logic. If you know the latter, you can probably skim or even entirely skip this section.

So perhaps you are familiar with boolean operators such as

```
(a && (!b)) || c
```

Propositional logic works in a similar way: we have formulae composed of propositional letters (like $a$, $b$, and $c$ in the example above, though they can be anything) and operators like &&, ||, and ! that act on some number of propositional letters. However, in classical logic (the form of propositional logic we usually study) we use different symbols to represent the AND, OR, and NOT operators: they are $\land$, $\lor$, and $\lnot$, respectively.

Well, the formulae of propositional logic can also be represented as the closure of an inductive structure! For example, consider an inductive structure $(\{a, b, c\}, \{\land, \lor\})$ such that $\land(a, b) = (a \land b)$. Note that $\land$ here names a function that outputs a string containing the symbol $\land$; the function and the symbol are not the same thing. Furthermore, note that the $($ and $)$ characters are part of the output string. We can define $\lor(a, b)$ similarly.
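As a sketch, the string-building versions of these functions might look like this in Python (using `land` and `lor` as stand-in names, since $\land$ and $\lor$ are not valid identifiers):

```python
def land(p, q):
    """The ∧ function: takes two formulae (strings) and returns the
    string (p ∧ q). The parentheses are part of the output."""
    return f"({p} \u2227 {q})"

def lor(p, q):
    """The ∨ function, defined the same way."""
    return f"({p} \u2228 {q})"
```

For example, `land("a", "b")` builds the string `(a ∧ b)`, and applying `lor` to that result and `c` builds `((a ∧ b) ∨ c)`.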

Here’s why we need the parentheses: ignore for a second what you know about operator precedence, and consider a formula like $a \land b \lor c$. Should it represent $(a \land b) \lor c$, or should it represent $a \land (b \lor c)$? Another way to think of it is, without the parentheses, how was this propositional formula inductively generated: as $\land(a, \lor(b, c))$, or as $\lor(\land(a, b), c)$? Thus it is useful for formulae to be generated in exactly one way; that is, for propositional formulae to be uniquely readable (which they are, solely because of the parentheses).

Definition (Unique Readability). An inductive structure $(X, F)$ is uniquely readable if and only if for every $y$ in its closure $Y$, exactly one of the following conditions holds:

• $y \in X$
• there is exactly one $f \in F$ and exactly one tuple $(a_0, \ldots, a_{n-1}) \in \text{dom } f$ such that $f(a_0, \ldots, a_{n-1}) = y$ and $a_i \in Y$ for $0 \leq i < n$.

Note that the inductive structure $(\{0\}, \{+1\})$ is uniquely readable. Try proving this yourself if you don’t understand why this is true!
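Unique readability of the parenthesized formulae from the previous section can also be seen operationally: a decomposition function can recover the one top-level connective of any formula. Here is a sketch (the function name is mine, and it assumes the fully parenthesized format above):

```python
def decompose(formula):
    """Decompose a fully parenthesized formula into (left, op, right),
    or return None if it is a propositional letter. Because each
    application of ∧ or ∨ wraps its output in its own parentheses,
    exactly one connective sits at nesting depth 1, so the
    decomposition is unique."""
    if not formula.startswith("("):
        return None  # a letter from the starting set
    depth = 0
    for i, ch in enumerate(formula):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif depth == 1 and ch in "\u2227\u2228":
            return formula[1:i].strip(), ch, formula[i + 1:-1].strip()
    raise ValueError("not a well-formed formula")
```

For instance, `decompose("((a ∧ b) ∨ c)")` recovers `("(a ∧ b)", "∨", "c")`: the formula can only have been generated by $\lor$ applied to those two subformulae.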

This notion of unique readability leads to the concept of defining a function recursively. We will first explore the properties we generally want a definition by recursion to have, then establish a theorem that formalizes all of these concepts.

First, note that a definition by recursion does not always go off without a hitch. Consider some inductive structure $(X, F)$ with $a$, $b$, $c \in X$ and $f$, $g \in F$ such that $f(a) = c$ and $g(b) = c$. A recursive definition might then run into issues: should the value at $c$ be computed from $f$ and $a$, or from $g$ and $b$? The two computations might disagree. Guaranteeing that they agree is hard in general, so instead we want each output to be generated by one unique set of inputs. In other words, we want injectivity, which comes from unique readability.

Here is our wishlist for a definition by recursion. For our starting set $X$ in $(X, F)$ with closure $Y$, we want to specify:

• the values of $G$ on the starting set $X$
• how to compute $G(y)$ for $y \in Y \setminus X$: take the unique $f \in F$ and the unique tuple $(a_0, \ldots, a_{n-1})$ with $f(a_0, \ldots, a_{n-1}) = y$, then define $G(y)$ using $f$, $a_0$, $\ldots$, $a_{n-1}$, $G(a_0)$, $\ldots$, $G(a_{n-1})$.

More broadly, we can define a function $G$ on a closure $Y$ just by specifying it on the starting set $X$ and specifying how to combine previously computed values to obtain $G(y)$ for $y \in Y \setminus X$.

Of course, for the reasons we have just established, $(X, F)$ should be uniquely readable.

Now we’re ready to formalize this idea.

Theorem (Definition by recursion). Consider a uniquely readable inductive structure $(X, F)$ with closure $Y$. If

• we are given a function $G_0 \colon X \to Z$
• for every $f \in F$, if $f$ is $n$-ary, we are given a $2n$-ary function $H_f$ with $\text{dom } H_f = Y^n \times Z^n$ and values in $Z$,

then there is a unique function $G$ such that

• $\text{dom } G = Y$
• for all $x \in X$, we have $G(x) = G_0(x)$
• for all $f \in F$ and all tuples $(a_0, a_1, \ldots, a_{n-1}) \in \text{dom } f$, we have $G(f(a_0, \ldots, a_{n-1})) = H_f(a_0, \ldots, a_{n-1}, G(a_0), \ldots, G(a_{n-1}))$.

This specifies a unique way to extend $G_0$ given an inductive structure and $H_f$ for each $f \in F$.

Outline of proof: Recall the definition of $X_n$ from when we showed the existence of a closure. We show by induction on $n$ that there is exactly one extension of $G_{n-1}$, whose domain is $X_{n-1}$, to a function $G_n$ whose domain is $X_n$.
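The theorem can be mirrored in code. Here is a hedged sketch (all names are mine): it packages unique readability as a `decompose` function that, given $y \in Y \setminus X$, returns the unique $f$ and tuple producing $y$.

```python
def define_by_recursion(G0, H, decompose):
    """G0: dict giving G on the starting set X.
    H: dict mapping each function's name to H_f, called as
       H_f(a_0, ..., a_{n-1}, G(a_0), ..., G(a_{n-1})).
    decompose: for y outside X, returns (f_name, (a_0, ..., a_{n-1})),
       the unique decomposition guaranteed by unique readability.
    Returns G, the unique extension of G0 to the closure."""
    cache = dict(G0)

    def G(y):
        if y not in cache:
            f_name, args = decompose(y)
            g_values = [G(a) for a in args]
            cache[y] = H[f_name](*args, *g_values)
        return cache[y]

    return G
```

For the Fibonacci structure below, `decompose(n)` would return `("f", (n - 2, n - 1))`, since $(n-2, n-1)$ is the unique pair that $f$ maps to $n$.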

Here is an example of a definition by recursion on the Fibonacci sequence. Our inductive structure is $(\{0, 1\}, \{f\})$ where $f(n, n+1) = n+2$ for $n\geq 0$, and its domain is only pairs of the form $(n, n+1)$ for $n\geq 0$.

Now we define an initial function $G_0$ such that $G_0(0) = 0$ and $G_0(1) = 1$. Define $H_f(a, b, x, y) = x + y$, and note that $G(n + 2) = G(f(n, n + 1)) = H_f(n, n+1, G(n), G(n+1)) = G(n) + G(n+1)$. Yay! Our idea of unique readability and definition by recursion makes sense on a simple function like the Fibonacci sequence, even if it is a little convoluted.
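Unwinding this definition stage by stage gives a short, self-contained Python sketch (the name `fib` and the iterative unwinding are my own framing):

```python
def fib(x):
    """Fibonacci via the recursion machinery: G0 on the starting set
    {0, 1}, and H_f(n, n+1, G(n), G(n+1)) = G(n) + G(n+1)."""
    G = {0: 0, 1: 1}  # the initial function G_0
    H_f = lambda a, b, ga, gb: ga + gb  # ignores a, b; adds the G-values
    for n in range(0, x - 1):
        # y = f(n, n+1) = n + 2, decomposed uniquely as (n, n+1)
        G[n + 2] = H_f(n, n + 1, G[n], G[n + 1])
    return G[x]
```

Each $G(n+2)$ is computed exactly once, from the unique pair $(n, n+1)$ that generates $n+2$; this is unique readability doing its job.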

Though it is convoluted and we will not usually need to construct recursive functions like this, it is good to know what is happening under the hood. And it is good to understand why the Fibonacci sequence gives you a well-defined, unique value $G(x)$ for all $x \in \mathbb{N}$: it is because the Fibonacci sequence is a function $G$ defined by recursion on the closure of a uniquely readable inductive structure $(X, F)$, as we have just shown.