# Propositional Formulae

This is the next entry of my Logic notes, half a year after the last entry. (Previous entries: Inductive Structures, Unique Readability.)

We have laid the groundwork for propositional logic with the last two entries. As one might expect, propositional formulae are a uniquely readable inductive structure (this is why we bothered to define these two concepts).

I am going to assume you have a working idea of what a boolean expression (think computer science) is. For instance, if I give you the expression

(a && b) || !c

you should be able to tell me exactly which triples a, b, c satisfy (i.e. make true) this boolean expression.
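For instance, a brute-force check lists the satisfying triples (a sketch in Python; the variable names mirror the expression above):

```python
from itertools import product

# Try all 8 assignments of truth values to (a, b, c) and keep the ones
# that make (a && b) || !c true.
satisfying = [(a, b, c)
              for a, b, c in product([False, True], repeat=3)
              if (a and b) or (not c)]

for triple in satisfying:
    print(triple)
# Exactly 5 of the 8 triples satisfy the expression: the four with c false,
# plus (True, True, True).
```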

The notations of computer science and formal logic differ. Instead of &&, ||, !, we use $\land$, $\lor$, $\lnot$. And we have another symbol: $\implies$ (read as “implies”). Semantically $a \implies b$ is equivalent to $(\lnot a) \lor b$. This meaning matches how implication is used in the rest of mathematics: “$b$ follows from $a$”, meaning that if $a$ is true then $b$ had better be true. But it doesn’t matter what happens when $a$ is false, since the implication only concerns the case where $a$ holds. If you work through it, you will quickly see that an implication is satisfied precisely when either $a$ is false or $b$ is true, which is exactly when $(\lnot a) \lor b$ is satisfied.
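To see this concretely, here is a small check (a sketch; `implies` is a hypothetical helper of my own, not standard notation) that the “if $a$ holds, $b$ must hold” reading agrees with $(\lnot a) \lor b$ on every assignment:

```python
def implies(a: bool, b: bool) -> bool:
    # "b follows from a": when a holds, the implication stands or falls with b;
    # when a fails, the implication is vacuously true.
    if a:
        return b
    return True

# Compare against (not a) or b on all four truth assignments.
for a in (False, True):
    for b in (False, True):
        assert implies(a, b) == ((not a) or b)
print("a => b agrees with (not a) or b on all four assignments")
```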

## Formal treatment

Definition (Propositional Alphabet) A propositional alphabet consists of

• a set of propositional symbols $P$,
• the connectives $\land, \lor, \implies, \lnot$,
• constants $\top, \bot$,
• and parentheses $(\, , \,)$.

For our convenience we will refer to the set of propositional symbols and constants (i.e. $P \cup \{\top, \bot\}$) as $P^*$.

Technically a propositional alphabet consists of $P$ as well as the connectives, constants, and parentheses, but $P$ uniquely identifies it so we will refer to the alphabet just as $P$.

A pedagogical note: we have defined these technical terms “propositional symbols”, “connectives”, etc. They are intentionally evocative, so you may be inclined to search for the differences in function between each of these sets. However, so far we have made no real distinction between these sets (besides that $P$ can be arbitrary while the other sets are fixed). This is about to change.

Definition (Propositional Formulae) The set of propositional formulae of the alphabet $P$, which we will refer to as $X$, is the closure of the inductive structure $(P^*, \{\mathbf \land, \mathbf \lor, \mathbf \implies, \mathbf \lnot\})$, where

• $\mathbf \land$ is a function taking $a, b \in X$ to $(a \land b)$,
• $\mathbf \lor$ is a function taking $a, b \in X$ to $(a \lor b)$,
• $\mathbf \implies$ is a function taking $a, b \in X$ to $(a \implies b)$,
• and $\mathbf \lnot$ is a function taking $a \in X$ to $(\lnot a)$.

Note that the functions $\mathbf \land$, $\mathbf \lor$, and $\mathbf \implies$ behave very similarly. Indeed, with only a syntactic structure, they are virtually indistinguishable. (We define all these symbols so that we may perform distinct operations on them related to their semantics.) For convenience we will let $\star$ be a stand-in for any of $\land, \lor, \implies$ and their associated functions $\mathbf \land, \mathbf \lor, \mathbf \implies$. So we may instead just say “$\star$ is a function taking $a, b \in X$ to $(a \star b)$”.

Pedagogical note: It is important to recall that these propositional formulae are just strings of characters from the alphabet $P$. Note that we have bolded the logical connectives in the definition because $\mathbf \land$, etc. are not symbols of the alphabet; they are functions closely related to the symbols $\land$, etc.
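To drive this point home, here is a sketch in Python of the four constructors operating literally on strings (the names `conj`, `disj`, `impl`, `neg` are my own stand-ins for $\mathbf \land$, $\mathbf \lor$, $\mathbf \implies$, $\mathbf \lnot$):

```python
# Formulae are just strings; each constructor wraps its arguments in a new
# pair of parentheses, exactly as in the definition above.
def conj(a: str, b: str) -> str:
    return "(" + a + " ∧ " + b + ")"

def disj(a: str, b: str) -> str:
    return "(" + a + " ∨ " + b + ")"

def impl(a: str, b: str) -> str:
    return "(" + a + " ⟹ " + b + ")"

def neg(a: str) -> str:
    return "(¬" + a + ")"

print(disj(conj("a", "b"), neg("c")))  # → ((a ∧ b) ∨ (¬c))
```

Note that the result is only a string; no truth value is attached anywhere.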

Another pedagogical note: The propositional formula $((a \land b) \lor (\lnot c))$ is surely evocative of the boolean formula (a && b) || !c. But unlike the latter, the former doesn’t mean anything yet. (Much later, when we introduce semantics and proof rules, we will be able to attach meaning to these formulae too in an unsurprising way.)

Final note: The parentheses matter!! They establish unique readability, which in turn gives us an unambiguous notion of semantics. (This is something worth proving yourself.)

Because propositional formulae are uniquely readable, we can define functions on $X$ (the set of propositions) by induction. Here is an example.

Definition (Rank)

The rank $r : X \to \mathbb N$ of a proposition is defined inductively as

• $r(p) = 0$ for $p \in P^*$,
• $r((a \star b)) = \max(r(a), r(b)) + 1$ for $a, b \in X$,
• and $r((\lnot a)) = r(a) + 1$ for $a \in X$.

Intuitively, the rank of a formula is a metric for how complicated it is. More precisely, the rank of a formula $a$ is the smallest $n$ such that $a \in X_n$, where $X_n$ is defined as it was in the proof of the closure of an inductive structure.
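As a sketch, the recursion translates directly into code if we represent formulae not as strings but as already-parsed nested tuples (an assumption purely for illustration: `("not", a)` for negations, `(op, a, b)` for binary connectives, and a bare string for a symbol in $P^*$):

```python
def rank(f) -> int:
    # Base case: a propositional symbol or constant has rank 0.
    if isinstance(f, str):
        return 0
    # r((¬a)) = r(a) + 1
    if f[0] == "not":
        return rank(f[1]) + 1
    # r((a ⋆ b)) = max(r(a), r(b)) + 1 for any binary connective ⋆
    _, a, b = f
    return max(rank(a), rank(b)) + 1

# ((a ∧ b) ∨ (¬c)) has rank 2: its subformulae have rank 1, and the
# outermost ∨ adds one more.
print(rank(("or", ("and", "a", "b"), ("not", "c"))))  # → 2
```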

Exercise. Prove that every propositional formula has an even number of parentheses. (Hint: recursively describe a function that returns the number of parentheses in a formula.)
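The hint can be made concrete. Using the same nested-tuple representation as before (an illustrative assumption, not the string formulae themselves), the parenthesis count satisfies a clean recursion, since every constructor adds exactly one matching pair. The parity argument itself is left to you.

```python
def paren_count(f) -> int:
    if isinstance(f, str):
        return 0                          # symbols carry no parentheses
    if f[0] == "not":
        return paren_count(f[1]) + 2      # the pair added by (¬a)
    _, a, b = f
    return paren_count(a) + paren_count(b) + 2  # the pair added by (a ⋆ b)

# ((a ∧ b) ∨ (¬c)) has three constructor applications, hence 6 parentheses.
print(paren_count(("or", ("and", "a", "b"), ("not", "c"))))  # → 6
```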

## A bigger picture

This is about the time some may start asking “what on earth is happening here, and why should I care?” (At least I asked myself these questions at this time.) The rules and system we are setting up are beginning to seem somewhat arbitrary, as if we have capriciously chosen the symbols in our propositional alphabet. This is not entirely false, so I will aim to explain at a broad level why this is OK.

One may fairly ask, “why these particular symbols in our propositional language?” (For instance, $\implies$ is semantically redundant since it can be expressed with $\land, \lor, \lnot$. In fact, by De Morgan’s laws, we may remove $\land$ as well.) It turns out that the specifics of our proof system (the propositional language and the rules for manipulating formulae which we will later introduce) are not particularly important.

What is important? We need something expressive enough to eventually describe set theory (spoiler, propositional logic is not enough: we need quantifiers). We will architect a set of “proof rules” with which we can “manipulate” certain formulae into other formulae. Introducing semantics (i.e. actually attaching truth values and meaning to our propositional formulae) will give us a notion of soundness and completeness. In very broad terms, a logical system being sound means that “if you can syntactically prove something, the semantic implication will always hold”. And completeness means “if some semantic implication holds, you can syntactically prove it”. Both are very non-obvious traits of a logical system, and being able to equate syntax and semantics gives us powerful tools to study propositional formulae.

Furthermore one may ask, “will we start exclusively using this logical system to prove things about this logical system?” And the answer is no. There would be problems of circularity (e.g. Gödel’s Incompleteness Theorems), and on a more philosophical note, there are always mathematical concepts we will have to take for granted. (Specifically, we cannot “rigorously” define set membership.) So we will have two fairly distinct notions of a proof: one is the standard English proof that you and I are used to, and the other will be the syntactical “proofs” that we will soon begin to develop. The latter happens to be a fantastic model of the former (after all, that is one of the reasons we care about formal logic), but we will be using plain English proofs to develop our formal logical system.

Of course, we are not introducing this logical system just for fun. Eventually we will reach the point where we can more easily study some aspects of plain English proofs by modeling them as syntactical proofs (e.g. soundness/completeness and Gödel’s theorems). This is the main point, and we are just building up to it.