COMP2111
System Modelling and Design
Term 1, 2024

Tute (Week 3)

Table of Contents

1 Recursive definitions

1.1 …of data

The recursive definitions we studied in Week 3 included recursive definitions of data, such as lists, trees and even natural numbers. In the case of lists, we ended up defining finite lists, of which there are nevertheless infinitely many (just as there are infinitely many natural numbers, although each one is finite).

Do you think it would be possible to write a program, one that actually runs, that contained a data type with infinite values? (Infinite lists would be a good example; another would be a program that looked like it calculated the entire (infinite) decimal expansion of \(\pi\).)

This can actually be found in the wild, in programming languages featuring lazy evaluation, where all computation is deferred until the point where its results would be needed.

The most prominent example of such a language is Haskell. (For the purposes of this course, you're not expected to know or learn anything about Haskell.) It's possible that we'll later discuss lazy evaluation, and how you prove things about infinite structures.
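To give a rough flavour of laziness without Haskell, the sketch below uses Python's iterators, which also defer computation until a consumer demands a value. This is an illustration of the idea, not of Haskell's actual evaluation mechanism.

```python
from itertools import count, islice

# A lazily evaluated "infinite list" of natural numbers: count(0) never
# materialises its elements until something consumes them.
naturals = count(0)

# Demanding only a finite prefix terminates, even though the underlying
# structure is conceptually infinite.
first_five = list(islice(naturals, 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

A program that "computes all of \(\pi\)" works the same way: the full expansion is never forced, only the finitely many digits that are actually demanded.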

1.2 …of languages

We also defined propositions recursively, informally so in Week 1 and with a bit more rigour in Week 3. Do those definitions allow propositions (so-called well-formed formulae, or wffs) to be infinitely long? If they don't, would it matter if they did?

No, they do not. The fact that wffs are finite (in length, or structurally) is why structural induction works over them.
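The point can be made concrete with a small recursive datatype of wffs. In the sketch below (the constructor names are illustrative, not taken from the course notes), a function defined by structural recursion terminates precisely because every wff is a finite structure:

```python
from dataclasses import dataclass

# A minimal recursive datatype of wffs: atoms, negation, conjunction.
@dataclass
class Atom:
    name: str

@dataclass
class Neg:
    sub: object    # a wff

@dataclass
class And:
    left: object   # a wff
    right: object  # a wff

def size(phi):
    """Structural recursion over a wff; it terminates because
    every wff is built in finitely many steps."""
    if isinstance(phi, Atom):
        return 1
    if isinstance(phi, Neg):
        return 1 + size(phi.sub)
    return 1 + size(phi.left) + size(phi.right)

example = And(Atom("p"), Neg(Atom("q")))
print(size(example))  # 4
```

If infinite wffs were admitted, `size` (and structural induction generally) would no longer be well defined on them.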

2 Proofs by induction

2.1 Question 1

Comment on this inductive proof that all horses have the same colour.

Base cases: The empty set of horses does not contain two horses of differing colour.

Singleton sets \(\{h\}\) of horses do not contain two horses of differing colour.

Inductive step: Consider any set \(H\) of horses: by appeal to the inductive hypothesis, they all have the same colour \(\cal C\). We now add a different horse \(h\) to the set. Do all horses in the now-bigger set \(H\cup \{h\}\) still have the same colour \(\cal C\)?

Remove some (other) horse \(h'\) from the set \(H \cup \{h\}\): by the inductive hypotheses all horses in the smaller set \((H - \{h'\}) \cup \{h\}\) have the same colour. In particular, this means that the new horse \(h\) has the same colour \(\cal C\) as the ones that are still there. Thus the new 1-horse-bigger set \(H\cup \{h\}\) also contains only horses of colour \(\cal C\).

  • What exactly are we doing induction on?
  • How come we succeeded in proving something so absurd? Are the foundations of mathematics unraveling beneath our feet? Or is there a mistake in the proof?

The induction is (implicitly) on the cardinality (i.e. the size) of the set of horses \(H\).

There is a mistake in the proof. In particular, the inductive step neglects to consider the case \(|H| = 1\). There, \(H \cup \{h\}\) contains exactly two horses, and removing \(h'\) leaves \((H - \{h'\}) \cup \{h\} = \{h\}\), which shares no horse with \(H = \{h'\}\); with no overlap, nothing links the colour of \(h\) to the colour \(\cal C\) of \(h'\).

2.2 Question 2

Peano arithmetic is a way of defining the natural numbers from first principles. It starts from the following axioms:

  1. There is a number \({\tt Z}\).
  2. For every number \(n\) there is a successor \({\tt S}\ n\) of \(n\).
  3. \({\tt S}\ n \neq {\tt Z}\) for all \(n\).
  4. If \({\tt S}\ n = {\tt S}\ m\) then \(n = m\).
  5. If some set of numbers \(A\) (i) contains \({\tt Z}\), and (ii) contains \({\tt S}\ n\) if it contains \(n\), then \(A\) contains all numbers.

Here \({\tt Z}\) is the number we call \(0\) in day-to-day mathematics, \({\tt S}\ {\tt Z}\) is \(1\), and so on.

Giuseppe Peano's (inductive) definition of addition in this system was

\begin{array}{lcl} p + {\tt Z} &=& p \\ p + {\tt S}\,q &=& {\tt S}\,(p+q) \end{array}

Can you give a Peano-style definition of multiplication?

\begin{array}{lcl} p \times {\tt Z} &=& {\tt Z} \\ p \times {\tt S}\,q &=& p + (p \times q) \end{array}
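These equations are directly executable. The sketch below transcribes them into Python, encoding Peano numerals as nested tuples (an illustrative encoding, not one from the notes: \({\tt Z}\) is `None` and \({\tt S}\ n\) is `("S", n)`):

```python
# Peano numerals: Z is None, S n is ("S", n).
Z = None

def S(n):
    return ("S", n)

def add(p, q):
    # p + Z = p ;  p + S q = S (p + q)
    if q is Z:
        return p
    return S(add(p, q[1]))

def mul(p, q):
    # p * Z = Z ;  p * S q = p + (p * q)
    if q is Z:
        return Z
    return add(p, mul(p, q[1]))

def to_int(n):
    # Convert a Peano numeral back to an ordinary int, for readability.
    return 0 if n is Z else 1 + to_int(n[1])

two = S(S(Z))
three = S(S(S(Z)))
print(to_int(mul(two, three)))  # 6
```

Note that each recursive call strips one \({\tt S}\), so both functions terminate for the same reason induction on the second argument works.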

2.3 Question 3

Use Peano's Axioms and the definition you gave for multiplication just above to prove associativity of multiplication.

You may assume that multiplication distributes over addition:

\[p{\times}q + p{\times}r \ \ \ = \ \ \ p\times(q{+}r)\]

We prove \((p \times q) \times r = p \times (q \times r)\) by induction on \(r\).

Base case:

\begin{array}{clclp{10em}} & (p{\times}q)\times {\tt Z} \\ = & {\tt Z} & \mbox{definition} \\ = & p\times{\tt Z} & \mbox{definition} \\ = & p\times(q\times{\tt Z}) & \mbox{definition} \end{array}

Inductive case: Our inductive hypothesis is that \[(p \times q) \times r = p \times (q \times r)\] We proceed as follows:

\begin{array}{cll} &(p \times q) \times ({\tt S}\ r) \\ = &p \times q + (p \times q) \times r & \mbox{definition} \\ = &p \times q + p \times (q \times r) & \mbox{inductive hypothesis} \\ = &p \times (q + q \times r) & \mbox{distributivity} \\ = &p \times (q \times ({\tt S}\ r)) & \mbox{definition} \end{array}

That takes care of that. But why did we get to do basic induction here, when the Peano axioms don't mention it?

The answer is that we can use axiom (5). The key observation is that the following logical equivalence holds:

\[\forall n \in \mathbb{N}. P(n) \quad\equiv\quad \mathbb{N} \subseteq \{n \in \mathbb{N} : P(n)\}\]

Notice that the LHS is the kind of statement we can prove using basic induction. The RHS is the kind of statement we may prove using axiom (5)! To see this, just let \(A = \{n \in \mathbb{N} : P(n)\}\), and note that we use \(\mathbb{N}\) here to refer to what we called "all numbers" in the Peano axioms.

To conclude \(\mathbb{N} \subseteq \{n \in \mathbb{N} : P(n)\}\), axiom (5) then requires us to prove

  1. \({\tt Z} \in \{n \in \mathbb{N} : P(n)\}\)
  2. If \(n \in \{n \in \mathbb{N} : P(n)\}\) then \(({\tt S}\ n) \in \{n \in \mathbb{N} : P(n)\}\)

A little simplification will show that these proof obligations are exactly the same as the (B) and (I) cases of structural induction.

For this particular proof, we chose our \(P\) as follows: \[P(n) \quad = \quad ((p \times q) \times n = p \times (q \times n))\]
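A proof is the only way to establish the property for all numerals, but we can at least spot-check it on small cases. The sketch below reuses the same tuple encoding of Peano numerals as above (redefined here so the snippet stands alone) and tests \(P(r)\) for all small \(p\), \(q\), \(r\):

```python
# Peano numerals again: Z is None, S n is ("S", n).
Z = None

def S(n):
    return ("S", n)

def add(p, q):
    return p if q is Z else S(add(p, q[1]))

def mul(p, q):
    return Z if q is Z else add(p, mul(p, q[1]))

def num(k):
    # Build the Peano numeral for a small non-negative int k.
    return Z if k == 0 else S(num(k - 1))

# Check (p * q) * r == p * (q * r) structurally, for p, q, r < 4.
ok = all(
    mul(mul(num(p), num(q)), num(r)) == mul(num(p), mul(num(q), num(r)))
    for p in range(4) for q in range(4) for r in range(4)
)
print(ok)  # True
```

Structural equality of the nested tuples plays the role of equality of numerals here.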

2.4 Question 4

Use the definition of reverse from the lectures (but written in a functional style)

\begin{array}{lcl} \mathit{rev}~[] &=& [] \\ \mathit{rev}~(x{:}xs) &=& \mathit{rev}~xs~ {+}\!\!{+}~[x] \end{array}

to show that rev(rev xs) = xs.

Hint: You might need a lemma.
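Before writing the proof, it can help to transcribe the definition into running code and test the claim. The sketch below uses Python lists, reading \(x{:}xs\) as `xs[0]` and `xs[1:]`, and `+` for \({+}\!\!{+}\):

```python
def rev(xs):
    # rev []     = []
    # rev (x:xs) = rev xs ++ [x]
    if not xs:
        return []
    return rev(xs[1:]) + [xs[0]]

xs = [1, 2, 3, 4]
print(rev(rev(xs)) == xs)                       # True
print(rev([1, 2] + [3]) == rev([3]) + rev([1, 2]))  # True
```

The second check is a hint at the kind of auxiliary fact (about how \(rev\) interacts with \({+}\!\!{+}\)) that the proof will need.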

We need a lemma that allows us to distribute \(rev\) over concatenation.

  1. We prove that \(\forall ys. rev(xs {+}\!\!{+} ys) = rev\;ys {+}\!\!{+} rev\;xs\) by induction on \(xs\).

    Base case: with justifications elided: \[rev([] {+}\!\!{+} ys) = rev\;ys = rev\;ys {+}\!\!{+} []\]

    Inductive case

    \begin{array}{cll} & rev(x{:}xs {+}\!\!{+} ys) \\ = & rev(x{:}(xs {+}\!\!{+} ys)) & \mbox{definition of ${+}\!\!{+}$} \\ = & rev(xs {+}\!\!{+} ys) {+}\!\!{+} [x] & \mbox{definition of $rev$} \\ = & (rev\;ys {+}\!\!{+} rev\; xs) {+}\!\!{+} [x] & \mbox{induction hypothesis} \\ = & rev\;ys {+}\!\!{+} (rev\; xs {+}\!\!{+} [x]) & \mbox{associativity of ${+}\!\!{+}$} \\ = & rev\;ys {+}\!\!{+} rev(x{:}xs) & \mbox{definition of $rev$} \\ \end{array}
  2. We prove that rev(rev xs) = xs by structural induction on xs.

    Base case:

    \begin{array}{cll} &rev(rev\ []) \\ = &rev []& \mbox{definition} \\ = &[]& \mbox{definition} \end{array}

    Inductive case

    \begin{array}{cll} &rev(rev(x{:}xs)) \\ = &rev(rev\;xs{+}\!\!{+}~[x])& \mbox{definition} \\ = &rev [x] {+}\!\!{+} rev(rev\;xs)& \mbox{lemma 1} \\ = &rev [x] {+}\!\!{+} xs& \mbox{induction hypothesis} \\ = &x{:}xs& \mbox{definition(s)} \end{array}

In the above proofs, we assumed that the associativity and right identity of \({+}\!\!{+}\) were proven elsewhere. They would require their own inductive proofs.

2024-04-19 Fri 10:38
