If you're like me, you've often heard of well-foundedness, but never bothered to learn what it really means or why it's important. If you're like most people, on the other hand, then you've never even heard the word and you couldn't care less. Well, go ahead, skip this post. You'll be back and eager to learn once you try and fail to read part 2.
...see? I told you you'd be back. Since you're presumably only interested in understanding enough about well-foundedness to follow part 2, I might as well use the same example: a little expression language for evaluating numerical expressions such as 2 + 3 * 4.
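For concreteness, here is roughly what such a language looks like in Haskell. (This is a sketch of my own; part 2 has the real definitions, which may differ in small ways.)

    -- A tiny expression language: literals, addition, multiplication.
    data Expr = Lit Integer
              | Add Expr Expr
              | Mul Expr Expr
      deriving (Eq, Show)

    -- Evaluate an expression to the number it denotes.
    eval :: Expr -> Integer
    eval (Lit n)     = n
    eval (Add e1 e2) = eval e1 + eval e2
    eval (Mul e1 e2) = eval e1 * eval e2

The example 2 + 3 * 4 becomes Add (Lit 2) (Mul (Lit 3) (Lit 4)), and eval maps it to 14.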
Since there are no looping constructs in this simple language, it's reasonable to expect evaluation to always terminate. It's possible to prove this using traditional induction, say, on the height of the expression tree. Induction arguments are routine in mathematics, and thus the previous statement is usually all that's needed to convince another mathematician that the eval function always terminates; but since today's post is about different kinds of induction, I'll flesh out the proof so you can see the difference.
proof: by (traditional) induction on the height of the expression tree. Base case: if the tree has height one, i.e. if it is a leaf, then the expression is a literal number of the form Lit n and the evaluation terminates immediately by returning n. For the induction step, suppose eval e terminates for all expressions e of height strictly less than n. We wish to show that eval also terminates on expressions of height exactly n. There are two cases: expressions of the form Add e1 e2 and expressions of the form Mul e1 e2. In either case, e1 and e2 are subtrees of the entire expression, and thus have a strictly smaller height. By the induction hypothesis, eval e1 and eval e2 both terminate, and thus the expressions eval e1 + eval e2 and eval e1 * eval e2 also terminate (assuming Haskell's numeric primitives do). ☐
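If you prefer code to prose, the measure the proof recurs on can be written down directly (the name height is mine):

    -- Height of the expression tree; leaves have height one,
    -- matching the base case of the proof.
    height :: Expr -> Int
    height (Lit _)     = 1
    height (Add e1 e2) = 1 + max (height e1) (height e2)
    height (Mul e1 e2) = 1 + max (height e1) (height e2)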
There are other versions of the above argument which are equally valid. For example, instead of the expression height, we could use the expression size, meaning the number of constructors (Lit, Add and Mul) used to build the expression.
This metric is also smaller for e1 and e2 than for Add e1 e2 and Mul e1 e2, so the induction step will succeed.
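As a sketch, the size measure looks like this (again, the name is mine):

    -- Size of an expression: the number of constructors used to build it.
    size :: Expr -> Int
    size (Lit _)     = 1
    size (Add e1 e2) = 1 + size e1 + size e2
    size (Mul e1 e2) = 1 + size e1 + size e2

For instance, size (Add (Lit 2) (Mul (Lit 3) (Lit 4))) is 5, a fact we'll use in a moment.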
The two variants I have discussed so far use numbers (height, size) to argue termination. The idea is that the base case proves the statement to be true when the number is one, and then the induction step can use this fact to prove that the statement is also true when the number is two. This new fact can in turn be used to prove the statement when the number is three, after which we can handle case four, then case five, and so on. In the end, we'll have a proof for all numbers, meaning that expressions of all sizes and heights have a terminating evaluation.
An alternative view of induction is to begin with the statement about the particular number we're actually interested in, say five. The induction step gives us a conditional proof of the statement, which depends on the statement being true for the number four. But the statement is true for the number four, because it's true for three, because it's true for two, because it's true for one. And the statement is true for one because the base case proves so.
This second view is even a bit more "efficient" than the first, because we don't actually need to decrement so slowly. Add (Lit 2) (Mul (Lit 3) (Lit 4)), of size 5, has a terminating evaluation because Lit 2 and Mul (Lit 3) (Lit 4), of sizes 1 and 3, also terminate; the first by the base case, the second because Lit 3 and Lit 4 terminate. Those last two expressions have size 1 and are also proved to terminate by the base case. Notice that we managed not to discuss expressions of sizes 2 and 4 at all.
Using the more efficient version, we see that the proof for n doesn't need to recur on n-1, but can actually use any m as long as m < n. We can generalize further: even though induction traditionally uses the less-than relation on the natural numbers, it is perfectly possible to use other suitable relations. Structural induction, for instance, says that the expression e1 is less than the expression Add e1 e2 because the latter contains e1 as a sub-expression.
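Written out as a formula, this generalized principle (strong induction over the naturals, and well-founded induction in general) reads:

    (∀n. (∀m. m < n → P(m)) → P(n))  →  ∀n. P(n)

That is, to prove the statement P for n, we may assume P for everything below n, where "below" may be any suitable relation, such as the structural less-than just described.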
This structural less-than is very different from the numerical less-than. For one thing, the numerical less-than is total, meaning that given two numbers, one of the two is going to be smaller than (or equal to) the other. With structures such as Add (Lit 1) (Lit 2) and Mul (Lit 3) (Lit 4), neither is contained within the other, so the structural less-than isn't total. But while our intuitive, everyday notion of ordering seems to require totality, the induction principle doesn't.
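To make the structural less-than concrete, here is one way to write it as a Haskell predicate (isSubexprOf is a hypothetical helper of my own, not something from part 2):

    -- isSubexprOf e e' holds when e occurs strictly inside e'.
    isSubexprOf :: Expr -> Expr -> Bool
    isSubexprOf _ (Lit _)     = False
    isSubexprOf e (Add e1 e2) = e == e1 || e == e2
                             || isSubexprOf e e1 || isSubexprOf e e2
    isSubexprOf e (Mul e1 e2) = e == e1 || e == e2
                             || isSubexprOf e e1 || isSubexprOf e e2

On the two expressions above, isSubexprOf returns False in both directions: neither is below the other, and the induction doesn't mind that totality is missing.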
One property which is required, however, is (you guessed it) well-foundedness. A relation is well-founded when it has no infinite descending chains: when recurring on smaller and smaller values, we must hit a minimal value, a base case, after finitely many steps. This is required: otherwise, the proof could go on and on and on without the sceptics ever being convinced.
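A hedged little example of what can go wrong without it: the less-than relation is well-founded on the natural numbers (any chain n > n-1 > ... bottoms out at 0), but not on all the integers, where -1 > -2 > -3 > ... descends forever. Here is a recursive definition, a toy of my own, that silently relies on well-foundedness:

    -- Sum the integers from n down to 0. Terminates only when the
    -- recursion follows a descending chain that actually reaches 0.
    sumTo :: Integer -> Integer
    sumTo 0 = 0
    sumTo n = n + sumTo (n - 1)

sumTo 5 dutifully returns 15, but sumTo (-1) recurs forever, following an infinite descending chain and never reaching a base case; no sceptic would accept that as a proof.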
Speaking of sceptics: are you convinced that well-foundedness is a useful concept now?