## Wednesday, December 09, 2009

### intra-line diff

Introducing linediff, a quick ruby wrapper around diff to highlight changed words instead of changed lines.
```ruby
#!/usr/bin/ruby
pre, post = $stdin.read.split("\n=======\n")
post[-1..-1] = ""

class String
  def bash
    `bash -c #{inspect.gsub("$", "\\$")}`
  end
  def binspect
    split("\n").map {|s| "echo #{s.inspect.gsub("$", "\\$")}" }.join(";")
  end
  def to_lines
    gsub("\n", "\\n").gsub(" ", "\n")
  end
  def to_lines!
    gsub!("\n", "\\n")
    gsub!(" ", "\n")
  end
  def to_words
    gsub("\n", " ").gsub("\\n", "\n")
  end
  def to_words!
    gsub!("\n", " ")
    gsub!("\\n", "\n")
  end
end

pre.to_lines!
post.to_lines!
len = pre.length + post.length
both = "diff -U#{len} <(#{pre.binspect}) <(#{post.binspect})".bash
both = both.split("\n")[3..-1]
pre  = both.select {|s| s[0..0] != "+" }.join("\n")
post = both.select {|s| s[0..0] != "-" }.join("\n")
pre.gsub!(/^-/, "+")
pre.gsub!(/^ /, "")
post.gsub!(/^ /, "")
pre.to_words!
post.to_words!
puts "#{pre}\n=======\n#{post}"
```

I use it mostly to handle git conflicts, which, in my current setup, show up more often than they should (I repeatedly sync with both a git and an svn repository, which I should not be doing according to git-svn). Here's an example conflict. Can you spot the difference?

```
<<<<<<< HEAD:thesis.tex
Once we are done with binary functions, we turn to unary functions, shifting
our focus from the aspect weavers to the aspects themselves. Our goal is to
help programmers to build aquariums containing the aspects they need, while
minimizing the number of proofs needed to achieve this goal. To this end, we
provide several generic proofs that create, extend, and translate aquariums
with aspects of specific forms. Note, however, that not every pair of
functions that commute can be shown to do so using our proofs only, so the
programmers are expected to complete our collection with proofs of their own
whenever the aspects they need are not of the specific forms covered by our
theorems.
\begin{enumerate}
\item For every associative and commutative binary function, such as those
created using our unordered pair strategy, we can create an aquarium of unary
functions corresponding to the elements of the algebraic datatype.
\item Given an aquarium containing a function $f$, the aquarium can be
extended with function $f^n$, for any natural number $n$. The function $f^n$
is the function which repeatedly applies, $n$ times, the function $f$. If $f$
is invertible, the theorem extends to negative integers.
=======
Once we are done with binary functions, we turn to unary functions, shifting
our focus from the aspect weavers to the aspects themselves. Our goal is to
help programmers to build aquariums containing the aspects they need, while
minimizing the number of proofs needed to achieve this goal. To this end, we
provide several generic proofs that create, extend, and translate aquariums
with aspects of specific forms. Note, however, that not every pair of
functions that commute can be shown to do so using our proofs only, so the
programmers are expected to complete our collection with proofs of their own
whenever the aspects they need are of the specific forms covered by our
theorems.
\begin{enumerate}
\item For every associative and commutative binary function, such as those
created using our unordered pair strategy, we can create an aquarium of unary
functions corresponding to the elements of the algebraic datatype.
\item Given an aquarium containing a function $f$, the aquarium can be
extended with function $f^n$, for any natural number $n$. The function $f^n$
is the function which repeatedly applies, $n$ times in total, the function
$f$. If $f$ is invertible, the theorem extends to negative integers.
>>>>>>> df2a5b6eb97ded25a34412d6ff946e0c513da6f3:thesis.tex
\item If $M$ is a functor, then applying $M$ to all the members of an
aquarium transforms the aquarium into a new one, while preserving the
pairwise commutativity of its elements.
%\item We show how parametricity \cite{free_theorems} ensures that
structure-manipulating aspects cannot interfere with element-manipulating
aspects, thereby reducing the programmer's remaining burden of proof.
\end{enumerate}
```

When I encounter such a conflict, I select the code between the "<<<<<<<" and ">>>>>>>" markers and type "!linediff" to tell vim to filter the lines through my ruby script.

```
<<<<<<< HEAD:thesis.tex
Once we are done with binary functions, we turn to unary functions, shifting
our focus from the aspect weavers to the aspects themselves. Our goal is to
help programmers to build aquariums containing the aspects they need, while
minimizing the number of proofs needed to achieve this goal. To this end, we
provide several generic proofs that create, extend, and translate aquariums
with aspects of specific forms. Note, however, that not every pair of
functions that commute can be shown to do so using our proofs only, so the
programmers are expected to complete our collection with proofs of their own
whenever the aspects they need are not of the specific forms covered by our
theorems.
\begin{enumerate}
\item For every associative and commutative binary function, such as those
created using our unordered pair strategy, we can create an aquarium of unary
functions corresponding to the elements of the algebraic datatype.
\item Given an aquarium containing a function $f$, the aquarium can be
extended with function $f^n$, for any natural number $n$. The function $f^n$
is the function which repeatedly applies, $n$ times, the function $f$. If $f$
is invertible, the theorem extends to negative integers.
=======
Once we are done with binary functions, we turn to unary functions, shifting
our focus from the aspect weavers to the aspects themselves. Our goal is to
help programmers to build aquariums containing the aspects they need, while
minimizing the number of proofs needed to achieve this goal. To this end, we
provide several generic proofs that create, extend, and translate aquariums
with aspects of specific forms. Note, however, that not every pair of
functions that commute can be shown to do so using our proofs only, so the
programmers are expected to complete our collection with proofs of their own
whenever the aspects they need are of the specific forms covered by our
theorems.
\begin{enumerate}
\item For every associative and commutative binary function, such as those
created using our unordered pair strategy, we can create an aquarium of unary
functions corresponding to the elements of the algebraic datatype.
\item Given an aquarium containing a function $f$, the aquarium can be
extended with function $f^n$, for any natural number $n$. The function $f^n$
is the function which repeatedly applies, $n$ times in total, the function
$f$. If $f$ is invertible, the theorem extends to negative integers.
>>>>>>> df2a5b6eb97ded25a34412d6ff946e0c513da6f3:thesis.tex
\item If $M$ is a functor, then applying $M$ to all the members of an
aquarium transforms the aquarium into a new one, while preserving the
pairwise commutativity of its elements.
%\item We show how parametricity \cite{free_theorems} ensures that
structure-manipulating aspects cannot interfere with element-manipulating
aspects, thereby reducing the programmer's remaining burden of proof.
\end{enumerate}
```

Note that my script detects the "=======" separator and diffs between the lines appearing before and after it. It should be trivial to adapt it to examine two different files, like the original diff does. Anyway, the result is that the changed words get prefixed with a "+", which I highlight using vim's ":hlsearch" option.

```
<<<<<<< HEAD:thesis.tex
Once we are done with binary functions, we turn to unary functions, shifting
our focus from the aspect weavers to the aspects themselves. Our goal is to
help programmers to build aquariums containing the aspects they need, while
minimizing the number of proofs needed to achieve this goal. To this end, we
provide several generic proofs that create, extend, and translate aquariums
with aspects of specific forms. Note, however, that not every pair of
functions that commute can be shown to do so using our proofs only, so the
programmers are expected to complete our collection with proofs of their own
whenever the aspects they need are +not of the specific forms covered by our
theorems.
\begin{enumerate}
\item For every associative and commutative binary function, such as those
created using our unordered pair strategy, we can create an aquarium of unary
functions corresponding to the elements of the algebraic datatype.
\item Given an aquarium containing a function $f$, the aquarium can be
extended with function $f^n$, for any natural number $n$. The function $f^n$
is the function which repeatedly applies, $n$ +times, the function $f$. If
$f$ is invertible, the theorem extends to negative integers.
=======
Once we are done with binary functions, we turn to unary functions, shifting
our focus from the aspect weavers to the aspects themselves. Our goal is to
help programmers to build aquariums containing the aspects they need, while
minimizing the number of proofs needed to achieve this goal. To this end, we
provide several generic proofs that create, extend, and translate aquariums
with aspects of specific forms. Note, however, that not every pair of
functions that commute can be shown to do so using our proofs only, so the
programmers are expected to complete our collection with proofs of their own
whenever the aspects they need are of the specific forms covered by our
theorems.
\begin{enumerate}
\item For every associative and commutative binary function, such as those
created using our unordered pair strategy, we can create an aquarium of unary
functions corresponding to the elements of the algebraic datatype.
\item Given an aquarium containing a function $f$, the aquarium can be
extended with function $f^n$, for any natural number $n$. The function $f^n$
is the function which repeatedly applies, $n$ +times +in +total, the function
$f$. If $f$ is invertible, the theorem extends to negative integers.
>>>>>>> df2a5b6eb97ded25a34412d6ff946e0c513da6f3:thesis.tex
\item If $M$ is a functor, then applying $M$ to all the members of an
aquarium transforms the aquarium into a new one, while preserving the
pairwise commutativity of its elements.
%\item We show how parametricity \cite{free_theorems} ensures that
structure-manipulating aspects cannot interfere with element-manipulating
aspects, thereby reducing the programmer's remaining burden of proof.
\end{enumerate}
```

Tada! Now I know which words have changed since my last unfortunate sync-and-fix-conflicts cycle.
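The word-marking idea itself doesn't depend on the diff trick above; here's a sketch of the same behaviour in Python using difflib (an illustrative stand-in, not the ruby script; all names are made up):

```python
import difflib

def linediff(pre: str, post: str) -> str:
    """Mark changed words with '+', mimicking the ruby script's output."""
    a, b = pre.split(), post.split()
    out_pre, out_post = [], []
    # get_opcodes() yields (tag, i1, i2, j1, j2) spans over the word lists.
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "equal":
            out_pre += a[i1:i2]
            out_post += b[j1:j2]
        else:
            # Changed, inserted, or deleted words get the "+" prefix.
            out_pre += ["+" + w for w in a[i1:i2]]
            out_post += ["+" + w for w in b[j1:j2]]
    return " ".join(out_pre) + "\n=======\n" + " ".join(out_post)

result = linediff("n times, the function", "n times in total, the function")
```

Running it on the thesis example above marks exactly the words that the ruby version highlights.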

## Sunday, September 06, 2009

### Samuel's Really Straightforward Proof of the Parametricity Result, extended (trivially) to dependent types.

[I'll eventually write a version which begins by explaining free theorems, but for now here's a proof for the cognoscenti]

Consider an agda expression, any typed agda expression whose type is a Set1. Here are two examples, including one using dependent types.

```agda
id : {a : Set}
   → a
   → a
id x = x

ff : {a : Set}
   → {s : a → a}
   → (F : a → Set)
   → (f : {x : a}
        → F x
        → F (s x))
   → {x : a}
   → (y : F x)
   → F (s (s x))
ff F f y = f (f y)
```

It is very straightforward to obtain a term, ff₁, which has the same type except that Set has been upgraded to a Set1. In fact, the expression ff already has the desired type; we just need to change its type annotation.

```agda
id₁ : {a : Set1}
    → a
    → a
id₁ x = x

ff₁ : {a : Set1}
    → {s : a → a}
    → (F : a → Set1)
    → (f : {x : a}
         → (F x)
         → (F (s x)))
    → {x : a}
    → (y : (F x))
    → (F (s (s x)))
ff₁ F f y = f (f y)
```

Now, let's make this change a bit more generic. Instead of fixing Set1, let's use an abstract target, Set#. Since agda doesn't know that Set# is a member of the Set hierarchy, the validity of the expression (x : a) does not follow from the fact that (a : Set#), as it would for (a : Set) or (a : Set1). For this reason, it is necessary to introduce the abstract type transformer (value# : Set# → Set1), which inserts Set# into the type hierarchy.

It is very straightforward to obtain a term, ff#, which has the same type as ff except that Set has been upgraded to a Set# and value# has been inserted at the appropriate places. In fact, the expression ff already has the desired type; again, we just need to change its type annotation.

```agda
id# : {a : Set#}
    → value# a
    → value# a
id# x = x

ff# : {a : Set#}
    → {s : value# a → value# a}
    → (F : value# a → Set#)
    → (f : {x : value# a}
         → value# (F x)
         → value# (F (s x)))
    → {x : value# a}
    → (y : value# (F x))
    → value# (F (s (s x)))
ff# F f y = f (f y)
```

And now the punchline is that ff# is precisely the proof of the free theorem for ff, using the following instantiations for Set# and value#.

```agda
record Set# : Set1 where
  field
    a  : Set
    a' : Set
    a~ : a → a' → Set

record value# (a# : Set#) : Set where
  open Set# a#
  field
    x  : a
    x' : a'
    x~ : a~ x x'
```

Even more free theorems can be obtained by using other instantiations. Note that in the usual formulation of the parametricity result, the types of the free theorems are given in a much longer, record-free form where each field is expanded into an individual parameter. It is tedious, but straightforward, to convert between the two representations.

```agda
free-id' : {a  : Set}
         → {a' : Set}
         → {a~ : a → a' → Set}
         → {x  : a }
         → {x' : a'}
         → a~ x x'
         → a~ (id x) (id x')
free-id' {a} {a'} {a~} {u} {u'} u~ = value#.x~ r# where
  a# : Set#
  a# = record {a = a; a' = a'; a~ = a~}

  u# : value# a#
  u# = record {x = u; x' = u'; x~ = u~}

  r# : value# a#
  r# = u#
```

```agda
free-ff' : {a  : Set}
         → {a' : Set}
         → {a~ : a → a' → Set}
         → {s  : a  → a }
         → {s' : a' → a'}
         → {s~ : ∀ {x x'}
               → a~ x x'
               → a~ (s x) (s' x')}
         → {F  : a  → Set}
         → {F' : a' → Set}
         → (F~ : ∀ {x x'}
               → a~ x x'
               → F x → F' x'
               → Set)
         → {f  : ∀ {x}
               → F x
               → F (s x)}
         → {f' : ∀ {x'}
               → F' x'
               → F' (s' x')}
         → (f~ : ∀ {x x' x~}
               → {y  : F  x }
               → {y' : F' x'}
               → F~ x~ y y'
               → F~ (s~ x~) (f y) (f' y'))
         → ∀ {u u' u~}
         → {y  : F  u }
         → {y' : F' u'}
         → F~ u~ y y'
         → F~ (s~ (s~ u~)) (ff F f y) (ff F' f' y')
free-ff' {a} {a'} {a~} {s} {s'} {s~} {F} {F'} F~ {f} {f'} f~ {u} {u'} {u~} {y} {y'} y~
  = value#.x~ r# where
    a# : Set#
    a# = record {a = a; a' = a'; a~ = a~}

    s# : value# a# → value# a#
    s# x# = let open value# x# in record
      { x  = s  x
      ; x' = s' x'
      ; x~ = s~ x~
      }

    F# : value# a# → Set#
    F# x# = let open value# x# in record
      { a  = F  x
      ; a' = F' x'
      ; a~ = F~ x~
      }

    f# : {x# : value# a#}
       → value# (F# x#)
       → value# (F# (s# x#))
    f# {x#} y# = let open value# y# in record
      { x  = f  x
      ; x' = f' x'
      ; x~ = f~ x~
      }

    u# : value# a#
    u# = record {x = u; x' = u'; x~ = u~}

    y# : value# (F# u#)
    y# = record {x = y; x' = y'; x~ = y~}

    r# : value# (F# (s# (s# u#)))
    r# = ff# F# f# y#
```

## Thursday, February 05, 2009

### An even more careful fox

Firefox goes to great lengths to preserve the user's work. For the rare cases when it fails, read this.

If I browse for some time, opening a bazillion tabs before the computer crashes, firefox will offer to restore those tabs as soon as I start it again.

If I type some text in a <TEXTAREA/> form before the computer crashes, firefox will restore the text.

But if I type some text in a fancy editor such as WikEd, my changes will be lost. That's because the (HTML representation of the) text I've created was generated by a great many javascript function calls, and firefox doesn't know whether it's safe to simply call those functions again. Thankfully, firefox does remember the resulting (HTML representation of the) text! It's hidden in your profile's sessionstore.bak file.

```
~/.mozilla/firefox/*.default/sessionstore.bak
```

The recovery process, however, is a little bit more involved. In the case of WikEd, I searched for the title of the Wiki article I was editing, and found an innerHTML:"..." assignment soon after. That string contained all of the (HTML representation of the) text I had lost, but it was encoded using escaped hexadecimal sequences ("\xE9"), plus other javascript and HTML annoyances. I found that the easiest way to recover the pure ASCII text was to copy the innerHTML string into the following template:

```html
<script>
document.writeln("...");
</script>
```

Then, I loaded the resulting html file into a separate firefox tab: the hexadecimal sequences were interpreted, and the HTML code was rendered. Copy-pasting the HTML into WikEd lost the HTML formatting but kept the ASCII text, which is exactly what I wanted. My changes were recovered! Thank you, firefox.
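If you'd rather not route the string through document.writeln, the escape sequences can also be decoded offline. Here's a sketch in Python, assuming the fragment uses the standard JavaScript-style \xNN escapes (the sample string is made up):

```python
import codecs

# A fragment like those found in sessionstore.bak (hypothetical content):
raw = r"Bonjour \xE9t\xE9"

# The "unicode_escape" codec interprets \xNN (and \uNNNN) sequences:
decoded = codecs.decode(raw, "unicode_escape")
```

This recovers the text, though you'd still need to strip the surrounding javascript and HTML by hand.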

## Saturday, January 24, 2009

### Rant against charitable organizations (and solution)

I happen to be a highly optimistic person. That's a good thing.

Sometimes, my less optimistic friends think the world is a terrible place to live in. Thankfully, they're wrong, so I can cheer them up by pointing out solutions to the problems they perceive. Given this ability, I think I'd make a great volunteer at one of those teen help hotlines. The problem is, none of them wants my help! Well, none that I could find in my area, anyway.

You see, I never donate to charitable organizations. I think it's dumb to give an organization money so that someone else can help people out and get paid for it. I say, cut out the middleman! Let me give you a hand instead.

And you know what they replied? "Great, we do have some volunteer positions available. Your task would be to call people and ask them for donations. Interested?"

They really don't want us concerned citizens to help. Like everybody else, they just want our money.

And of course, being the resourceful optimist that I am, I've got a solution to that. And of course, being the enthusiastic developer that I am, my solution involves building a website.

An ordinary charitable organization would use its website to tell you about its plans for solving the problems of the world, and to request your "help" (that is, your money) for implementing that plan. On my hypothetical website, I would ask you about your plan for solving the problems of the world. And I would show you the plans of the other users, so you can contact them and contribute if you think the plan is worth trying.

Most importantly, the plans would be highly detailed. Where ordinary charitable organizations would merely draw a pie chart showing how they allocate your money, our plans would tell you exactly which actions we want to see performed, and why each action contributes to (a sub-goal which contributes to a sub-goal which contributes to) resolving the issue at hand, say, "ending world hunger".

One of the reasons for using such detailed plans is to convince visitors that our plan will actually succeed. I think there is a tendency to assume that, by default, big organizations know what they are doing, and we can't compete with that bias unless we show that we know what we are doing.

And for those of us who don't know what we're doing, we can always ask for the help of our creative contributors. As a project grows, it should become more and more complete, more and more plausible, and attract more and more contributors. If it doesn't, then it probably wasn't a very good idea to begin with.

The other reason for writing highly detailed plans is to keep track of progress. At the most-detailed level, a plan should consist of concrete actions which individuals can perform. That way, resourceful visitors who are convinced by a plan will be able to contribute by performing the concrete action, and ticking it off as completed.

The idea of empowering the community by replacing big organizations with self-organizing teams of volunteers is not new; I've merely translated existing ideas from the world of software to the world of charity. What I'm promoting, it seems, could be called Open Source Charity (or "Open Plan", perhaps?).

And what this means, is you don't even have to wait for me to implement the aforementioned website. If you have an idea that could change the world, then just use your favourite issue tracking system, and start organizing issues into a concrete plan. Easy enough, right? The world has plenty of good things as it is, like I've claimed.

Well, have fun making the world a better place. Good luck!

## Friday, January 09, 2009

### well-foundedness, part 2 of 2: code is data. or is it codata?

Whether it is represented as finite strings or as finite trees, code is finite, and thus code is data, not codata. That should settle the question.

```haskell
code = 2 + 3 * 4
string_data = "2 + 3 * 4"

data Exp = Lit Int | Add Exp Exp | Mul Exp Exp
tree_data = Add (Lit 2) (Mul (Lit 3) (Lit 4))
```

Except, of course, that's not really the question I wanted to ask. In my very first post, I warned that manipulating code as raw strings or trees was a bit primitive, and I've since been taught an alternative: higher-order abstract syntax. For the purpose of this post, that basically means I'm representing the code using a function instead of a data-structure.
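To make that concrete, here is a tiny sketch of a function-based representation in Python (a made-up encoding for illustration, not higher-order abstract syntax proper): instead of building a tree, the expression is a function that asks its caller how to interpret each construct.

```python
# "2 + 3 * 4" represented as a function over interpretations of the
# constructs, instead of as a tree:
def expression(lit, add, mul):
    return add(lit(2), mul(lit(3), lit(4)))

# One choice of interpretation evaluates the expression...
value = expression(lambda n: n, lambda a, b: a + b, lambda a, b: a * b)

# ...another pretty-prints it.
text = expression(str, "({} + {})".format, "({} * {})".format)
```

The same expression yields 14 under one interpretation and a string under the other, with no data-structure in sight.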

With data-structures, proofs are usually done using structural induction (see part 1). But when using a function to represent code, what could be its internal structure? What are its substructures?

The clue lies in the exponential notation for function types. But first, remember the familiar multiplicative notation for, well, product types. If (a, b, c) is a value of type A × B × C, then b would be a substructure of type B. That seems reasonable. Now, consider a function λx. f x of type A → B. The exponential notation for this type is B^A = B × B × ... × B, with one occurrence of B per inhabitant of A. If these inhabitants are (a₀, a₁, ...), then an alternate representation of our function could be (f a₀, f a₁, ...). This expanded form contains exactly the same information as the lambda notation, it's just a lot more verbose, especially when A has infinitely many inhabitants.
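When A is finite, the expanded form can be written out literally; a small Python sketch, with the booleans standing in for A:

```python
# A function over a two-element domain...
A = [False, True]
f = lambda x: not x

# ...and its expanded form: one component per inhabitant of A,
# carrying exactly the same information as f itself.
expanded = tuple(f(a) for a in A)
```

Each component f(a) of the tuple is a "substructure" of the function, in the same sense that b is a substructure of (a, b, c).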

Seeing this explicit notation, an immediate conclusion is that for any value a of the appropriate type, the (value corresponding to the) application f a is a substructure of f. Wow! It was totally unexpected for me. That's why I'm writing about it. To share my excitement. But don't get too excited yet: there's a warning coming up.

I have mildly motivated our search for substructures in functions with the relatively exciting perspective of creating a structural induction proof whose decreasing structures include functions. Well, I hope you're not that motivated, because you can't do that. Function application isn't well-founded at all, at least in some languages. Take the identity function, for example. In Haskell, it is possible to apply the identity function to itself. The resulting substructure, obviously, will be the identity function. But that substructure also has the identity function as a substructure, and by continuing to recur in this fashion, we will never reach a base case. The proof will instead go on, forever, in an infinite regress.
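The offending self-application is easy to exhibit in any sufficiently permissive language; a Python sketch:

```python
identity = lambda x: x

# The "substructure" of identity at the argument identity...
sub = identity(identity)

# ...is identity itself, so recurring on substructures never
# reaches a base case.
```

The chain identity, identity(identity), identity(identity)(identity), ... never gets any smaller, which is exactly the infinite regress described above.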

Uh? I hope you're surprised! How could structural induction work with all of those other kinds of substructure relations, yet fail with this one? What's so special about functions?

Let's pause to ponder the presumed absurdity of an infinitely-nested structure. Take lists, for example. They are finite because after a finite number of elements, we reach the end of the list. An infinitely-nested list would go on forever, cons after cons, never reaching an end. While this would be absurd in most programming languages, infinite lists are actually commonplace in Haskell... So maybe functions aren't so unique after all!
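Haskell's infinite lists have a rough analogue in Python's generators, which also describe a structure with no end, consumed lazily:

```python
import itertools

# An "infinite list" of naturals: it conses forever, never reaching an end.
naturals = itertools.count(0)

# We can still observe any finite prefix of it.
prefix = list(itertools.islice(naturals, 5))
```

Only finite prefixes are ever materialized; asking for the whole thing would not terminate, which is the defining trait of codata.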

Functions and infinite lists are examples of codata, and induction is indeed known not to work with those. Expressions of infinite depth do not have a terminating evaluation; the evaluator can't even read the entire input! I hear we're supposed to use "coinduction" instead, but I can't comment on that technique yet. What I can say, however, is that even though Haskell allows data and codata to mix, it's apparently a bad idea, and there are languages with type systems strong enough to prevent it. In these languages, there are two distinct types for lists, but I haven't seen a distinct type for "cofunctions" yet. I suspect that a function must be codata whenever some of its possible return values are.

Well, I bet you've heard enough about well-founded and non-well-founded structures (data and codata) for your entire life. But personally, I think I'm just getting started.

### well-foundedness, part 1 of 2: who cares?

If you're like me, you've often heard of well-foundedness, but never really bothered to learn what it really meant nor why it's important. If you're like most people, on the other hand, then you've never even heard the word and you couldn't care less. Well, go ahead, skip this post. You'll be back and eager to learn once you try and fail to read part 2.

...see? I told you you'd be back. Since you're presumably only interested in understanding enough about well-foundedness in order to follow part 2, I might as well use the same example: a little expression language for evaluating numerical expressions such as 2 + 3 * 4.

```haskell
data Exp = Lit Int | Add Exp Exp | Mul Exp Exp

eval :: Exp -> Int
eval (Lit n) = n
eval (Add e1 e2) = eval e1 + eval e2
eval (Mul e1 e2) = eval e1 * eval e2

eval (Add (Lit 2) (Mul (Lit 3) (Lit 4)))
```

Since there are no looping constructs in this simple language, it's reasonable to expect evaluation to always terminate. It's possible to prove so using traditional induction, say, on the height of the expression tree. Induction arguments are routine in mathematics, and thus the previous statement is usually all that is needed in order to convince another mathematician that the eval function always terminates; but since today's post is about different kinds of induction, I'll flesh out the proof so you can see the difference.

proof: by (traditional) induction on the height of the expression tree. Base case: if the tree has height one, i.e. if it is a leaf, then the expression is a literal number of the form Lit n and the evaluation terminates immediately by returning n. For the induction step, suppose eval e terminates for all expressions e of height strictly less than n. We wish to show that eval also terminates on expressions of height exactly n. There are two cases: expressions of the form Add e1 e2 and expressions of the form Mul e1 e2. In either case, e1 and e2 are subtrees of the entire expression, and thus have a strictly smaller height. By the induction hypothesis, eval e1 and eval e2 both terminate, and thus the expressions eval e1 + eval e2 and eval e1 * eval e2 also terminate (assuming Haskell's numeric primitives do). ☐

There are other versions of the above argument which are equally valid. For example, instead of the expression depth, we could use the expression size, meaning the number of constructors (Lit, Add and Mul) used to construct the expression.

```haskell
size :: Exp -> Int
size (Lit n) = 1
size (Add e1 e2) = 1 + size e1 + size e2
size (Mul e1 e2) = 1 + size e1 + size e2
```

This metric is also smaller for e1 and e2 than for Add e1 e2 and Mul e1 e2, so the induction step will succeed.

The two variants I have discussed so far use numbers (depth, size) to argue termination. The idea is that the base case proves the statement to be true when the number is one, and then the induction step can use this fact to prove that the statement is also true when the number is two. This new fact can in turn be used to prove the statement when the number is three, after which we can handle case four, then case five, and so on. In the end, we'll have a proof for all numbers, meaning that expressions of all sizes and depths have a terminating evaluation.

An alternative view of induction is to begin with a statement with a particular number, say five, about which we are actually interested. The induction step gives us a conditional proof of the statement, which depends on the statement to be true for the number four. But the statement is true for the number four, because it's true for three, because it's true for two, because it's true for one. And the statement is true for one because the base case proves so.

This second view is even a bit more "efficient" than the first, because we don't actually need to decrement so slowly. Add (Lit 2) (Mul (Lit 3) (Lit 4)), of size 5, has a terminating evaluation because Lit 2 and Mul (Lit 3) (Lit 4), of sizes 1 and 3, also terminate; the first by the base case, the second because Lit 3 and Lit 4 terminate. The last two expressions had size 1 and are also proved to terminate by the base case. Notice that we managed not to discuss expressions of size 2 and 4 at all.
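The sizes quoted above are easy to check mechanically; here's a quick Python stand-in for the Haskell datatype (names invented for the sketch):

```python
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    e1: object
    e2: object

@dataclass
class Mul:
    e1: object
    e2: object

def size(e):
    # One for the current constructor, plus the sizes of the subtrees.
    if isinstance(e, Lit):
        return 1
    return 1 + size(e.e1) + size(e.e2)

# The expression from the post, together with its two subtrees:
e = Add(Lit(2), Mul(Lit(3), Lit(4)))
sizes = (size(e), size(e.e1), size(e.e2))
```

The recursion in size bottoms out at the leaves, which is precisely the well-foundedness of the structural less-than.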

Using the more efficient version, we see that the proof for n doesn't need to recur on n-1, but can actually use any m as long as m < n. We can generalize further. Even though induction traditionally uses the less-than relation of natural numbers, it is totally possible to find other suitable relations. Structural induction, for instance, says that the expression e1 is less than expression Add e1 e2 because the latter contains e1 as a sub-expression.

This structural less-than is very different from the numerical less-than. For one thing, the numerical less-than is total, meaning that given two numbers, one of the two is going to be smaller than (or equal to) the other. With structures such as Add (Lit 1) (Lit 2) and Mul (Lit 3) (Lit 4), neither is contained within the other, so the structural less-than isn't total. But while our intuitive, everyday notion of ordering seems to require totality, the notion isn't required by the induction principle.

One notion which is required, however, is (you guessed it) well-foundedness. The property basically states that when recurring on smaller and smaller values, we will hit the base case after finitely many steps. This is required: otherwise, the proof could go on and on and on without the sceptics even being convinced.

Speaking of sceptics: are you convinced that well-foundedness is a useful concept now?