Saturday, January 24, 2009

Rant against charitable organizations (and solution)

I happen to be a highly optimistic person. That's a good thing.

Sometimes, my less optimistic friends think the world is a terrible place to live in. Thankfully, they're wrong, so I can cheer them up by pointing out solutions to the problems they perceive. Given this ability, I think I'd make a great volunteer at one of those teen help hotlines. The problem is, none of them wants my help! Well, none that I could find in my area, anyway.

You see, I never donate to charitable organizations. I think it's dumb to give an organization money so that someone else can help people out and get paid for it. I say, cut out the middleman! Let me give you a hand instead.

And you know what they replied? "Great, we do have some volunteer positions available. Your task would be to call people and ask them for donations. Interested?"

They really don't want us concerned citizens to help. Like everybody else, they just want our money.


And of course, being the resourceful optimist that I am, I've got a solution to that. And of course, being the enthusiastic developer that I am, my solution involves building a website.

An ordinary charitable organization would use its website to tell you about its plans for solving the problems of the world, and to request your "help" (that is, your money) for implementing that plan. On my hypothetical website, I would ask you about your plan for solving the problems of the world. And I would show you the plans of the other users, so you can contact them and contribute if you think the plan is worth trying.

Most importantly, the plans would be highly detailed. Where ordinary charitable organizations would merely draw a pie chart showing how they allocate your money, our plans would tell you exactly which actions we want to see performed, and why each action contributes to (a sub-goal which contributes to a sub-goal which contributes to) resolving the issue at hand, say, "ending world hunger".

One of the reasons for using such detailed plans is to convince visitors that our plan will actually succeed. I think there is a tendency to assume that, by default, big organizations know what they are doing, and we can't compete with that bias unless we show that we know what we are doing.

And for those of us who don't know what we're doing, we can always ask for the help of our creative contributors. As a project grows, it should become more and more complete, more and more plausible, and attract more and more contributors. If it doesn't, then it probably wasn't a very good idea to begin with.

The other reason for writing highly detailed plans is to keep track of progress. At the most-detailed level, a plan should consist of concrete actions which individuals can perform. That way, resourceful visitors who are convinced by a plan will be able to contribute by performing the concrete action, and ticking it off as completed.


The idea of empowering the community by replacing big organizations with self-organizing teams of volunteers is not new; I've merely translated existing ideas from the world of software to the world of charity. What I'm promoting, it seems, could be called Open Source Charity (or "Open Plan", perhaps?).

And what this means is that you don't even have to wait for me to implement the aforementioned website. If you have an idea that could change the world, then just use your favourite issue tracking system, and start organizing issues into a concrete plan. Easy enough, right? The world has plenty of good things as it is, just as I've claimed.


Well, have fun making the world a better place. Good luck!

Friday, January 09, 2009

well-foundedness, part 2 of 2: code is data. or is it codata?

Whether it is represented as finite strings or as finite trees, code is finite, and thus code is data, not codata. That should settle the question.

    code        = 2 + 3 * 4
    string_data = "2 + 3 * 4"

    data Exp    = Lit Int | Add Exp Exp | Mul Exp Exp
    tree_data   = Add (Lit 2) (Mul (Lit 3) (Lit 4))

Except, of course, that's not really the question I wanted to ask. In my very first post, I warned that manipulating code as raw strings or trees was a bit primitive, and I've since been taught an alternative: higher-order abstract syntax. For the purpose of this post, that basically means I'm representing the code using a function instead of a data-structure.
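
For concreteness, here is roughly what that can look like in Haskell. This is only a sketch of the general idea, with constructor names of my own choosing rather than the exact representation from that earlier post: the body of a lambda is stored as a Haskell function instead of as a tree with named variables.

    -- a sketch of higher-order abstract syntax (names are my own illustration,
    -- not the representation from the earlier post): binders are represented
    -- by Haskell functions rather than by named variables in a tree
    data HoasExp
      = HLit Int
      | HAdd HoasExp HoasExp
      | HMul HoasExp HoasExp
      | HLam (HoasExp -> HoasExp)  -- the lambda body is itself a Haskell function
      | HApp HoasExp HoasExp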


With data-structures, proofs are usually done using structural induction (see part 1). But when using a function to represent code, what could be its internal structure? What are its substructures?

The clue lies in the exponential notation for function types. But first, remember the familiar multiplicative notation for, well, product types. If (a, b, c) is a value of type A × B × C, then b would be a substructure of type B. That seems reasonable. Now, consider a function λx. f x of type A → B. The exponential notation for this type is B^A = B × B × ... × B, with one occurrence of B per inhabitant of A. If these inhabitants are (a0, a1, ...), then an alternate representation of our function could be (f a0, f a1, ...). This expanded form contains exactly the same information as the lambda notation; it's just a lot more verbose, especially when A has infinitely many inhabitants.
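
To make this concrete, here is a tiny Haskell sketch (names of my own invention) for the simplest interesting case, where A is Bool: a function Bool → Int carries exactly the same information as a pair of Ints, i.e. Int^Bool = Int × Int.

    -- a function out of Bool and a pair of results carry the same information
    expand :: (Bool -> Int) -> (Int, Int)
    expand f = (f False, f True)

    collapse :: (Int, Int) -> (Bool -> Int)
    collapse (x, y) = \b -> if b then y else x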

Seeing this explicit notation, an immediate conclusion is that for any value a of the appropriate type, the (value corresponding to the) application f a is a substructure of f. Wow! It was totally unexpected for me. That's why I'm writing about it. To share my excitement. But don't get too excited yet: there's a warning coming up.


I have mildly motivated our search for substructures in functions with the relatively exciting perspective of creating a structural induction proof whose decreasing structures include functions. Well, I hope you're not that motivated, because you can't do that. Function application isn't well-founded at all, at least in some languages. Take the identity function, for example. In Haskell, it is possible to apply the identity function to itself. The resulting substructure, obviously, will be the identity function. But that substructure also has the identity function as a substructure, and by continuing to recur in this fashion, we will never reach a base case. The proof will instead go on, forever, in an infinite regress.
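
In case you want to see the culprit with your own eyes: the self-application really does typecheck in Haskell (thanks to polymorphism), and its result is once again the identity function.

    -- id applied to itself is... id again, so the chain of "substructures"
    -- id, id id, (id id) id, ... never reaches a base case
    selfApplied :: a -> a
    selfApplied = id id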

Uh? I hope you're surprised! How could structural induction work with all of those other kinds of substructure relations, yet fail with this one? What's so special about functions?

Let's pause to ponder the presumed absurdity of an infinitely-nested structure. Take lists, for example. They are finite because after a finite number of elements, we reach the end of the list. An infinitely-nested list would go on forever, cons after cons, never reaching an end. While this would be absurd in most programming languages, infinite lists are actually commonplace in Haskell... So maybe functions aren't so unique after all!
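
Here is the classic Haskell example, for those who haven't met it yet; laziness is what makes it usable at all.

    -- an infinitely-nested list: cons after cons, never reaching the end
    ones :: [Int]
    ones = 1 : ones

    -- take 5 ones   ==>  [1,1,1,1,1]
    -- length ones   ==>  never terminates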

Functions and infinite lists are examples of codata, and induction is indeed known not to work with those. Expressions of infinite depth do not have a terminating evaluation; the evaluator can't even read the entire input! I hear we're supposed to use "coinduction" instead, but I can't comment on that technique yet. What I can say, however, is that even though Haskell allows data and codata to mix, it's apparently a bad idea, and there are languages with type systems strong enough to prevent it. In these languages, there are two distinct types for lists, but I haven't seen a distinct type for "cofunctions" yet. I suspect that a function must be codata whenever some of its possible return values are.
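
To give a rough idea of what such a separation could look like, here is a sketch of the two list types in Haskell syntax. Haskell itself does not enforce the distinction (laziness lets the first type be infinite too), so read this as an approximation of what those stricter languages provide.

    -- in a language distinguishing data from codata, these are different types:
    -- List is finite and well-founded (data), while Stream has no end case
    -- and is necessarily infinite (codata)
    data List a   = Nil | Cons a (List a)
    data Stream a = Stream a (Stream a)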

Well, I bet you've heard enough about well-founded and non-well-founded structures (data and codata) for your entire life. But personally, I think I'm just getting started.

well-foundedness, part 1 of 2: who cares?

If you're like me, you've often heard of well-foundedness, but never really bothered to learn what it really meant nor why it's important. If you're like most people, on the other hand, then you've never even heard the word and you couldn't care less. Well, go ahead, skip this post. You'll be back and eager to learn once you try and fail to read part 2.





...see? I told you you'd be back. Since you're presumably only interested in understanding enough about well-foundedness in order to follow part 2, I might as well use the same example: a little expression language for evaluating numerical expressions such as 2 + 3 * 4.

    data Exp = Lit Int | Add Exp Exp | Mul Exp Exp

    eval :: Exp -> Int
    eval (Lit n)     = n
    eval (Add e1 e2) = eval e1 + eval e2
    eval (Mul e1 e2) = eval e1 * eval e2

    -- the running example, which evaluates to 14:
    example = eval (Add (Lit 2) (Mul (Lit 3) (Lit 4)))

Since there are no looping constructs in this simple language, it's reasonable to expect evaluation to always terminate. It's possible to prove so using traditional induction, say, on the height of the expression tree. Induction arguments are routine in mathematics, and thus the previous statement is usually all that is needed in order to convince another mathematician that the eval function always terminates; but since today's post is about different kinds of induction, I'll flesh out the proof so you can see the difference.
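
For reference, here is the height measure the proof relies on, spelled out as a Haskell function of my own (matching the convention below that a lone leaf has height one):

    -- height of an expression tree; a leaf counts as height one
    height :: Exp -> Int
    height (Lit _)     = 1
    height (Add e1 e2) = 1 + max (height e1) (height e2)
    height (Mul e1 e2) = 1 + max (height e1) (height e2)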

proof: by (traditional) induction on the height of the expression tree. Base case: if the tree has height one, i.e. if it is a leaf, then the expression is a literal number of the form Lit n and the evaluation terminates immediately by returning n. For the induction step, suppose eval e terminates for all expressions e of height strictly less than n. We wish to show that eval also terminates on expressions of height exactly n. There are two cases: expressions of the form Add e1 e2 and expressions of the form Mul e1 e2. In either case, e1 and e2 are subtrees of the entire expression, and thus have a strictly smaller height. By the induction hypothesis, eval e1 and eval e2 both terminate, and thus the expressions eval e1 + eval e2 and eval e1 * eval e2 also terminate (assuming Haskell's numeric primitives do). ☐

There are other versions of the above argument which are equally valid. For example, instead of the expression height, we could use the expression size, meaning the number of constructors (Lit, Add and Mul) used to construct the expression.

    size :: Exp -> Int
    size (Lit n)     = 1
    size (Add e1 e2) = 1 + size e1 + size e2
    size (Mul e1 e2) = 1 + size e1 + size e2

This metric is also smaller for e1 and e2 than for Add e1 e2 and Mul e1 e2, so the induction step will succeed.


The two variants I have discussed so far use numbers (height, size) to argue termination. The idea is that the base case proves the statement to be true when the number is one, and then the induction step can use this fact to prove that the statement is also true when the number is two. This new fact can in turn be used to prove the statement when the number is three, after which we can handle case four, then case five, and so on. In the end, we'll have a proof for all numbers, meaning that expressions of all sizes and heights have a terminating evaluation.

An alternative view of induction is to begin with the statement for a particular number, say five, which is the one we are actually interested in. The induction step gives us a conditional proof of the statement, one which depends on the statement being true for the number four. But the statement is true for the number four, because it's true for three, because it's true for two, because it's true for one. And the statement is true for one because the base case proves so.

This second view is even a bit more "efficient" than the first, because we don't actually need to decrement so slowly. Add (Lit 2) (Mul (Lit 3) (Lit 4)), of size 5, has a terminating evaluation because Lit 2 and Mul (Lit 3) (Lit 4), of sizes 1 and 3, also terminate; the first by the base case, the second because Lit 3 and Lit 4 terminate. The last two expressions have size 1 and are also proved to terminate by the base case. Notice that we managed not to discuss expressions of size 2 and 4 at all.

Using the more efficient version, we see that the proof for n doesn't need to recur on n-1, but can actually use any m as long as m < n. We can generalize further. Even though induction traditionally uses the less-than relation of natural numbers, it is totally possible to find other suitable relations. Structural induction, for instance, says that the expression e1 is less than expression Add e1 e2 because the latter contains e1 as a sub-expression.
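
As a sketch of what this structural less-than amounts to in code, here is a small predicate of my own (it assumes Exp is given a deriving Eq clause, which the definition above didn't bother with):

    -- "e is a strict sub-expression of e'": the structural less-than
    -- (assumes Exp derives Eq)
    isSubExp :: Exp -> Exp -> Bool
    isSubExp _ (Lit _)     = False
    isSubExp e (Add e1 e2) = e == e1 || e == e2 || isSubExp e e1 || isSubExp e e2
    isSubExp e (Mul e1 e2) = e == e1 || e == e2 || isSubExp e e1 || isSubExp e e2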

This structural less-than is very different from the numerical less-than. For one thing, the numerical less-than is total, meaning that given two numbers, one of the two is going to be smaller than (or equal to) the other. With structures such as Add (Lit 1) (Lit 2) and Mul (Lit 3) (Lit 4), neither is contained within the other, so the structural less-than isn't total. But while our intuitive, everyday notion of ordering seems to require totality, the notion isn't required by the induction principle.

One notion which is required, however, is (you guessed it) well-foundedness. The property basically states that when recurring on smaller and smaller values, we will hit the base case after finitely many steps. This is required: otherwise, the proof could go on and on and on without ever convincing the sceptics.

Speaking of sceptics: are you convinced that well-foundedness is a useful concept now?