Upon discovering the incredible coolness of monads, I decided that the closely related concept of comonads had to be at least as interesting. Unfortunately, far fewer people seem to be interested in this underestimated little cousin, at least not enough for anyone to post a spacesuit metaphor for them. But as always, when proper documentation is lacking, it suffices to ask the source.
Okay, so this time the source didn't help much, since there are just types and no code. That's because monads and comonads are abstract interfaces (Haskell calls them type classes), whose implementation details are left to the concrete instances.
Yet types alone can say a lot. If I gave you a mysterious function whose type was (a -> a) and which worked for any possible type a, it would be pretty clear that I gave you the identity function. If I gave you a mysterious function whose type was (a -> (a -> b) -> b) and which worked for any possible types a and b, it would be pretty clear that I gave you a function which applies its second argument to the first. And those are exactly the kinds of behaviour which the mysterious return, extract, (>>=), and (=>>) must exhibit, except they must deal with the extra m and w stuff as well.
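Those types can be written down as Haskell classes. Here is a minimal sketch, using primed names (my own, to avoid clashing with the Prelude; the real Comonad class lives in the "comonad" package, not in base), together with the trivial Identity instance, which shows that both binding arrows reduce to plain function application when the extra m and w stuff is empty:

```haskell
-- A sketch of the two interfaces side by side.
class Monad' m where
  return' :: a -> m a
  bind    :: m a -> (a -> m b) -> m b    -- plays the role of (>>=)

class Comonad' w where
  extract' :: w a -> a
  extend'  :: w a -> (w a -> b) -> w b   -- plays the role of (=>>)

-- The simplest possible instances: an Identity wrapper, for which
-- the extra m and w stuff is nothing at all.
newtype Identity a = Identity a deriving (Eq, Show)

instance Monad' Identity where
  return' = Identity
  bind (Identity x) f = f x

instance Comonad' Identity where
  extract' (Identity x) = x
  extend' w f = Identity (f w)
```

With Identity on both sides, bind really is just "apply the second argument to the first", which is the behaviour the types were hinting at.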
Let's pause to ponder the apparent futility of a function which applies its second argument to the first. It is exactly like function application, but with its arguments in the opposite order. Why would anyone want to work backwards like this? Many people do, in fact: it is precisely how object-oriented method chaining reads, where object.method() feeds the value on the left of the dot to the operation on the right.
So "binding a value to a function" is just functional-programming jargon for the standard object-oriented way of chaining computations together. What is remarkable about monads and comonads is that each different m and w provides a slightly different behaviour for this dot operator. Who knew we could need more than one?
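To make "more than one dot operator" concrete, here is a small sketch (the names |>, safeHead, and safeDiv are my own) contrasting plain reverse application with Maybe's (>>=), whose version of the dot operator silently short-circuits the whole chain as soon as one step fails:

```haskell
-- Plain reverse application: the "ordinary dot operator".
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- Two steps that can fail, written in the (a -> m b) shape
-- that (>>=) expects.
safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead []    = Nothing

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)
```

Chaining safeHead [10] >>= safeDiv 100 gives Just 10, while safeHead [] >>= safeDiv 100 gives Nothing, with no explicit null checks anywhere in between.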
A concrete monad or comonad implementation of a binding arrow thus simply applies its second argument to the first, dealing with the extra m or w stuff along the way. Monad and comonad users must also deal with the extra m and w stuff, because they are the ones writing the (a -> m b) and (w a -> b) functions which (>>=) and (=>>) expect. Fortunately for them, return and extract can be used to lift their ordinary (a -> b) functions into a suitable form.
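Here is what that lifting looks like in practice (function names are mine), sketched with Maybe as the monad and a pair — an annotated value, whose extract is snd — as the comonad:

```haskell
-- return lifts an ordinary function into the (a -> m b) shape
-- that (>>=) expects...
liftIntoMonad :: (a -> b) -> (a -> Maybe b)
liftIntoMonad f = return . f

-- ...while extract (snd, for the pair comonad) lifts it into
-- the (w a -> b) shape that (=>>) expects.
liftIntoComonad :: (a -> b) -> ((e, a) -> b)
liftIntoComonad f = f . snd
```

Either way the lifted function just ignores the extra stuff: it wraps a plain result on the way out, or throws away the annotation on the way in.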
The big difference between using monads, comonads, or the ordinary dot operator is the kind of non-lifted functions which can be used. For the ordinary dot operator there are none; for monads, there are functions which can stuff something extra in alongside the return value; and for comonads, there are functions which can extract something extra out of the input value.
A spacesuit example should follow soon.
9 comments:
Hmm, surely there has to be more to say than this. What is an example of something one would use a Comonad for?
Comonads are good conversation openers, and with enough practice they can be a great party trick.
Seriously, I haven't had the occasion to put my new understanding to use yet, and I don't think it will happen soon.
I do have a contrived example, though. With an Array comonad, you could run a computation over several values in parallel. At a few key points in the computation, one of those "special functions extracting something extra out of the input value" could be used to get a glimpse of what the other threads have computed so far.
You should look into zippers for interesting applications of comonads.
OMG I was linked by a long-time blog hero of mine, sigfpe! I'm glad somebody decided to clarify my example, I had no clue I was being cryptic.
Hello, fellow sigfpe fans!
The literate haskell on this page renders incorrectly in Opera 9.27.
I'm not sure I understand Comonads... Am I to consider Monads for SISD computations while Comonads are for SIMD? Monad transformers would then be for MISD computations while MIMD is handled by Comonad transformers?
My brow is very furrowed.
Anonymous brow,
monads and comonads are very generic tools, my Array example and sigfpe's similar Pointer example are only one way to use them.
In the world of Array-like type constructors, special monadic operations can return an array of values, whereas special comonadic operations can peek at an array of values. It just happens that for Arrays, a natural implementation for the monadic and comonadic binding arrows is to run the computation once for each value. This doesn't necessarily mean they will run in parallel; that part is up to your compiler.
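Here is a runnable sketch of that index-based idea (the type and function names are mine, loosely following sigfpe's Pointer example): extend runs the computation once with the focus at every index, and a special peeking operation reads a neighbour's value.

```haskell
-- An array of values plus the index of the current cell.
data Pointed a = Pointed Int [a] deriving (Eq, Show)

extractP :: Pointed a -> a
extractP (Pointed i xs) = xs !! i

-- The comonadic binding arrow for Pointed: run the given
-- computation once at every index.
extendP :: (Pointed a -> b) -> Pointed a -> Pointed b
extendP f (Pointed i xs) =
  Pointed i [ f (Pointed j xs) | j <- [0 .. length xs - 1] ]

-- A "special comonadic operation": peek at the value to the
-- left of the current cell (0 past the edge).
peekLeft :: Pointed Int -> Int
peekLeft (Pointed i xs) = if i > 0 then xs !! (i - 1) else 0
```

Running extendP peekLeft on Pointed 1 [10, 20, 30] shifts every value one cell to the right, because each cell's computation peeked at its left neighbour.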
I'm afraid I don't know about monad transformers yet, so I can't comment on the last half of your comment.
By the way, the CSS should be fixed in Opera now.
Many years later, I finally understand Monad transformers and Comonad transformers well enough to give a proper answer to all of the above.
First of all, T.Q.: Comonads are good for running cellular automata, because they execute a computation uniformly at every cell, with each operation expressed relative to the current cell.
Second, anonymous "zipper" guy: yes, zippers are a convenient way of representing which cell is the current cell, but you can also use an index. This is what I did here, in my comonadic implementation of Conway's game of life. It even demonstrates comonad transformers!
Third, anonymous "furrowed brow" guy: no, monad transformers are not for MISD. A monad transformer simply adds more special monadic operations (think get, put, print, or fail) to the current list of available operations. You still only execute one instruction at a time. You might be right about comonads being SIMD, though, since each operation is always applied to all the cells.
In short, monads offer special operations, and monad transformers combine monads so that you can use more kinds of special operations in the same computation.
Comonads only offer special operations insofar as they define the environment in which a cellular automaton can run, so they offer operations to query the local topology and the values of the local neighbors.
Finally, comonad transformers combine comonads, so that you have one local topology along one orthogonal axis and another local topology along another. This again gives you more operations, because you have more local neighbors to query.
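The claim that a monad transformer simply adds more special operations to the current list can be made concrete with a hand-rolled sketch (all names are mine; the real thing lives in the transformers and mtl packages). Stacking a state layer on top of Maybe gives get and put from the state layer and aborting from the Maybe layer underneath, all usable in the same computation:

```haskell
-- A minimal state transformer: thread a state s through an
-- underlying monad m.
newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }

instance Monad m => Functor (StateT s m) where
  fmap f (StateT g) = StateT $ \s -> do (a, s') <- g s
                                        return (f a, s')

instance Monad m => Applicative (StateT s m) where
  pure a = StateT $ \s -> return (a, s)
  StateT mf <*> StateT ma = StateT $ \s -> do (f, s')  <- mf s
                                              (a, s'') <- ma s'
                                              return (f a, s'')

instance Monad m => Monad (StateT s m) where
  StateT ma >>= k = StateT $ \s -> do (a, s') <- ma s
                                      runStateT (k a) s'

-- Special operations from the state layer...
get :: Monad m => StateT s m s
get = StateT $ \s -> return (s, s)

put :: Monad m => s -> StateT s m ()
put s = StateT $ \_ -> return ((), s)

-- ...and from the Maybe layer underneath.
abort :: StateT s Maybe a
abort = StateT $ \_ -> Nothing

-- Decrement a counter, aborting when it would go negative.
decrement :: StateT Int Maybe ()
decrement = do n <- get
               if n > 0 then put (n - 1) else abort
```

Running decrement >> decrement from 2 succeeds with Just ((), 0); running it from 1 hits abort on the second step and the whole computation collapses to Nothing. One instruction at a time, as promised, just with a richer vocabulary of instructions.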
I found the zipper-based cellular automaton example to be a nice introduction to comonads too, but it's important to keep the examples separate from the abstract idea. If we don't, we risk missing some aspects and assuming others; for example, I've seen "monad tutorials" which just describe Maybe, without mentioning monads (or functors, map, join, return, bind, etc.) at all!
I like Conal Elliott's description at http://conal.net/blog/posts/sequences-streams-and-segments
Here are some choice quotes:
"The most helpful intuitive description I’ve found is that comonads describe values in context."
"Monadic values are typically produced in effectful computations: a -> m b
Comonadic values are typically consumed in context-sensitive computations: w a -> b"
The update rule of a cellular automaton is an excellent example of a 'context-sensitive computation'. We *could* implement our update rule as "Neighbourhood -> Cell", but it's not at all clear how such a rule would be iterated, which is the point of a CA. If we used "InNeighbourhood Cell -> Cell" then it's clearer: we just need to wrap our result in a neighbourhood.
To use more concrete types, we can go from the less-descriptive "[Cell] -> Cell" (how do we know which Cell this acts on? Does it handle the [] case?) to the more descriptive "([Cell], Cell, [Cell]) -> Cell" (oh, we operate on the middle one!).
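That "([Cell], Cell, [Cell]) -> Cell" shape is exactly a list zipper, and extend is the missing iteration step. Here is a finite-zipper sketch (the names and the rule-90-style update rule are my own; real treatments usually use infinite streams so there are no edges to worry about):

```haskell
type Cell = Bool

-- Left neighbours are stored nearest-first.
data Zipper a = Zipper [a] a [a] deriving (Eq, Show)

extractZ :: Zipper a -> a
extractZ (Zipper _ x _) = x

moveLeft, moveRight :: Zipper a -> Zipper a
moveLeft  (Zipper (l:ls) x rs) = Zipper ls l (x:rs)
moveLeft  z                    = z            -- at the edge, stay put
moveRight (Zipper ls x (r:rs)) = Zipper (x:ls) r rs
moveRight z                    = z

allLefts, allRights :: Zipper a -> [Zipper a]
allLefts  z@(Zipper (_:_) _ _) = let z' = moveLeft z  in z' : allLefts z'
allLefts  _                    = []
allRights z@(Zipper _ _ (_:_)) = let z' = moveRight z in z' : allRights z'
allRights _                    = []

-- One automaton step: apply the rule with the focus at every cell.
extendZ :: (Zipper a -> b) -> Zipper a -> Zipper b
extendZ f z = Zipper (map f (allLefts z)) (f z) (map f (allRights z))

-- A rule-90-style update: a cell becomes the XOR of its two
-- neighbours (cells past the edge count as False).
rule90 :: Zipper Cell -> Cell
rule90 (Zipper ls _ rs) = headOr ls /= headOr rs
  where headOr (y:_) = y
        headOr []    = False
```

Iterating the automaton is then just repeated application of extendZ rule90, with no bookkeeping about which cell is which.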