Gabriel Gonzalez, the author of the well-known pipes package, has written a blog post in which he claims that comonads are objects. I think he is mistaken, but I can't blame him; I used to be quite confused about comonads myself.
Comonads are not objects
To motivate the purported equivalence between objects and comonads, Gabriel gives detailed implementations of three classic object-oriented patterns (Builder, Iterator, and Command). Then, he reveals that all three implementations are comonadic. His examples are contrived, but more importantly, they are also misleading! Although the following is not a direct quote from his post, he clearly intends the Haskell session
>>> t1 = thermostat 3
>>> t2 = t1 # up'
>>> t3 = t2 # up'
>>> toString t3
"5 Kelvin"
to be equivalent to the following Java session.
>>> t = new Thermostat(3);
>>> t.up();
>>> t.up();
>>> t.toString();
"5 Kelvin"
With this particular sequence of methods, the two sessions indeed compute the same result, but only because the up and down methods commute with each other. Once we add the method square to the mix, the behaviours of the two sessions quickly diverge: strangely enough, in the Haskell version, each new method call gets applied before all of the previous calls!
Haskell version (inverted method call order)
>>> t1 = thermostat 3
>>> t2 = t1 # up' -- 3+1
>>> t3 = t2 # up' -- 3+1+1
>>> toString t3
"5 Kelvin"
>>> t4 = t3 # square' -- 3*3+1+1
>>> toString t4
"11 Kelvin"
>>> t5 = t4 # up' -- (3+1)*(3+1)+1+1
>>> toString t5
"18 Kelvin"
Java version (normal method call order)
>>> t = new Thermostat(3);
>>> t.up(); // 3+1
>>> t.up(); // 3+1+1
>>> t.toString();
"5 Kelvin"
>>> t.square(); // (3+1+1)*(3+1+1)
>>> t.toString();
"25 Kelvin"
>>> t.up(); // (3+1+1)*(3+1+1)+1
>>> t.toString();
"26 Kelvin"
I hope this counter-example convinces you that comonads are not objects. But if that is true, what are comonads, then? What use do we have for methods which get applied in reverse chronological order?
Comonads are neighbourhoods
The answer, I believe, is that a comonadic computation is similar to a cellular automaton. At each step, the computation for a cell computes its next value based on the value of its neighbours at the previous step. My interpretation of
extract :: w a → a
is that w a is a neighbourhood containing a number of values of type a, at various locations around the current cell. A given comonad may provide functions to inspect, say, the neighbour immediately to the left of the current cell, or the neighbour from one timestep ago, but the bare minimum is that it must be possible to extract the value of the current cell itself.
Similarly, my interpretation of
extend :: (w a → b) → w a → w b
is that w a → b computes one step of the cellular automaton by examining a local neighbourhood and producing the next value for the current cell. This single-cell computation is then extended to the entire neighbourhood, updating each cell as if it was the current one.
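As a toy illustration of this reading (my own, not something from Gabriel's post), take a cell together with everything to its right as the neighbourhood. A possibly-empty list is not quite a lawful comonad, but assuming non-empty inputs it conveys the idea: extract looks at the current cell, and extend re-runs a local rule at every cell as if it were the current one.
>>> import Data.List (tails)
>>>
>>> -- a throwaway neighbourhood: the head is the current cell and the tail
>>> -- holds the neighbours to its right
>>> type Neighbourhood a = [a]
>>>
>>> extract :: Neighbourhood a -> a
>>> extract = head
>>>
>>> -- re-run the local rule at every cell, as if each one were the current cell
>>> extend :: (Neighbourhood a -> b) -> Neighbourhood a -> Neighbourhood b
>>> extend rule = map rule . init . tails   -- init drops the trailing empty suffix
>>>
>>> -- local rule: my next value is the sum of me and my right neighbour
>>> step :: Neighbourhood Int -> Int
>>> step (x:y:_) = x + y
>>> step xs      = sum xs
>>>
>>> -- extend step [1,2,3]  ==  [3,5,3]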
Comonadic thermostat
In our thermostat example, there is one cell for each possible temperature, and the value of each cell is also a temperature.
So far, I have stuck to Kelvin degrees in order to minimize the number of moving parts, but now that we have two completely different uses for temperatures, let's follow Gabriel by using Celsius for the cell contents, and Kelvin for the cell labels.
(Aside: since temperatures are floating-point values, our thermostat operates on a neighbourhood which is continuous instead of discrete. Continuousness is also the premise of the Game of Life variant shown in the video above, which, unlike comonadic computations, applies its operations over a continuous time domain rather than one step at a time. I encourage you to watch the video, it looks awesome.)
If our thermostat had an object-oriented implementation, each operation would simply modify the current cell by applying the operation to the Celsius temperature it contains. In fact, an object-oriented implementation would only need one cell! Since ours is a comonadic implementation, however, the operation is instead applied to the Kelvin temperature which labels the current cell. The resulting Kelvin temperature is interpreted as a pointer to another cell, and the final result is the Celsius contents of that cell.
When the available operations are restricted to just up and down, this scheme works just fine. Originally, each cell contains the Celsius equivalent of its Kelvin label. Then, each time the up method is applied, each cell takes on the Celsius value of the cell one unit above it, which effectively increases the temperature by one everywhere. But this only works because up and down preserve the invariant that cells which are one Kelvin unit apart contain values which are one Celsius unit apart!
One way to visualize this is to imagine a sheet of graph paper annotated with Celsius temperatures, over which we overlay a transparent sheet of graph paper annotated with Kelvin temperatures. One of the transparent cells is our current cell, and we can read its Celsius contents through the transparent paper. When we call up and down, we offset the two sheets of paper from each other, causing the apparent contents of all the cells to increase or decrease simultaneously.
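To make this concrete, here is a minimal sketch of the scheme as a pair comonad. For simplicity I stick to Kelvin for both the labels and the contents, as in the sessions at the top of the post; thermostat, square' and toString are my guesses at definitions which reproduce those sessions, while up' is the comonadic definition Gabriel arrives at, which the next section discusses.
>>> type Kelvin = Double
>>>
>>> -- the neighbourhood: the Kelvin label of the current cell, plus the
>>> -- contents of every cell, indexed by its Kelvin label
>>> type Thermostat a = (Kelvin, Kelvin -> a)
>>>
>>> extract :: Thermostat a -> a
>>> extract (t, f) = f t
>>>
>>> extend :: (Thermostat a -> b) -> Thermostat a -> Thermostat b
>>> extend g (t, f) = (t, \t' -> g (t', f))
>>>
>>> x # g = g x   -- reverse application, for the method-call syntax
>>>
>>> -- initially, each cell contains its own Kelvin label
>>> thermostat :: Double -> Thermostat Double
>>> thermostat t = (t, id)
>>>
>>> -- "look at the cell one Kelvin unit above", extended to every cell at once;
>>> -- unfolding extend gives exactly up' (t, f) = (t, \t' -> f (t' + 1))
>>> up' :: Thermostat a -> Thermostat a
>>> up' = extend (\(t, f) -> f (t + 1))
>>>
>>> -- the same trick for squaring
>>> square' :: Thermostat a -> Thermostat a
>>> square' = extend (\(t, f) -> f (t * t))
>>>
>>> toString :: Thermostat Double -> String
>>> toString w = show (extract w) ++ " Kelvin"
With these definitions, toString (thermostat 3 # up' # up') extracts 3+1+1 = 5, a further square' gives 3*3+1+1 = 11, and one more up' gives (3+1)*(3+1)+1+1 = 18, matching the sessions above (except that show renders the Doubles as 5.0, 11.0 and 18.0).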
Reverse chronological order
Similarly, if each cell contains its own temperature and we apply the square method, each cell squares its Kelvin temperature, looks up the corresponding cell, and ends up with the (Celsius representation of the) square of its original Kelvin value.
But if we apply up after the temperatures have already been squared, the values will change as if the values had been incremented before they were squared! How is that even possible? What is the secret of this reverse-chronological method application?
If there were only one cell, anti-chronological application would not be possible because the original, unsquared value x would have been lost by the time the up operation was applied. Our comonadic implementation, however, has a lot of cells: in particular, one unit above the current cell, there is a cell whose unsquared value was x+1. Since the square operation affects all cells uniformly, that cell now contains the squared value (x+1)². Therefore, it is no surprise that when the up operation offsets the two sheets of paper by one unit, the old x² value scrolls out of view and gets replaced by (x+1)² instead.
Ironically, Gabriel's original implementation of up' was applying its operation in the correct chronological order.
>>> up' (t, f) = (t + 1, f)
But later on, while trying to shoehorn his object into the comonadic mold, he proved that the correct definition should have been as follows.
>>> up' (t, f) = (t, \t' → f (t' + 1))
While that is indeed a correct comonadic implementation, it is not a correct object-oriented one, because it applies its operation in reverse chronological order. In particular, notice that it modifies the input of f; since f applies all the other operations which have been accumulated so far, applying the new operation to its input has the effect of inserting that operation before all the others.
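To watch the insertion happen, here is a short equational trace, using the square' from the sketch above (my analogous definition); like up', it modifies the input of f.
>>> --   (t, f) # up'      ==  (t, \t' → f (t' + 1))
>>> --          # square'  ==  (t, \t' → f ((t' * t') + 1))
>>> --
>>> -- the new squaring is applied to t' before the older (+ 1), i.e. it is
>>> -- inserted before every operation already recorded inside f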
Visiting the neighbours
Why do comonads apply their operations in reverse chronological order? Is that ever useful?
In fact, this reversed-order business is not the full story. In a typical comonad, the cell labels and their contents would not both be temperatures. Instead, if we were trying to implement a one-dimensional cellular automaton such as rule 30, we would label the cells with integers and their contents with colors.
Now that the input and output types are distinct, we can see that \t' → f (t' + 1) is not even attempting to modify the contents of the current cell, neither chrono- nor anti-chronologically. Instead, it is modifying the index of the cell on which f applies its computations, thereby allowing us to observe which color has been computed for a neighbouring cell. This is, of course, the first step to compute the color which the current cell will take on the next step.
Once we have wrapped the steps necessary to compute our next color into a function of type w Color → Color, we can extend this function to all the cells in our simulation, modifying the entire row at once by modifying f.
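Here is a sketch of that one-dimensional setup, assuming an infinite list zipper as the neighbourhood; the names Z, Color, rule30, nextRow and row0 are mine, not from Gabriel's post or from any comonad library.
>>> -- cells to the left (nearest first), the current cell, cells to the right
>>> data Z a = Z [a] a [a]
>>>
>>> left, right :: Z a -> Z a
>>> left  (Z (l:ls) c rs) = Z ls l (c:rs)   -- assumes infinite streams of cells
>>> right (Z ls c (r:rs)) = Z (c:ls) r rs
>>>
>>> extract :: Z a -> a
>>> extract (Z _ c _) = c
>>>
>>> -- refocus the zipper on every cell, then apply the rule at each focus
>>> extend :: (Z a -> b) -> Z a -> Z b
>>> extend f z = Z (map f (tail (iterate left z))) (f z) (map f (tail (iterate right z)))
>>>
>>> type Color = Bool   -- True for a black cell
>>>
>>> -- one step of rule 30 for the current cell, read off its neighbourhood
>>> rule30 :: Z Color -> Color
>>> rule30 z = extract (left z) /= (extract z || extract (right z))
>>>
>>> -- updating the entire row at once is just extending the single-cell rule
>>> nextRow :: Z Color -> Z Color
>>> nextRow = extend rule30
>>>
>>> -- a starting row: white everywhere except for one black cell
>>> row0 :: Z Color
>>> row0 = Z (repeat False) True (repeat False)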
Objects are monads
We have seen that comonads are not objects, but are instead neighbourhoods in which cellular automata can evolve. But that is only half of the question. If comonads are not objects, then what are objects? Is there another way to represent objects in Haskell?
There are many facets to modern object-oriented programming, but the aspect which I think Gabriel was aiming for in his post is objects as encapsulated data plus a bunch of methods to modify this data. To me, those features don't sound comonadic at all; rather, I would implement them using a State monad!
>>> import Control.Monad.State
>>>
>>> type Thermostat = State Double
>>>
>>> getKelvin = get
>>> getCelsius = fmap (subtract 273.15) getKelvin
>>>
>>> toString = do c ← getCelsius
>>>               return (show c ++ " Celsius")
>>>
>>> up   = modify (+ 1)
>>> down = modify (subtract 1)
Since we're using monads, the syntax for calling a few of those methods one after the other is just do-notation, which is even shorter than the # notation advocated by Gabriel.
>>> up3 :: Thermostat String
>>> up3 = do up
>>>          up
>>>          up
>>>          toString
Notice the type of the above code: Thermostat String doesn't mean that there are many different kinds of thermostats, and that this particular code is using or producing a "string thermostat". Rather, the above code is an object-oriented computation having access to a single Thermostat object and producing a single String value.
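For the record, here is one way to actually run such a computation in GHCi; the starting temperature is an arbitrary choice of mine.
>>> evalState up3 273.15   -- start at 273.15 K; three ups later, toString reports 276.15 K in Celsius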
Okay, so using one monad, we managed to represent a computation manipulating one object. How would we manipulate two thermostats? Using two monads? Yes! Using more than one monad at once is precisely what monad transformers are about.
>>> type ThermostatT m = StateT Double m
>>> obj1 = id
>>> obj2 = lift
>>>
>>> up_both :: ThermostatT Thermostat String
>>> up_both = do obj1 up
>>>              obj2 up
>>>              s1 ← obj1 toString
>>>              s2 ← obj2 toString
>>>              return (s1 ++ " and " ++ s2)
This time we have an object-oriented computation having access to two Thermostat objects, again producing a String value.
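For completeness, here is one way to run it (again, the starting temperatures are arbitrary choices of mine): the outer evalStateT runs obj1's thermostat, and the inner evalState runs obj2's.
>>> evalState (evalStateT up_both 273.15) 300   -- obj1 starts at 273.15 K, obj2 at 300 K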
Duality
Since monads and comonads are duals, we would expect comonads to also have transformers. To figure out what comonad transformers are, let's spell out the similarities and differences of this duality.
First, monads are computations. Comonads are also computations, but of a different kind. The goal of both kinds of computations is to produce a value, but their means to obtain it are different. The main difference is that monads can cause side-effects, which are visible at the end of the computation, while comonads can observe neighbours, which need to be given before the computation can begin.
One of the characteristics of monadic-style computation is that there is a distinguished statement, return, which produces a given value without causing any side-effects. The equivalent characteristic for comonadic-style computation is that there is a distinguished observer, extract, which returns the current value without observing any neighbour.
In addition to this distinguished statement, which all monads share, each monad instance typically offers a number of monad-specific statements, each producing their own special side-effects. Correspondingly, each comonad instance typically provides a number of observers, each observing their own special kinds of neighbours. You can also observe the shape of the neighbourhood; for example, if your automaton runs on a finite grid, you can check whether the current cell lies on the boundary of the grid.
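To make the parallel concrete, here is a monad-specific statement next to a comonad-specific observer. The statement is the up from the State-based thermostat above, repeated with its type; the observer (a name I made up) is written against the pair-comonad thermostat sketched earlier.
>>> import Control.Monad.State (State, modify)
>>>
>>> -- a monad-specific statement: it causes a side-effect (bumping the state),
>>> -- which only becomes visible once the computation is run
>>> up :: State Double ()
>>> up = modify (+ 1)
>>>
>>> -- a comonad-specific observer: it observes the neighbour one Kelvin unit
>>> -- above the current cell, without causing any effect
>>> oneUnitAbove :: (Double, Double -> a) -> a
>>> oneUnitAbove (t, f) = f (t + 1)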
With monad transformers, it is possible to construct a combined monad in which the special statements of both component monads may be used. And now we have it: with comonad transformers, it must be possible to construct a combined comonad in which the special observers of both component comonads may be used.
For example, if we have a 1D row of cells, that is, a comonad in which left and right neighbours are available, and another 1D comonad in which up and down neighbours are available, then the combined comonad is the cartesian product of its components: a 2D grid in which all four direct neighbours are available.
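As a sketch of that last claim, here is a hand-rolled Store comonad together with its transformer; combining them yields exactly such a 2D grid. The names mimic the StoreT of the comonad package, but this is only a stripped-down illustration of the idea, not that library.
>>> class Comonad w where
>>>   extract :: w a -> a
>>>   extend  :: (w a -> b) -> w a -> w b
>>>
>>> -- a 1D neighbourhood: every cell indexed by s, plus the current position
>>> data Store s a = Store (s -> a) s
>>>
>>> instance Comonad (Store s) where
>>>   extract (Store f s) = f s
>>>   extend g (Store f s) = Store (g . Store f) s
>>>
>>> -- its comonad transformer: the inner comonad w now carries the table of cells
>>> data StoreT s w a = StoreT (w (s -> a)) s
>>>
>>> instance Comonad w => Comonad (StoreT s w) where
>>>   extract (StoreT wf s) = extract wf s
>>>   extend g (StoreT wf s) =
>>>     StoreT (extend (\wf' s' -> g (StoreT wf' s')) wf) s
>>>
>>> -- the combined comonad: a 2D grid indexed by an (outer, inner) pair of Ints
>>> type Grid a = StoreT Int (Store Int) a
>>>
>>> -- observers contributed by the outer component: the left/right neighbours
>>> leftN, rightN :: Grid a -> a
>>> leftN  (StoreT wf x) = extract wf (x - 1)
>>> rightN (StoreT wf x) = extract wf (x + 1)
>>>
>>> -- observers contributed by the inner component: the up/down neighbours
>>> upN, downN :: Grid a -> a
>>> upN   (StoreT (Store f y) x) = f (y - 1) x
>>> downN (StoreT (Store f y) x) = f (y + 1) x
The outer component contributes the horizontal observers and the inner one the vertical observers, just as each layer of a StateT stack contributes its own get and put.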
If you are still curious about comonads, comonad transformers, and the above 2D grid trick, more details are available in my Haskell implementation of Conway's Game of Life.