Sunday, December 21, 2014

The "99 Bottles of Beers" of Type Systems

"Hello World" is a good first example program because it is small, but also because it encourages the reader to get into the driver's seat and take control of the program. Copy-paste the "Hello World" listing from a website, and you're just blindly following instructions. Change it to print "Hello Mom", and you're boldly taking your first step towards the unknown, into a world where it is now you who is giving the instructions.

New programmers need to take that step, because programming anything non-trivial requires making your own decisions about how things are implemented. If your boss were making all the decisions for you, you wouldn't be a programmer, you'd be a typist.

The "Hello World" of Type Systems

Once you become an experienced programmer, "Hello World" examples are still useful as an introduction to new languages and new systems. Once you have a working base, you can experiment by making small changes and verifying whether they work as you expect, or if you need to read more tutorials.

For type systems, I guess a "Hello World" program would be a small datatype/structure/class containing a few simple fields. The standard here isn't as well-established as with "Hello World", but describing a person is quite common:

data Person = Person
  { name :: String
  , age :: Int
  }

I've used Haskell here, but whichever language I had written it in, you could still easily infer how to add a field for the person's height.
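
Here's what that change might look like; picking Double for the height is my own arbitrary choice, not part of any standard:

data Person = Person
  { name   :: String
  , age    :: Int
  , height :: Double
  }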

99 Bottles of Beer

A little down the road, another introductory program is "99 bottles of beer on a wall". This one teaches budding programmers another important lesson: it's possible to write a program which prints out more text than what you've written in its source code. More specifically, the program shows how to use a variable to abstract over the part of the text which varies from one iteration to the other, and how to use a loop to determine how many iterations to make and which value the variable should take in each one.
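
A minimal sketch of that structure in Haskell, abbreviating the song to one line per verse:

-- the loop supplies the varying count; the verse abstracts over it
main :: IO ()
main = mapM_ verse [99, 98 .. 1]
  where
    verse n = putStrLn (show n ++ " bottles of beer on the wall")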

For type systems, a "99 bottles of beer" program would teach the same lesson: it's possible to write a program which uses larger types than those you've written in the source code. This is rarely needed, but it's possible! Even in a large, complicated application, you might have a manager of pools of worker threads processing lists of person values, but Manager (Pool (WorkerThread (List Person))) is still a fixed type which you write down explicitly in your program. It's as if you had abstracted out the number of beers to print, but then wrote explicit calls with n = 99, n = 98 and so on, instead of using a loop to generate the calls at runtime. Our "99 bottles of beer" example should generate types at runtime.
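
To make the analogy concrete, here is a hand-unrolled variant of the previous sketch: the count is still abstracted into a variable, but every call is spelled out in the source.

-- no loop: the source spells out every call explicitly
verse :: Int -> String
verse n = show n ++ " bottles of beer on the wall"

main :: IO ()
main = do
  putStrLn (verse 99)
  putStrLn (verse 98)
  -- ...97 more explicit calls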

The "99 Bottles of Beer" of Type Systems

The simplest such example I could think of is as follows:

  1. Parse a non-negative integer n from standard input or from a command-line argument.
  2. If n is 0, print 42.
  3. Otherwise, print the pair (x,x), where x is the text which would have been printed if n was one unit smaller. For example, the output for n = 3 should be "(((42,42),(42,42)),((42,42),(42,42)))".

With the important restriction that the pair (x, x) must first be constructed before being printed, and its representation must not have the same type as x.

An incorrect solution

The reason the restriction is important is that otherwise, it would be possible to implement the program using a single type, that of integer trees:

-- *not* a valid solution
import Text.Printf (printf)

data Tree a = Leaf a | Branch (Tree a) (Tree a)

showTree :: Show a => Tree a -> String
showTree (Leaf x)       = show x
showTree (Branch t1 t2) = printf "(%s,%s)" (showTree t1)
                                           (showTree t2)

printTree :: Tree Int -> Int -> IO ()
printTree v 0 = putStrLn (showTree v)
printTree v n = printTree (Branch v v) (n-1)

main :: IO ()
main = readLn >>= printTree (Leaf 42)

That program does not demonstrate that it's possible to write a program which uses larger types than those you've written in the source code.

Haskell solution

Instead of using the same type Tree Int at every iteration, we want to construct a sequence of larger and larger types:

  1. Int
  2. (Int,Int)
  3. ((Int,Int),(Int,Int))
  4. (((Int,Int),(Int,Int)),((Int,Int),(Int,Int)))
  5. ...

In Haskell, this can be achieved via polymorphic recursion, meaning that we recur at a different type than the one which the current call is being instantiated at. For example, the call printTree 42 1 instantiates the type variable a = Int, while the recursive call printTree (42,42) 0 instantiates the type variable a = (Int,Int).

printTree :: Show a => a -> Int -> IO ()
printTree v 0 = print v
printTree v n = printTree (v,v) (n-1)

main :: IO ()
main = readLn >>= printTree 42

Polymorphic recursion is often used to recur on a smaller type, but since in this function it is the Int argument which is getting smaller, we can recur on a larger type without risking an infinite loop.
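
For comparison, here is the textbook example of polymorphic recursion on a nested datatype (not from the original post); there too, each recursive call is at a larger type, and termination comes from the shrinking value:

-- a nested datatype: each Cons goes one list-level deeper
data Nested a = Epsilon | Cons a (Nested [a])

-- the signature is mandatory: the recursive call is at type Nested [a]
nestedLength :: Nested a -> Int
nestedLength Epsilon     = 0
nestedLength (Cons _ xs) = 1 + nestedLength xs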

C++ solution

Speaking of infinite loops, C++ uses compile-time templates to handle polymorphic recursion, and this implementation strategy causes the compiler to instantiate more and more templates when we recur on a larger type. Eventually, gcc gives up with "template instantiation depth exceeds maximum of 900".

We can work around the problem by specializing the template at one of the types encountered before that limit, and printing an error message instead of recurring further.

#include <stdio.h>

template<typename A>
struct Pair {
  A fst;
  A snd;
  
  Pair(A fst, A snd)
  : fst(fst), snd(snd)
  {}
};

void print(int n) {
  printf("%d", n);
}

template<typename A>
void print(Pair<A> pair) {
  printf("(");
  print(pair.fst);
  printf(",");
  print(pair.snd);
  printf(")");
}

template<typename A>
void println(A value) {
  print(value);
  printf("\n");
}


template<typename A>
struct PrintTree {
  static void call(int depth, A value) {
    if (depth == 0) {
      println(value);
    } else {
      PrintTree<Pair<A> >::call(depth - 1, Pair<A>(value, value));
    }
  }
};

template<>
struct PrintTree<
  Pair<Pair<Pair<Pair<Pair<Pair<Pair<Pair<int>
> > > > > > > >
{
  static void call(int, Pair<
                          Pair<Pair<Pair<Pair<Pair<Pair<Pair<int>
                        > > > > > > >
  ) {
    fprintf(stderr, "maximum depth exceeded.\n");
  }
};

int main() {
  int depth;
  scanf("%d", &depth);
  
  PrintTree<int>::call(depth, 42);
  
  return 0;
}

Java solution

Other implementation strategies, such as Java's type erasure, need no such artificial bounds.

class Pair<A> {
  private A fst;
  private A snd;
  
  public Pair(A fst, A snd) {
    this.fst = fst;
    this.snd = snd;
  }
  
  public String toString() {
    return "(" + fst.toString() + "," + snd.toString() + ")";
  }
}

public class Main {
  public static <A> void printTree(int depth, A value) {
    if (depth == 0) {
      System.out.println(value.toString());
    } else {
      printTree(depth - 1, new Pair<A>(value, value));
    }
  }
  
  public static void main(String[] args) {
    Integer n = Integer.valueOf(args[0]);
    Integer m = 42;
    printTree(n, m);
  }
}

Conclusion

Many programming languages have the ability to work with larger types than those which are known at compile time, but for some reason, the feature is rarely used.

Perhaps one of the reasons is that the feature is rarely covered in tutorials. I have presented a small example demonstrating the feature, and I have demonstrated that the example isn't specific to one particular type system by implementing it in a few different languages. If you're writing a tutorial for a language and you have already covered "Hello World", "99 bottles of beer" and the "Hello World" of type systems, please consider also covering the "99 bottles of beer" of type systems.

Although, if I want this example to catch on, I should probably give it a better name. Maybe "Complete trees whose leaves are 42", or simply "Complete 42" for short?

Monday, December 08, 2014

How to package up binaries for distribution

This weekend, I wrote a game (in Haskell of course!) for Ludum Dare, an event in which you have 48h or 72h to create a game matching an imposed theme. It was really challenging!

Once the event was over, it was time to package my game in a form which others could play. Since the packaging procedure wasn't obvious, I'm documenting it here for future reference. The procedure isn't specific to Haskell, but I'll mention that linking Haskell programs statically, as advised around the web, didn't work for me on any platform.

Windows

While your program is running, use Process Explorer to list the .dll files your program is currently using (there is also Dependency Walker, but on my program it missed glut32.dll). Copy those DLLs to the same folder as your executable, zip the folder, and ship it.

OS X

Use otool -L to list the .dylib files on which your executable depends, and copy them to the same folder as your executable (or a libs subfolder). Use install_name_tool to change all the dylib paths embedded in your executable to @executable_path/foo.dylib (or @executable_path/libs/foo.dylib). Zip the folder, and ship it.

Linux

Use ldd to list the .so files on which your executable depends, and copy all of them except libc.so.X to the same folder as your executable (or a libs subfolder). Add ld-options: -Wl,-rpath -Wl,$ORIGIN (or ld-options: -Wl,-rpath -Wl,$ORIGIN/libs) to your cabal file, pass those flags directly to gcc, or use chrpath to change the existing RPATH if there is one. Zip the folder, and ship it.
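
For concreteness, here is roughly what that could look like in context; the executable name and libs layout are invented for the example:

executable mygame
  main-is:       Main.hs
  build-depends: base
  ld-options:    -Wl,-rpath -Wl,$ORIGIN/libs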

Tuesday, October 28, 2014

Understanding "Strongly-typed Bound", part 1

First, giving credit where credit is due. The Bound library is written by Edward Kmett, and so is the strongly-typed variant I want to explore in this series. I learned about the strongly-typed version via a comment by geezusfreeek, in response to a question by _skp.

I have a lot to say about this script, and since the first thing I want to say about it involves writing down some typing rules, I thought I'd write them on the whiteboard and publish a video! Please let me know what you think of this new format.


Saturday, September 06, 2014

Prisms lead to typeclasses for subtractive types

In my last post, I identified some issues with subtractive types, namely that math identities such as ∀ a. a + -a = 0, once they are translated into Haskell, would not be valid for all a. More precisely, the existence of a function of type

cancelSub :: forall a. (a :+: Negative a) -> Zero

would make it easy to implement a contradiction, regardless of the way in which we represent Negative a:

type a :+: b = Either a b
type Zero = Void

contradiction :: Void
contradiction = cancelSub (Left ())

I concluded by blaming the unconstrained forall. That is, I was hoping that the identity could be saved by finding some typeclass C such that C a => (a :+: Negative a) -> Void would be inhabited, or something along those lines. But what should C look like?

Prisms

Earlier today, I was re-listening to Edward Kmett on Lenses, in the first Haskell Cast episode. While explaining Prisms at 38:30, Kmett explained that a Lens' s a splits an s into a product consisting of an a and of something else, and that correspondingly, a Prism' s a splits an s into a sum consisting of an a and of something else. It occurred to me that the first "something else" should be s :/: a, while the second "something else" should be s :-: a.

Since Prism' s a is only inhabited for some combinations of s and a but not others, I thought a good choice for my C typeclass might be a proof that there exists a prism from s to a.

cancelSub :: HasPrism a (Negative a)
          => (a :+: Negative a) -> Void

That is, instead of restricting which types can be negated, let's restrict which types are allowed to appear together on the left- and right-hand sides of a subtractive type.

Four typeclasses

All right, so what should the HasPrism typeclass look like? In the podcast, Kmett explains that we can "construct the whole thing out of the target of a prism", and that we can pattern-match on the whole thing to see if it contains the target. In other words:

class HasPrism s a where
    construct :: a -> s
    match :: s -> Maybe a

This Maybe a discards the case I am interested in, the s :-: a. Let's ask the typeclass to provide a representation for this type, so we can give a more precise type to match.

class HasPrism s a where
    type s :-: a
    constructLeft :: a -> s
    constructRight :: (s :-: a) -> s
    match :: s -> Either a (s :-: a)

Our typeclass now has three methods, for converting back and forth between s and its two alternatives. We can combine those three methods into a single bijection, and with this final transformation, we obtain a form which is easily transferable to the other inverse types:

class Subtractable a b where
    type a :-: b
    asSub :: Iso a ((a :-: b) :+: b)

class Divisible a b where
    type a :/: b
    asDiv :: Iso a ((a :/: b) :*: b)

class Naperian b a where
    type Log b a
    asLog :: Iso a (Log b a -> b)

class Rootable n a where
    type Root n a
    asRoot :: Iso a (n -> Root n a)
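
A quick aside: I haven't pinned down Iso anywhere, so here is a minimal sketch that suffices for the code below: a pair of mutually inverse functions, with a Category instance providing (>>>), plus inverse and swap helpers. This is just one possible representation.

import Prelude hiding (id, (.))
import Control.Category

-- one possible representation: two mutually inverse functions
data Iso a b = Iso (a -> b) (b -> a)

instance Category Iso where
  id = Iso id id
  Iso f g . Iso f' g' = Iso (f . f') (g' . g)

-- reverse the direction of an isomorphism
inverse :: Iso a b -> Iso b a
inverse (Iso f g) = Iso g f

-- commutativity of (:+:), used by cancelSub below
swap :: Iso (a :+: b) (b :+: a)
swap = Iso (either Right Left) (either Right Left)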

Routing around the contradiction

The real test for these new definitions is whether they allow us to define constructive versions of the math identities for subtraction, division, logarithms and roots. Once annotated with the proper type class constraint, does cancelSub still lead to a contradiction? If not, can it be implemented?

It can!

type Negative a = Zero :-: a

cancelSub :: forall a. Subtractable Zero a
          => Iso (a :+: Negative a) Zero
         -- a :+: Negative a
cancelSub = swap
         -- Negative a :+: a
        >>> inverse iso
         -- Zero
  where
    iso :: Iso Zero (Negative a :+: a)
    iso = asSub

The math version of the constrained type is still ∀ a. a + -a = 0, but with a new proviso "whenever -a exists". It's still the same identity, it's just that with real numbers, -a always exists, so the proviso does not usually need to be spelled out.

In the world of types, Negative a does not always exist. In fact, there's only one possible instance of the form Subtractable Zero a:

instance Subtractable Zero Zero where
    type Zero :-: Zero = Zero
    
    asSub :: Iso Zero ((Zero :-: Zero) :+: Zero)
    asSub = Iso Right (either id id)

In other words, in the world of types, the proviso "whenever -a exists" simply means "when a = 0": for any inhabited a, the right-hand side of asSub's iso would be inhabited while Zero is not, so no instance can exist.

Other identities

I wish I could say that all the other identities become straightforward to implement once we add the appropriate typeclass constraints, but alas, this is not the case. I plan to discuss the remaining issues in a subsequent post.

For now, I am content to celebrate the fact that at least one contradiction has been slain :)

Friday, August 29, 2014

Edward Kmett likes my library :)

I have been re-listening to old episodes of the Haskell Cast, and it turns out I missed something really, shall we say, relevant to my interests.

In the very first episode, Edward Kmett talks about lens and a few of his other libraries. Then, near the end, he is asked about interesting Haskell stuff aside from his libraries. His answer, at 59:45:

"There was a really cool Commutativity monad [...] that really struck me as an interesting approach to things, I thought it was particularly neat toy."
— Edward Kmett


Yay, that's my library! And here are Wren's blog posts he mentions, about generalizing my approach.

Monday, August 18, 2014

Issues with subtractive types

I tried to expand on my earlier attempt at understanding root types, but I hit several obstacles. Here is a summary of those obstacles, in case future-me or someone else feels like trying again.

Goals
The conversation began with Naperian types, and the way in which they satisfy the same laws as mathematical logarithms, except at the type level. For example, the law log_b(x*y) = log_b(x) + log_b(y) would translate to a function of type

logProduct :: Log b (a1, a2) -> Either (Log b a1) (Log b a2)

where, as usual, pairs represent multiplication and Either represents a sum. We would also expect a function with the converse type, and we would expect the two functions to form an isomorphism.

My goal would be to find implementations for Log and the other inverse types such that the corresponding isomorphisms exist and are useful. As the rest of the post will demonstrate, "exists" is already quite a strong requirement.

I should mention before moving on that yes, I am familiar with "The Two Dualities of Computation: Negative and Fractional Types", and I am intentionally using a different approach. Their non-deterministic invertible language is quite interesting, but ultimately too weird for me.

I would prefer to find a way to implement negative and fractional stuff as Haskell datatypes, or failing that, to understand why it can't be done. Today's post is about the latter: if negative and fractional types can exist in Haskell, then identities such as 0 ↔ x + -x wouldn't be valid for all types x, like they are in that paper.

Naïve definitions
Since subtraction is the inverse of addition, I tried to define a subtractive type and the other inverse types in terms of the types we already have.

type a :+: b = Either a b
type a :*: b = (a, b)

data ab :-: b where
  MkSub :: a -> (a :+: b) :-: b

data ab :/: b where
  MkDiv :: a -> (a :*: b) :/: b

data Log b a where
  MkLog :: p -> Log b (p -> b)

data Root n a where
  MkRoot :: b -> Root n (n -> b)

One obvious problem with those definitions is that they don't support any of the math identities, except for the ones used in the definitions themselves: x + y - y = x, etc. For most types a, the type a :-: b isn't even inhabited, so while we might be able to implement absurd-style isomorphisms, the result would be completely useless.

Isomorphisms to the rescue
In the reddit comment linked at the top of this post, I worked around the problem via the hypothesis that maybe we shouldn't expect math identities to correspond to type isomorphisms so directly. Instead, I postulated an extra operator which would lift an isomorphism between a and a' to an isomorphism between a :-: b and a' :-: b, and similarly for the right-hand side and for the other type operators. It worked well enough, but since this transformation isn't constructive, we still don't get useful isomorphic functions at the end.

So, can we make the transformation constructive? Something like this:

data a :-: b where
  MkSub :: r -> Iso a (r :+: b) -> a :-: b

data a :/: b where
  MkDiv :: r -> Iso a (r :*: b) -> a :/: b

data Log b a where
  MkLog :: p -> Iso a (p -> b) -> Log b a

data Root n a where
  MkRoot :: b -> Iso a (n -> b) -> Root n a

By using id as the Iso, we can construct the same inhabitants as with the previous definitions. In addition, we can now constructively lift an isomorphism on the left- or right-hand side to an isomorphism on the whole type. The code for doing this is a bit longer than I'd like, but the idea is that since isomorphisms can already be lifted to either side of a (:+:), (:*:), or (->), we should therefore be able to concatenate an isomorphism for the left- or right-hand side with the existing Iso. For example:

liftRightAdd :: forall a b b'
              . Iso b b'
             -> Iso (a :+: b) (a :+: b')
liftRightAdd isoB = ...

liftRightSub :: forall a b b'
              . Iso b b'
             -> Iso (a :-: b) (a :-: b')
liftRightSub isoB = Iso fwdS bwdS
  where
    fwdS :: a :-: b -> a :-: b'
    fwdS (MkSub r iso) = MkSub r (liftIso iso)
      where
        liftIso :: Iso a (r :+: b) -> Iso a (r :+: b')
        liftIso iso = liftRightAdd isoB . iso
    
    bwdS :: a :-: b' -> a :-: b
    bwdS (MkSub r iso) = MkSub r (liftIso iso)
      where
        liftIso :: Iso a (r :+: b') -> Iso a (r :+: b)
liftIso iso = liftRightAdd (inverse isoB) . iso
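
Since liftRightAdd is elided above, here is one plausible implementation, assuming the same two-function Iso representation which the rest of this post pattern-matches on (Iso fwd bwd):

liftRightAdd :: forall a b b'
              . Iso b b'
             -> Iso (a :+: b) (a :+: b')
-- fmap on Either a touches only the Right side
liftRightAdd (Iso fwd bwd) = Iso (fmap fwd) (fmap bwd)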

With those tools, we should be able to take a non-constructive transformation like the one I wrote in my reddit comment:

log_b(a1) + log_b(a2) ~ Either (Log b a1) (Log b a2)
                     ≈≈ Either (Log b (p1 -> b))
                               (Log b (p2 -> b))
                      ≈ Either p1 p2
                      ≈ Log b (Either p1 p2 -> b)
                     ≈≈ Log b (p1 -> b, p2 -> b)
                     ≈≈ Log b (a1, a2)
                      ~ log_b(a1*a2)

And translate it into a constructive version:

addLogs :: forall a1 a2 b. Iso (Log b a1 :+: Log b a2)
                               (Log b (a1 :*: a2))
          -- Log b (a1 :*: a2)
addLogs = liftRightLog (liftBothMul (inverse asExp1)
                                    (inverse asExp2))
          -- Log b ((p1 -> b) :*: (p2 -> b))
        . liftRightLog expSum
          -- Log b (p1 :+: p2 -> b)
        . inverse logExp
          -- p1 :+: p2
        . liftBothAdd logExp logExp
          -- Log b (p1 -> b) :+: Log b (p2 -> b)
        . liftBothAdd (liftRightLog asExp1)
                      (liftRightLog asExp2)
          -- Log b a1 :+: Log b a2
  where
    asExp1 :: Iso a1 (P1 -> b)
    asExp2 :: Iso a2 (P2 -> b)

expSum :: Iso (a1 :+: a2 -> b)
              ((a1 -> b) :*: (a2 -> b))
logExp :: Iso (Log b (p -> b))
             p

Success? Not so fast.

Impossible isomorphisms
In order to execute the above constructive proof, we must of course implement all the smaller isomorphisms on which it is based. Two of them, asExp1 and asExp2, seem pretty silly: can we really expect any type a1 to be isomorphic to a function of the form p1 -> b, for any type b of our choosing?

I had originally postulated that I could do this because in my original definition for Log b a1, log types were only inhabited when a1 had the required form p1 -> b. With the new Iso-based definition, I'm no longer sure I can do this, and even with the old definition, it was only ever justified to transform Log b a1 into Log b (p1 -> b), not to transform a naked a1 into p1 -> b.

However, if we simply pick p1 = Log b a1, then the math identity b^(log_b(x)) = x justifies the principle. Can we write a constructive version of this identity?

One direction is easy:

mkExpLog :: a -> (Log b a -> b)
mkExpLog x (MkLog p (Iso fwd _)) = fwd x p

But the other direction is impossible. It would allow us to implement unsafeCoerce!

unExpLog :: (Log b a -> b) -> a

unsafeCoerce' :: b -> a
unsafeCoerce' = unExpLog . const

Similar issues occur with identities from the other inverse types. With a constructive version of the identity x - x = 0, for example, we can construct an inhabitant for the empty type:

unSubSelf :: a :-: a -> Void

subSelf :: [()] :-: [()]
subSelf = MkSub () (Iso fwdL bwdL)
  where
    fwdL :: [()] -> () :+: [()]
    fwdL []      = Left ()
    fwdL (():xs) = Right xs
    
    bwdL :: () :+: [()] -> [()]
    bwdL (Left  ()) = []
    bwdL (Right xs) = ():xs

bottom :: Void
bottom = unSubSelf subSelf

The fact that a recursive type is used in this counter-example hints at an explanation of the issue. The identity x - x = 0 is certainly true in math, for any number x. So at least for types with exactly x inhabitants, we would expect the isomorphism to hold. But in this case, the type [()] has infinitely-many inhabitants, and as we know from math, ∞ - ∞ is not zero, it's an indeterminate form.

Here is another implementation of the empty type, based on the identities x / x = 1 and x / y = x * (1/y):

mkDivSelf :: () -> a :/: a
mkTimesReciprocal :: a :/: b
                  -> a :*: (() :/: b)

divZero :: Void :/: Void
divZero = mkDivSelf ()

bottom' :: Void
bottom' = fst (mkTimesReciprocal divZero)

I haven't managed to derive a contradiction from any of the root identities, but they seem just as impossible to implement.

Restricted isomorphisms
Okay, so the indeterminate forms ∞ - ∞ and 0 / 0 both led to contradictions. In math, we work around the problem by saying that the division identities are only valid when the denominator is non-zero, and we don't even bother mentioning infinity because it's usually not a member of our universe of discourse, the real numbers for example.

In the world of types, instead of forbidding particular elements such as zero, it's much more common to require a witness proving that the types under consideration are valid for the current operation. In Agda, this would be a predicate on Set, while in Haskell, it would be a typeclass constraint. So, if future me is reading this and wants to continue exploring the world of inverse types, I leave him with the following recommendation: try looking for typeclass constraints under which the identities don't lead to contradictions, or even better, under which they can actually be implemented.

Saturday, August 16, 2014

Homemade FRP: mystery leak

The mystery leak is back, and this time it's personal: the bug is clearly in my code somewhere.

My first hypothesis is that since I am emulating reactive-banana's API, and I now have the same kind of leak as I had with the real reactive-banana, maybe it's the same bug with the same solution? Nope, adding a <* stepper undefined events for each kind of event does not plug the leak.

My next attempt is to simplify: what is the simplest program I can write which reproduces the leak?

main :: IO ()
main = play (InWindow "Nice Window" (200, 200) (800, 200))
            white
            30
            (pure (Circle 10))
            currentValue
            (\_ -> handleEventB ())
            (\_ -> handleEventB ())

[Heap profile: obtained using the "-hy" flag; see Real World Haskell's chapter on profiling. The rest of the profile information is useless: it says all the space is used by "MAIN".]

That's as simple as main could get while still using gloss and my homemade FRP implementation. I already know that a pure (Circle 10) behaviour doesn't leak with gloss and the real reactive-banana, so clearly, the bug must be in my homemade version.

In order to simplify further, let's bring in the FRP primitives.

data Behavior t a = Behavior
  { currentValue :: a
  , runBehavior :: t -> Behavior t a
  }

pure :: a -> Behavior t a
pure x = Behavior x (\_ -> pure x)

handleEventB :: t -> Behavior t a -> Behavior t a
handleEventB = flip runBehavior

main :: IO ()
main = play (InWindow "Nice Window" (200, 200) (800, 200))
            white
            30
            (pure (Circle 10))
            currentValue
            (\_ -> handleEventB ())
            (\_ -> handleEventB ())

[Heap profile: still leaks; as expected, since I didn't really change anything yet.]

I'm not using Behavior's ability to accept input events, so I can simplify further:

data Behavior t a = Behavior
  { currentValue :: a
  , runBehavior :: Behavior t a
  }

pure :: a -> Behavior t a
pure x = Behavior x (pure x)

main :: IO ()
main = play (InWindow "Nice Window" (200, 200) (800, 200))
            white
            30
            (pure (Circle 10))
            currentValue
            (\_ -> runBehavior)
            (\_ -> runBehavior)

[Heap profile: I don't know why the colours have changed, but clearly, it's still leaking. (Spoiler alert: the colour change will turn out to be important.)]

Usually, space leaks are due to insufficient strictness. I waste a bit of time adding bangs and seqs here and there, but of course, the application is already strict: in order to display each Picture, gloss is forced to fully-evaluate each step.

Still, just to validate that gloss really is doing the simple iterative loop I think it is doing, I replace play with my own loop.

main :: IO ()
main = loop (pure (Circle 10))
  where
    loop b = do
        print (currentValue b)
        loop (runBehavior b)

[Heap profile: compared with the previous graph, it looks like the memory is now high all the time, but look at the Y axis: the memory usage is now constant, and much smaller than before.]

Interesting, the leak is gone! Could the leak be in gloss after all? To make absolutely sure, I go back to my full, unsimplified program, and I replace its top-level by a similar loop.

main :: IO ()
main = loop (reactiveMain floats inputs)
  where
    floats = inputEvents
    inputs = never
    
    loop b = do
        print (currentValue b)
        loop (handleEventB 1.0 b)

[Heap profile: this time, the colour change is expected, because the program is completely different. But did you expect the leak to be back?]

Nope, it's not gloss. Or at least, it's not just gloss: I observe a leak when I use a gloss loop with a simple update function (G and ¬U), I also observe a leak when I use a simple loop with a complex update function (¬G and U), but I don't see a leak when I use the simple loop with the simple update function (¬G and ¬U). Therefore, there is probably a leak in gloss (G), and a second leak in the complex update function (U). That's why the graph colours changed: the different programs are illustrating different leaks.

I begin with the complex update function; since I wrote all the code, I should be able to understand the problem better.

The update function leak
The profile data is very different this time, and also very useful. It's no longer Behavior which uses up all the space: it's lists, and the profiling report (obtained with "-p") even tells us where to look:

COST CENTRE     MODULE    %time %alloc

main.loop       Main       78.0   75.8
accumE.go.(...) PureFrp     5.9    8.2
filterE.go      PureFrp     2.1    4.3
fmap            PureFrp     2.0    2.6
accumE.go       PureFrp     1.5    2.7
mappend         PureFrp     1.3    1.3
accumE.go.xs    PureFrp     1.1    1.5

accumE is indeed manipulating lists: it uses scanl to produce a list of intermediate values, obtained by accumulating all the events occurring this frame. Let's add some strictness to that list:

accumE :: a -> Event t (a -> a) -> Event t a
accumE x e = Event go
  where
    go t = x' `seq` (xs', accumE x' e')
      where
        (fs, e') = runEvent e t
        xs = scanl (flip ($)) x fs
        x' = last xs
        xs' = tail xs  -- skip the initial unmodified x

Leak fixed. Next!

One leak down, one to go. Before we move on, though, I'd like to understand why strictness is needed in this particular function.

Let's see, each time a Float event comes in, reactiveMain is called and its current value is fully evaluated. Somewhere inside reactiveMain, there is a call to accumE, and without the extra strictness annotation, we know that accumE's current value (the xs list and the x' it contains) doesn't get evaluated. Why not?

There is more than one use of accumE in reactiveMain, but the one in firstEvent looks particularly fishy:

firstEvent :: Event t a -> Event t a
firstEvent = fmap snd
           . filterE ((== 1) . fst)
           . numberEvents

numberEvents uses accumE to pair each event with an incrementing number. But for all events except the first one, we only evaluate the numeric half of the pair; could it be that the right-hand side is accumulating thunks?
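
I haven't shown numberEvents; here is one plausible definition on top of accumE, just to make the discussion concrete (the seed pair is skipped by accumE's scanl, hence the undefined):

-- hypothetical definition, for illustration only
numberEvents :: Event t a -> Event t (Int, a)
numberEvents e = accumE (0, undefined) (fmap step e)
  where
    -- pair each occurrence with the next number, starting at 1
    step x (i, _) = (i + 1, x)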

Given the fact that a single seq was sufficient to solve the problem, that seems unlikely. After all, seq only forces x' to WHNF; depending on the implementation of numberEvents, that might either mean evaluating the outer pair or the incrementing number, but definitely not the inner members of the pair.

And indeed, if we revert the seq fix and remove firstEvent from the equation, the leak returns: firstEvent is not the problem here.

There is another occurrence of numberEvents inside clickEvents. This location is interesting because I have noticed that clicking on a button causes a lot of extraneous memory to be released.

clickEvents :: Event t (Char, Int)
clickEvents = fmap swap
            $ numberEvents
            $ clickLabels

To explain the memory release, Float and click events must somehow be evaluated differently: one accumulates thunks, while the other evaluates them. And there is indeed a big difference: clickEvents ignores all the Float events, and only reacts to clicks. So how is ignoring events leading to thunk accumulation?

Well, ignoring events simply means filtering them out of the list of input events, which means that most of the time, clickEvents is going to be receiving an empty list as input and producing an empty list as output. This empty list gets fully-evaluated on every frame, in order to confirm that there is nothing to update on the screen, but that doesn't mean that all the thunks get forced: x', for example, is not a member of this empty list.

Let's look at the code for accumE again:

accumE :: a -> Event t (a -> a) -> Event t a
accumE x e = Event go
  where
    go t = (xs', accumE x' e')
      where
        (fs, e') = runEvent e t
        xs = scanl (flip ($)) x fs
        x' = last xs 
        xs' = tail xs  -- skip the initial unmodified x

scanl always returns a non-empty list starting with x, which is why we can call partial functions like last and tail without fear. Also, since x is passed through without being examined, any thunks accumulated to compute x will be passed through without being evaluated. In this case, it's the calls to last which will accumulate.

The reason the memory gets released when a button is clicked is that x is also involved in the computation of the elements of the output list, so as soon as this output list becomes non-empty and its elements get fully-evaluated, so does x. With the new strictness annotation, just evaluating the output list to WHNF is now sufficient to evaluate the next x to WHNF, thereby fixing the leak.

The gloss leak
Let's look into the second leak now, the one which occurs when I use gloss with a simple update function.

data Behavior t a = Behavior
  { currentValue :: a
  , runBehavior :: Behavior t a
  }

pure :: a -> Behavior t a
pure x = Behavior x (pure x)

main :: IO ()
main = play (InWindow "Nice Window" (200, 200) (800, 200))
            white
            30
            (pure (Circle 10))
            currentValue
            (\_ -> runBehavior)
            (\_ -> runBehavior)

If it's really a bug in gloss, then I should be able to reproduce it without any FRP-related stuff at all. Simplified as it is, a Behavior t a is isomorphic to [a], and pure x is isomorphic to repeat x. Can gloss handle infinite lists?

main :: IO ()
main = simulate (InWindow "Nice Window" (200, 200) (800, 200))
                white
                30
                [0..]
                (renderInt . head)
                (\_ _ -> tail)
  where
    -- renderInt wasn't shown in the original; any Int rendering will do
    renderInt = scale 0.2 0.2 . text . show

Nope, it can't.

Okay, so this is an extremely simple gloss program exhibiting a clear memory leak, so even if I can't find the leak myself, I should be able to send this example program to the gloss developers as a bug report. Still, where does the profiler say that the issue is?

COST CENTRE   MODULE                                 %time %alloc

MAIN          MAIN                                    89.2  71.8
main          Main                                    6.7    0.0
CAF           Graphics.Rendering.(...).Functions      1.4    0.0
CAF           GHC.IO.Encoding                         1.2    0.0
callbackIdle  Graphics.Gloss.Internals.(...).Idle     0.6   13.7
drawComponent Graphics.Gloss.Internals.(...).Picture  0.3    9.5
fromList      Main                                    0.2    4.8

It looks like callbackIdle would be a good place to start. Looking at the code, this cost center is actually a function named callback_simulate_idle. Here is a slightly simplified version:

-- | Called when we're finished drawing
-- and it's time to do some computation.
callback_simulate_idle
 :: IORef SM.State  -- ^ the simulation state
 -> IORef world   -- ^ the current world
 -> world   -- ^ the initial world
 -> (Float -> world -> IO world) -- ^ advance the world
 -> IO ()
 
callback_simulate_idle simSR worldSR worldStart worldAdvance
 = {-# SCC "callbackIdle" #-}
   do simS <- readIORef simSR
      let result | SM.stateReset simS
                 = simulate_reset simSR worldSR worldStart

                 | SM.stateRun   simS
                 = simulate_run   simSR worldSR worldAdvance

                 | SM.stateStep  simS
                 = simulate_step  simSR worldSR worldAdvance

                 | otherwise
                 = return ()

      result

The memory is probably being allocated every time worldAdvance is called, but it's another value which catches my eye: worldStart? Why would the idle function need to remember the starting state? I see that it's being used when we want to "reset". I didn't know gloss allowed you to reset to the start state.

This reset feature explains the leak: in order to reset to the start state, gloss needs to keep a pointer to the head of the infinite list, which prevents elements from being garbage-collected after being displayed.

Well, this is problematic. How to fix the leak while retaining the ability to reset the state? If the start state were obtained by calling a function, then the infinite lists returned by different invocations of this function would not be shared, thereby allowing the elements to be garbage-collected. But that's not the API gloss uses: play receives the start state directly, not a wrapping function returning the start state. And creating our own function returning this constant state wouldn't help, because the function would hold a pointer to the head of the list, which would again prevent the garbage collection.

Anyway, I'm quite curious about this reset feature by now, so I look in the code for the key combination which triggers it, to no avail. In fact... it doesn't seem like the reset feature can be triggered at all? It's dead code! Along with plenty of other code which also looks dead. In fact, of the above four cases, only the simulate_run branch is ever taken.

This is weird, but it's actually great news for the memory leak. After removing all the dead code, the reference to worldStart is no longer kept alive after the first step, and the leak goes away.

[Heap profile: the sweet plateau of fixed space leaks.]

Conclusion
It can be hard to reason about a bug, be it a space leak or otherwise, when your assumptions are flawed. In this case, I had assumed that there was only one leak, when in fact there were two.

The key to uncovering incorrect assumptions is to run lots of sanity checks. In this case, after I had concluded that the leak had to be in gloss, I tested this hypothesis by removing gloss from the unsimplified version of my program, expecting the leak to disappear. When it didn't, I knew that one of my assumptions had to be flawed.

When that happens, one tedious but very effective weapon is to simplify the program: in a smaller program, there are fewer places for incorrect assumptions to hide. It also simplifies chains of reasoning, since there are fewer elements through which to chain.