Saturday, May 02, 2015

Strongly-typed ghci commands

The ghci documentation explains how to implement conditional breakpoints by conditionally generating ghci commands depending on the values of variables which are in scope at the breakpoint. The approach works, but is hard to implement correctly because ghci's commands are stringly-typed. In this post, I will present a strongly-typed DSL for writing such commands in a type-safe way, and I will use it to reimplement conditional breakpoints and to implement data breakpoints.

Data breakpoint demo

For future reference, here is how to define data breakpoints in ghci.

:def databreak \checkData -> return (":set stop :cmd (" ++ checkData ++ ") >>= \\hasChanged -> if hasChanged then return \":set stop :list\\n:back\" else return \":step\"")

That was unreadable, but that's kind of the point: multi-stage ghci commands are complex, which is why we need a DSL to make sense of them. For now, here's how you can use :databreak to find the step at which foo writes 2 to the IORef.

initIORef :: IO (IORef Int)
initIORef = newIORef 0

hasWrittenTwo :: IORef Int -> IO Bool
hasWrittenTwo ref = do
    value <- readIORef ref
    return (value == 2)

foo :: IORef Int -> [Int] -> IO ()
foo ref = mapM_ $ \x -> do
    print x
    writeIORef ref x

> ref <- initIORef
> :databreak (hasWrittenTwo ref)
> :step (foo ref [5,4,3,2,1])
[lots of forward stepping...]
foo ref = mapM_ $ \x -> do
    print x
    writeIORef ref x
> x

By the end of the post, we'll be able to reimplement :databreak in a less succinct, but much more understandable style.

The status quo: stringly-typed commands

If you search for "conditional breakpoints" in the ghci documentation, you will see the following magic incantation (which I have split into multiple lines for readability):

:def cond \expr -> return (":cmd if (" ++ expr ++ ") \
                                \then return \"\" \
                                \else return \":continue\"")
:set stop 0 :cond (x < 3)

The meaning of the above is a multi-stage computation:

  1. When breakpoint 0 is encountered, execute the command :cond (x < 3).
  2. (x < 3) looks like an expression, but is actually the string "(x < 3)", which is passed to the definition of cond above.
  3. The body of cond is an IO String computation. It returns a string which represents a ghci command.
  4. That ghci command is: :cmd if ((x < 3)) then return "" else return ":continue".
  5. The argument to :cmd is an expression of type IO String involving the variable x, which is assumed to be in scope in the code where breakpoint 0 is.
  6. The IO computation returns one of two strings depending on the runtime value of x.

    If x is less than three,

    1. The condition for our conditional breakpoint has been triggered, so we should stop. The empty string is returned.
    2. This empty string is interpreted as a ghci command which does nothing. The multi-stage computation stops, as desired.

    If x is greater than or equal to three,

    1. The condition for our conditional breakpoint has not yet been triggered, so we should continue. The string ":continue" is returned.
    2. This string is interpreted as the ghci command :continue, which continues the execution until the next breakpoint is encountered. Go to 1.

There is a good reason why ghci is using strings to represent commands: as some of those commands are executed, the program being debugged progresses, which causes variables like x to enter the scope and others to leave it. Thus, at the moment when a command is defined, we don't yet have enough information to verify whether its definition is well-scoped.

Creating these kinds of multi-stage computations is a bit of a pain, because everything is a string and as a Haskell programmer, I need more precise types than that in order to reason about my program. It would be too easy to forget a :cmd or a return, producing a computation whose first stage type-checks, but which fails at runtime during the later stages.

> :{
> :def cond \expr -> return $ ":cmd if (" ++ expr ++ ") \
>                                  \then \"\" \
>                                  \else \":continue\""
> :}
> :set stop 0 :cond (x < 3)

The two commands above are accepted by ghci, because the expression given to :def has the required type String -> IO String, but we get a type error the first time the breakpoint is hit, much later than we'd like.

Making strings type-safe using phantom types

I have encountered this situation before while working on Hawk, an awk-like tool for manipulating text from the command-line via Haskell expressions. In the --map mode, Hawk applies the user expression to each line of stdin. This is implemented by embedding the user expression expr in the larger expression interact (unlines . map expr . lines), except with more complications due to other interfering features. To avoid making mistakes in this more complicated code, we came up with the following trick: instead of representing an expression as a raw string, we represent an expression of type a as a value of type S a. This is still a string under the hood, with an extra phantom type a documenting the purpose of the string's expression and allowing the type-checker to make sure we are using the string in the way we intended to.

data S a = S { runS :: String } deriving Show

To make use of S, I first need to define a few primitives corresponding to the many functions I might want to use inside my custom ghci commands. I need to be careful to specify the correct types here, as the type checker is not able to verify that the string I give corresponds to a function of the correct type. If I manage not to make any mistakes here, this will allow the type checker to prevent me from making mistakes later on, when combining those primitives into larger expressions.

sFmap :: Functor f => S ((a -> b) -> f a -> f b)
sFmap = S "fmap"

sReturn :: Monad m => S (a -> m a)
sReturn = S "return"

sIf :: S (a -> a -> Bool -> a)
sIf = S "\\t f b -> if b then t else f"

To combine those primitives, I can apply a string representing a function to a string representing an argument. The semantics for this operation is exactly the same as (<*>), but I cannot give an Applicative instance because my version of pure isn't polymorphic enough.

pureS :: Show a => a -> S a
pureS = S . show

(<.>) :: S (a -> b) -> S a -> S b
S f <.> S x = S ("(" ++ f ++ ") (" ++ x ++ ")")

Thanks to the type indices, Haskell can verify that I am constructing type-safe expressions, even though in the end I am still just concatenating strings.

-- Couldn't match expected type 'IO String' with actual type 'String'
sIncorrectExample :: S (Bool -> IO String)
sIncorrectExample = sIf <.> pureS ""
                        <.> pureS ":continue"

sCorrectedExample :: S (Bool -> IO String)
sCorrectedExample = sIf <.> (sReturn <.> pureS "")
                        <.> (sReturn <.> pureS ":continue")

> putStrLn $ runS sCorrectedExample
(\t f b -> if b then t else f) (return "") (return ":continue")

As the if..then..else example above demonstrates, computations expressed in terms of S primitives and (<.>) don't have access to the syntactic sugar we're accustomed to using in ordinary Haskell expressions. For this reason, it's often much more readable to define a new named function and to wrap it as a new primitive.

namedExample :: Bool -> IO String
namedExample True  = return ""
namedExample False = return ":continue"

sNamedExample :: S (Bool -> IO String)
sNamedExample = S "namedExample"

Making ghci commands type-safe

There is another layer of stringly-typed imprecision in the above code: ":continue" is a plain old string, but not every string describes a valid ghci command. Let's create a newtype for those strings which do.

newtype C = C { runC :: String } deriving Show

No phantom type this time, since I'm not aware of any situation in which some commands would be allowed but not others.

continue :: C
continue = C ":continue"

back :: C
back = C ":back"

forward :: C
forward = C ":forward"

list :: C
list = C ":list"

step :: C
step = C ":step"

steplocal :: C
steplocal = C ":steplocal"

The :cmd and :def commands both expect a string argument, but since we know that this string is supposed to represent an IO computation, we should use S to specify that.

rawCmd :: S (IO String) -> C
rawCmd body = C $ ":cmd " ++ runS body

type Name = String
rawDef :: Name -> S (String -> IO String) -> C
rawDef name body = C $ ":def " ++ name ++ " " ++ runS body

A precisely-typed version of :cmd

We can do better than this. The IO computation given to :cmd doesn't return just any string: it returns a string which represents a newline-separated sequence of commands.

import Data.List

-- compile from [C], the high-level representation of a list
-- of commands, to String, the low-level representation.
compileCmds :: [C] -> String
compileCmds = intercalate "\n" . fmap runC

sCompileCmds :: S ([C] -> String)
sCompileCmds = S "compileCmds"
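
For example, compiling a short list of the commands defined above yields the newline-separated string ghci expects. This snippet repeats the relevant definitions so that it stands alone:

```haskell
import Data.List (intercalate)

-- repeated from the post so the snippet is self-contained
newtype C = C { runC :: String }

compileCmds :: [C] -> String
compileCmds = intercalate "\n" . fmap runC

-- [back, list] becomes two ghci command lines
main :: IO ()
main = putStrLn (compileCmds [C ":back", C ":list"])
```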

We can thus make a safer version of rawCmd which asks for a computation returning a list of commands instead of a computation returning a string. We'll still be compiling to a string under the hood, but our users will have a much clearer picture of what they're doing.

cmd :: S (IO [C]) -> C
cmd body = rawCmd (sFmap <.> sCompileCmds <.> body)

A precisely-typed version of :def

For :def, it's a bit more difficult, because we would also like to give a precise type to the input string. It's a string given by the user, like the "(x < 3)" in our running example, which represents some expression. In the case of :cond, this is a boolean expression, and in order to keep things concrete, for now let's assume that this is always the case. Since the argument is a string representing a boolean expression, we can make its type more precise by using S Bool instead of String. So instead of S (String -> ...), we should use S (S Bool -> ...).

Yes, that's two nested S constructors. No, I'm not intentionally making this more complicated than it needs to be, I'm merely using the type system to make sense of the complexity which is already there. It means that :def's argument is a string representing a function which takes in a string which represents a boolean.

It's much easier to reason about this doubly-nested S if we first implement a function which compiles the precise representation S Bool -> IO [C] down to the imprecise representation String -> IO String, and then wrap that named function in an S constructor; the same trick we have used to improve readability.

-- using 'S a' instead of 'S Bool', to support any type of argument
compileDef :: (S a -> IO [C]) -> (String -> IO String)
compileDef body s = do
    cmds <- body (S s)
    return (compileCmds cmds)

sCompileDef :: S ((S a -> IO [C]) -> (String -> IO String))
sCompileDef = S "compileDef"

def :: String -> S (S a -> IO [C]) -> C
def name body = rawDef name (sCompileDef <.> body)

A strongly-typed version of :cond

Recall the stringly-typed definition of conditional breakpoints:

> :{
> :def cond \expr -> return (":cmd if (" ++ expr ++ ") \
>                                 \then return \"\" \
>                                 \else return \":continue\"")
> :}

Now that we have a type-safe version of :def, we can use it to rewrite the above definition of conditional breakpoints in a type-safe way.

defineCond :: C
defineCond = def "cond" sCond

cond :: S Bool -> IO [C]
cond expr = return [cmd (sCondHelper <.> expr)]

condHelper :: Bool -> IO [C]
condHelper True  = return []
condHelper False = return [continue]

sCond :: S (S Bool -> IO [C])
sCond = S "cond"

sCondHelper :: S (Bool -> IO [C])
sCondHelper = S "condHelper"

My version is more verbose because it is divided into several helper functions, which makes sense for a multi-stage computation.

  1. There is the stage in which :def is called, with a string representing the second stage as an argument.
  2. Then there is the cond stage, in which a custom command is constructed by concatenating two strings: one representing a computation expecting a boolean and one representing a boolean.
  3. That computation is the next stage, condHelper, which uses the boolean to determine which commands to run next.
  4. Those commands are the final stage.
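
We can also print the command which the first stage sends to ghci. The following snippet repeats just enough of the post's definitions to be self-contained:

```haskell
-- repeated from the post so the snippet is self-contained
newtype C = C { runC :: String }
data S a = S { runS :: String }

(<.>) :: S (a -> b) -> S a -> S b
S f <.> S x = S ("(" ++ f ++ ") (" ++ x ++ ")")

rawDef :: String -> S (String -> IO String) -> C
rawDef name body = C (":def " ++ name ++ " " ++ runS body)

sCompileDef :: S ((S a -> IO [C]) -> (String -> IO String))
sCompileDef = S "compileDef"

def :: String -> S (S a -> IO [C]) -> C
def name body = rawDef name (sCompileDef <.> body)

sCond :: S (S Bool -> IO [C])
sCond = S "cond"

-- the compiled first stage: a plain :def command
main :: IO ()
main = putStrLn (runC (def "cond" sCond))
```

The extra parentheses around compileDef and cond are inserted by (<.>); ghci simply ignores them.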

Note that it would not have been correct to define those helpers in a where clause, because then the function names would no longer be in scope when ghci needs to interpret the code inside the strings. The trick we are using for readability is polluting the global namespace, which I think is a good tradeoff in the context of a blog post, but I wouldn't do that in a library.

Why not a monad?

To the eyes of a Haskeller, there is one more obvious API improvement which can be made: abstracting over the mechanism which separates the different stages and wrapping it up under a monadic interface, thereby allowing a multi-staged computation to be expressed as a single unified monadic computation. With such an API, instead of translating the definition of :cond to a bunch of helper functions like we did above, we would be able to express it via something like this:

defCond :: DSL ()
defCond = def "cond" $ \sBool -> do
    bool <- interpret sBool
    if bool
    then return ()
    else executeC continue

Unfortunately, it's not possible to implement such a monadic interface. Consider the type signatures for interpret and bind:

interpret :: S a -> DSL a
(>>=) :: DSL a -> (a -> DSL b) -> DSL b

Since S a is represented as a string, there is no way to obtain the underlying value of type a in order to pass it to bind's second argument. Instead, we would need to embed that argument, the monadic continuation, inside the string which we will eventually pass on to ghci. This leads to the following variant of bind:

bindDSL :: DSL a -> S (a -> DSL b) -> DSL b

With an appropriate representation for DSL, it is certainly possible to implement bindDSL. But then for readability purposes, we'd still be separating the implementation of the second argument into a separate function, so the end result would still be separated into approximately one function per stage. For this reason, I'll skip the details of this pseudo-monadic implementation.

Implementing data breakpoints

A data breakpoint is a breakpoint which triggers not when the program reaches a particular line, but when a particular reference is assigned a new value. In a language in which each variable points to its own memory address, this can be implemented efficiently using a hardware breakpoint. Since GHC's garbage collector moves data around, changing their memory addresses, we'll have to use a much slower approach.

I plan to repeatedly use the :step command to advance the computation a tiny bit, checking between every step whether the reference I am interested in now points to a different value.

A strongly-typed version of :set stop

If we use :step when no computation is being debugged, there is nothing to step through, and ghci prints an error message. Unfortunately this message is visible to the user but not to our command, so we cannot use this error to determine when to stop stepping. Instead, I will use :set stop to specify that my command should be executed each time the execution is stopped, whether after stepping or after encountering a breakpoint.

setStop :: C -> C
setStop c = C (":set stop " ++ runC c)

Since there is only one stop hook, attaching my command will replace any other command which might have already been attached to it. I would like to restore that command once I'm done, but since there is no way to ask ghci which command is currently attached, I'll have no choice but to reset it to a convenient default instead. Most people use :set stop :list, which displays the source which is about to be executed.

resetStop :: C
resetStop = setStop list
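
As a sanity check, resetStop compiles down to that conventional default (definitions repeated so the snippet stands alone):

```haskell
-- repeated from the post so the snippet is self-contained
newtype C = C { runC :: String }

setStop :: C -> C
setStop c = C (":set stop " ++ runC c)

list :: C
list = C ":list"

resetStop :: C
resetStop = setStop list

main :: IO ()
main = putStrLn (runC resetStop)
```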

A strongly-typed version of :databreak

Now that we have built all of the scaffolding, the definition of :databreak is straightforward. We hook up a stop command so that each time we stop, we check whether the data has changed. If it has, we detach the hook. Otherwise, we step forward and if the computation hasn't finished yet, our hook will trigger again. As a result, we keep stepping forward until either the data changes or the computation terminates.

defineDatabreak :: C
defineDatabreak = def "databreak" sSetDataBreakpoint

setDataBreakpoint :: S (IO Bool) -> IO [C]
setDataBreakpoint sCheckData = return [ setStop
                                      $ cmd
                                      $ sCheckAgain <.> sCheckData ]

checkAgain :: IO Bool -> IO [C]
checkAgain checkData = do
    hasChanged <- checkData
    if hasChanged
    then onDataChanged
    else return [step]

-- go back one step, since it's the previous step
-- which caused the data to change
onDataChanged :: IO [C]
onDataChanged = return [resetStop, back]

sSetDataBreakpoint :: S (S (IO Bool) -> IO [C])
sSetDataBreakpoint = S "setDataBreakpoint"

sCheckAgain :: S (IO Bool -> IO [C])
sCheckAgain = S "checkAgain"

sOnDataChanged :: S (IO [C])
sOnDataChanged = S "onDataChanged"

Let's try it out:

> putStrLn $ runC $ defineDatabreak
:def databreak (compileDef setDataBreakpoint)
> :def databreak (compileDef setDataBreakpoint)

> ref <- initIORef
> :databreak (hasWrittenTwo ref)
> :step (foo ref [5,4,3,2,1])
[lots of forward stepping...]
foo ref = mapM_ $ \x -> do
    print x
    writeIORef ref x
> x

For convenience, I have expanded out and simplified the definition of defineDatabreak, thereby obtaining the easy-to-paste version from the top of the post.

Wednesday, January 28, 2015

Haxl anti-tutorial

It's time for another anti-tutorial! Whereas a tutorial is an advanced user giving step-by-step instructions to help newbies, an anti-tutorial is a new user describing their path to enlightenment. My approach is usually to follow the types, so my anti-tutorials are also examples of how to do that.

Previously in the series:

  1. pipes anti-tutorial
  2. reactive-banana anti-tutorial
  3. netwire anti-tutorial

Today, inspired by a question from Syncopat3d, I'll try to learn how to use Simon Marlow's Haxl library. I think Haxl is supposed to improve the performance of complicated queries which use multiple data sources, such as databases and web services, by somehow figuring out which parts of the query should be executed in parallel and which ones should be batched together in one request. Since Syncopat3d is looking for a way to schedule the execution of a large computation which involves running several external processes in parallel, caching the results which are used more than once, and batching together the processes which use the same input, Haxl seemed like a good fit!

Black triangle

To understand the basics of the library, I'd like to create a black triangle, that is, a trivial program which nevertheless goes through the whole pipeline. So as a first step, I need to figure out what the stages of Haxl's pipeline are.

Since I'm using a type-directed approach, I need some type signature from which to begin my exploration. Hunting around Haxl's hackage page for something important-looking, I find GenHaxl, "the Haxl monad". Despite the recent complaints about the phrase "the <something> monad", finding that phrase here is quite reassuring, as it gives me a good idea of what to expect in this package: a bunch of commands which I can string together into a computation, and some function to run that computation.

Thus, to a first approximation, the Haxl pipeline has two stages: constructing a computation, and then running it.

A trivial computation

Since GenHaxl is a monad, I already know that return 42 is a suitably trivial and valid computation, so all I need now is a function to run a GenHaxl computation.

That function is typically right after the definition of the datatype, and indeed, that's where I find runHaxl. I see that in addition to my trivial GenHaxl computation, I'll need a value of type Env u. How do I make one?

Clicking through to the definition of Env, I see that emptyEnv can make an Env u out of a u. Since there are no constraints on u so far, I'll simply use (). I fully expect to revisit that decision once I figure out what the type u represents in the type GenHaxl u a.

>>> myEnv <- emptyEnv ()
>>> runHaxl myEnv (return 42)
42

Good, we now have a base on which to build! Let's now make our computation slightly less trivial.

What's a data source?

There are a bunch of GenHaxl commands listed after runHaxl, but most of them seem to be concerned with auxiliary matters such as exceptions and caching. Except for one:

dataFetch :: (DataSource u r, Request r a) => r a -> GenHaxl u a

That seems to be our link to another stage of Haxl's pipeline: data sources. So the first stage is a data source, then we describe a computation which fetches from the data source, then finally, we run the computation.

So, I want an r a satisfying DataSource u r. Is there something simple I could use for r? The documentation for DataSource doesn't list any instances, so I guess I'll have to define one myself. Let's see, there is only one method to implement, fetch, and it uses both u and r. The way in which they're used should give me a hint as to what those type variables represent.

fetch :: State r
      -> Flags
      -> u
      -> [BlockedFetch r]
      -> PerformFetch

I find it surprising that neither u nor r seem to constrain the output type. In particular, u is again completely unconstrained, so I'll keep using (). The description of the u parameter, "User environment", makes me think that indeed, I can probably get away with any concrete type of my choosing. As for r, which seems to be the interesting part here, we'll have to look at the definitions for State and BlockedFetch to figure out what it's about.

class Typeable1 r => StateKey r where
    data State r

data BlockedFetch r
  = forall a . BlockedFetch (r a) (ResultVar a)

Okay, so State r is an associated type in an otherwise-empty typeclass, so I can again pick whatever I want. BlockedFetch r is much more interesting: it has an existential type a, which ties the r a to its ResultVar a. The documentation for BlockedFetch explains this link very clearly: r a is a request with result a, whose result must be placed inside the ResultVar a. This explains why r wasn't constraining fetch's output type: this ResultVar is the Haskell equivalent of an output parameter. So instead of being a pure function returning something related to r, this fetch method must be an imperative computation which fills in its output parameters before returning to the caller. fetch's return type, PerformFetch, is probably some monad which has commands for filling in ResultVars.

data PerformFetch = SyncFetch (IO ()) | ...

At least in the simple case, PerformFetch is a simple wrapper around IO (), so I guess ResultVar must be a simple wrapper around MVar or IORef.
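
To make the output-parameter idea concrete, here is a toy stand-in for ResultVar built on an IORef. This is my guess at the general shape, not Haxl's actual definition; only the putSuccess name and argument order are taken from the real API.

```haskell
import Data.IORef

-- a hypothetical ResultVar: an initially-empty mutable cell
-- which the data source fills in (not Haxl's real code)
newtype ResultVar a = ResultVar (IORef (Maybe a))

newResultVar :: IO (ResultVar a)
newResultVar = ResultVar <$> newIORef Nothing

putSuccess :: ResultVar a -> a -> IO ()
putSuccess (ResultVar ref) x = writeIORef ref (Just x)

takeResult :: ResultVar a -> IO (Maybe a)
takeResult (ResultVar ref) = readIORef ref

-- fetch-style code writes to the var instead of returning a value
main :: IO ()
main = do
    var <- newResultVar
    putSuccess var (42 :: Int)
    takeResult var >>= print
```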

A trivial data source

Anyway, we now have a clear idea of what r a is: a request whose result has type a. Let's create a simple data source, Deep Thought, which only knows how to answer a single request.

data DeepThought a where
    AnswerToLifeTheUniverseAndEverything :: DeepThought Int

I'm using a GADT so that each request can specify the type of its answer. For example, I could easily add a request whose answer is a string instead of a number:

data DeepThought a where
    AnswerToLifeTheUniverseAndEverything :: DeepThought Int
    QuestionOfLifeTheUniverseAndEverything :: DeepThought String

But of course, Deep Thought isn't powerful enough to answer that request.

We also know that fulfilling a request isn't done by returning an answer, but by assigning the answer to a ResultVar.

runDeepThought :: DeepThought a -> ResultVar a -> IO ()
runDeepThought AnswerToLifeTheUniverseAndEverything var
  = putSuccess var 42

Alright, let's try to make DeepThought an official data source by implementing the DataSource typeclass:

instance DataSource () DeepThought where
    fetch _ _ _ reqs = SyncFetch $
        forM_ reqs $ \(BlockedFetch req var) ->
          runDeepThought req var

There are also a bunch of other easy typeclasses to implement; see the next source link for details.

A trivial state

I now have everything I need for my dataFetch to compile...

>>> runHaxl myEnv (dataFetch AnswerToLifeTheUniverseAndEverything)
*** DataSourceError "data source not initialized: DeepThought"

...but the execution fails at runtime. Now that I think about it, it makes a lot of sense: even though I don't use it, fetch receives a value of type State DeepThought, but since this is a custom type and I haven't given any of its inhabitants to anything, there is no way for Haxl to conjure one up from thin air. There must be a way to initialize the state somehow.

I must say that I'm a bit disappointed by how imperative Haxl's API has been so far. Whether we're assigning values to result variables or initializing a state, correctness requires us to perform actions which aren't required by the types and thus can't be caught until runtime. This is unusual for a Haskell library, and if the rest of the API is like this, I'm afraid following the types won't be a very useful exploration technique.

Anyway, I couldn't find any function with "init" in the name, but by looking for occurrences of State in the types, I figured out how to perform the initialization: via the environment u which I had left empty until now.

instance StateKey DeepThought where
    data State DeepThought = NoState

initialState :: StateStore
initialState = stateSet NoState stateEmpty

>>> myEnv <- initEnv initialState ()
>>> runHaxl myEnv (dataFetch AnswerToLifeTheUniverseAndEverything)
42

It worked! We have a trivial data source, we have a trivial expression which queries it, we can run our expression, and we obtain the right answer. That's our black triangle!

Multiple data sources, multiple requests

Next, I'd like to try a slightly more complicated computation. Syncopat3d gives the following example:

F_0(x, y, z) = E(F_1(x, y), F_2(y, z))

Here we clearly have two different data sources, E and F. Syncopat3d insists that E is computed by an external program, which is certainly possible since our data sources can run any IO code, but I don't think this implementation detail is particularly relevant to our exploration of Haxl, so I'll create two more trivial data sources.

data E a where
    E :: String -> String -> E String
  deriving Typeable

data F a where
    F_1 :: String -> String -> F String
    F_2 :: String -> String -> F String
  deriving Typeable

runE :: E a -> ResultVar a -> IO ()
runE (E x y) var = putSuccess var (printf "E(%s,%s)" x y)

runF :: F a -> ResultVar a -> IO ()
runF (F_1 x y) var = putSuccess var (printf "F_1(%s,%s)" x y)
runF (F_2 x y) var = putSuccess var (printf "F_2(%s,%s)" x y)

Since GenHaxl is a monad, assembling those three requests should be quite straightforward...

>>> runHaxl myEnv $ do
...     f1 <- dataFetch (F_1 "x" "y")
...     f2 <- dataFetch (F_2 "y" "z")
...     dataFetch (E f1 f2)
"E(F_1(x,y),F_2(y,z))"


...but if I add a bit of tracing to my DataSource instances, I see that this computation is performed in three phases: F_1, F_2, then E.

>>> runHaxl myEnv ...
Computing ["F_1(x,y)"]
Computing ["F_2(y,z)"]
Computing ["E(F_1(x,y),F_2(y,z))"]


This is not the trace I was hoping to see. Since fetch is receiving a list of request/var pairs, I expected Haxl to send me multiple requests at once, in case my data source knows how to exploit commonalities in the requests. But it doesn't look like Haxl figured out that the F_1 and F_2 requests could be performed at the same time.

It turns out that this is a well-known problem with Haxl's monadic interface. I remember it now: it was described in a presentation about Haxl (slide 45) when it came out. The solution is to use the Applicative syntax to group the parts which are independent of each other:

>>> runHaxl myEnv $ do
...     (f1,f2) <- liftA2 (,) (dataFetch (F_1 "x" "y"))
...                           (dataFetch (F_2 "y" "z"))
...     dataFetch (E f1 f2)
Computing ["F_2(y,z)","F_1(x,y)"]
Computing ["E(F_1(x,y),F_2(y,z))"]


Good, the F_1 and F_2 requests are now being performed together.


I don't like the way in which we have to write our computations. Consider a slightly more complicated example:

  E(F_1(x,y), F_2(y,z)),
  E(F_1(x',y'), F_2(y',z'))

Since the four F_1 and F_2 requests at the leaves are all independent, it would make sense for Haxl to batch them all together. But in order to obtain this behaviour, I have to list their four subcomputations together.

>>> runHaxl myEnv $ do
...     (f1,f2,f1',f2') <- (,,,) <$> (dataFetch (F_1 "x" "y"))
...                              <*> (dataFetch (F_2 "y" "z"))
...                              <*> (dataFetch (F_1 "x'" "y'"))
...                              <*> (dataFetch (F_2 "y'" "z'"))
...     (e1,e2) <- (,) <$> (dataFetch (E f1 f2))
...                    <*> (dataFetch (E f1' f2'))
...     dataFetch (E e1 e2)
Computing ["F_2(y',z')","F_1(x',y')","F_2(y,z)","F_1(x,y)"]
Computing ["E(F_1(x',y'),F_2(y',z'))","E(F_1(x,y),F_2(y,z))"]
Computing ["E(E(F_1(x,y),F_2(y,z)),E(F_1(x',y'),F_2(y',z')))"]


I feel like I'm doing the compiler's job, manually converting from the nested calls I want to write to the leaves-to-root, layered style I have to write if I want batching to work.

So I stopped working on my anti-tutorial and wrote a toy library which converts from one style to the other :)

...and when I came back here to show it off, I discovered that GenHaxl already behaved exactly like my library did! You just have to know how to define your intermediate functions:

f_1 :: GenHaxl () String -> GenHaxl () String -> GenHaxl () String
f_1 x y = join (dataFetch <$> (F_1 <$> x <*> y))

f_2 :: GenHaxl () String -> GenHaxl () String -> GenHaxl () String
f_2 x y = join (dataFetch <$> (F_2 <$> x <*> y))

e :: GenHaxl () String -> GenHaxl () String -> GenHaxl () String
e x y = join (dataFetch <$> (E <$> x <*> y))

And with those, we can now describe the computation as nested function calls, as desired.

>>> x = pure "x"
>>> y = pure "y"
>>> z = pure "z"
>>> x' = pure "x'"
>>> y' = pure "y'"
>>> z' = pure "z'"
>>> runHaxl myEnv $ e (e (f_1 x y) (f_2 y z))
...                   (e (f_1 x' y') (f_2 y' z'))
Computing ["F_2(y',z')","F_1(x',y')","F_2(y,z)","F_1(x,y)"]
Computing ["E(F_1(x',y'),F_2(y',z'))","E(F_1(x,y),F_2(y,z))"]
Computing ["E(E(F_1(x,y),F_2(y,z)),E(F_1(x',y'),F_2(y',z')))"]



I now understand Haxl's purpose much better. With the appropriate intermediate functions, Haxl allows us to describe a computation very concisely, as nested function calls. Haxl executes this computation one layer at a time: all of the leaves, then all the requests which only depend on the leaves, and so on. Within a single layer, the requests are subdivided again, this time according to their respective data sources. Finally, for a given data source, it is fetch's responsibility to find and exploit opportunities for reusing work across the different requests belonging to the same batch. There are also some features related to caching and parallelism which I didn't explore.
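
This layer-at-a-time model is small enough to reconstruct in miniature. The sketch below is my own toy version, not Haxl's implementation: requests are plain Strings rather than typed r a values, and it is the Applicative instance which merges independent requests into one batch.

```haskell
-- a computation is either finished, or blocked on a batch of
-- requests together with a continuation expecting their results
data Fetch a
  = Done a
  | Blocked [String] ([String] -> Fetch a)

instance Functor Fetch where
  fmap f (Done a)       = Done (f a)
  fmap f (Blocked rs k) = Blocked rs (fmap f . k)

instance Applicative Fetch where
  pure = Done
  Done f       <*> x      = fmap f x
  Blocked rs k <*> Done a = Blocked rs (\res -> fmap ($ a) (k res))
  Blocked rs k <*> Blocked rs' k' =
    Blocked (rs ++ rs')  -- independent requests end up in one batch
            (\res -> let (xs, ys) = splitAt (length rs) res
                     in k xs <*> k' ys)

request :: String -> Fetch String
request r = Blocked [r] (\[res] -> Done res)

-- run one layer at a time, recording the batches in order
runFetch :: (String -> String) -> Fetch a -> ([[String]], a)
runFetch _      (Done a)       = ([], a)
runFetch handle (Blocked rs k) =
  let (batches, a) = runFetch handle (k (map handle rs))
  in (rs : batches, a)

main :: IO ()
main = do
  let comp = (,) <$> request "F_1(x,y)" <*> request "F_2(y,z)"
  print (fst (runFetch id comp))  -- both requests land in one batch
```

Running the two-request computation records a single batch containing both requests, mirroring the "Computing [...]" traces above.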

I also understand Haxl's implementation much better, having reimplemented part of it myself. In fact, I'd be interested in writing a follow-up post named "Homemade Haxl", in the same vein as my "Homemade FRP" series. What do you think? Are you more interested in watching me learn some new libraries, watching me reimplement libraries, or watching me implement new stuff? I'll be doing all three anyway, I just want to know which of those activities I should blog about :)

Really, your feedback would be greatly appreciated, as the only reason I started this anti-tutorial series in the first place is that my first write-up on understanding Pipes was so surprisingly popular. I've streamlined the format a lot since that first post, and I want to make sure I haven't lost any of the magic in the process!

Sunday, December 21, 2014

The "99 Bottles of Beers" of Type Systems

"Hello World" is a good first example program because it is small, but also because it encourages the reader to get into the driver's seat and take control of the program. Copy-paste the "Hello World" listing from a website, and you're just blindly following instructions. Change it to print "Hello Mom", and you're boldly taking your first step towards the unknown, into a world where it is now you who is giving the instructions.

New programmers need to take that step, because programming anything non-trivial requires making your own decisions about how things are implemented. If your boss were making all the decisions for you, you wouldn't be a programmer, you'd be a typist.

The "Hello World" of Type Systems

Once you become an experienced programmer, "Hello World" examples are still useful as an introduction to new languages and new systems. Once you have a working base, you can experiment by making small changes and verifying whether they work as you expect, or if you need to read more tutorials.

For type systems, I guess a "Hello World" program would be a small datatype/structure/class containing a few simple fields. The standard here isn't as well-established as with "Hello World", but describing a person is quite common:

data Person = Person
  { name :: String
  , age :: Int
  }

I've used Haskell here, but regardless of the language this was written in, you could still easily infer how to add a field for the person's height.
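
For instance, in the Haskell version, adding a height field might look like this (choosing Double, in meters, is my own arbitrary pick):

data Person = Person
  { name   :: String
  , age    :: Int
  , height :: Double  -- the new field
  }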

99 Bottles of Beer

A little down the road, another introductory program is "99 bottles of beer on a wall". This one teaches budding programmers another important lesson: it's possible to write a program which prints out more text than what you've written in its source code. More specifically, the program shows how to use a variable to abstract over the part of the text which varies from one iteration to the other, and how to use a loop to determine how many iterations to make and which value the variable should take in each one.
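
As a reminder, the core of that lesson looks something like this in Haskell (a simplified version which ignores the song's grammatical special cases like "1 bottle" and "no more bottles"):

-- One verse per iteration; the variable n abstracts over the count.
bottles :: Int -> String
bottles n = show n ++ " bottles of beer on the wall, "
         ++ show n ++ " bottles of beer."

main :: IO ()
main = mapM_ (putStrLn . bottles) [99,98..1]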

For type systems, a "99 bottles of beer" program would teach the same lesson: it's possible to write a program which uses larger types than those you've written in the source code. This is rarely needed, but it's possible! Even in a large, complicated application, you might have a manager of pools of worker threads processing lists of person values, but Manager (Pool (WorkerThread (List Person))) is still a fixed type which you write down explicitly in your program. It's as if you had abstracted out the number of beers to print, but then wrote explicit calls with n = 99, n = 98 and so on, instead of using a loop to generate the calls at runtime. Our "99 bottles of beer" example should generate types at runtime.
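
To make that point concrete, here is a made-up version of those wrappers; note that the elaborate type of staff is still spelled out in full in the source code:

newtype Person         = Person String
newtype List a         = List [a]
newtype WorkerThread a = WorkerThread a
newtype Pool a         = Pool [a]
newtype Manager a      = Manager a

-- A large type, but still one which is fixed at compile time.
staff :: Manager (Pool (WorkerThread (List Person)))
staff = Manager (Pool [WorkerThread (List [Person "Alice"])])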

The "99 Bottles of Beer" of Type Systems

The simplest such example I could think of is as follows:

  1. Parse a non-negative integer n from standard input or from a command-line argument.
  2. If n is 0, print 42.
  3. Otherwise, print the pair (x,x), where x is the text which would have been printed if n was one unit smaller. For example, the output for n = 3 should be "(((42,42),(42,42)),((42,42),(42,42)))".

There is one important restriction: the pair (x, x) must first be constructed before being printed, and its representation must not have the same type as x.

An incorrect solution

The reason the restriction is important is that otherwise, it would be possible to implement the program using a single type, that of integer trees:

-- *not* a valid solution
import Text.Printf (printf)

data Tree a = Leaf a | Branch (Tree a) (Tree a)

showTree :: Show a => Tree a -> String
showTree (Leaf x)       = show x
showTree (Branch t1 t2) = printf "(%s,%s)" (showTree t1)
                                           (showTree t2)

printTree :: Tree Int -> Int -> IO ()
printTree v 0 = putStrLn (showTree v)
printTree v n = printTree (Branch v v) (n-1)

main :: IO ()
main = readLn >>= printTree (Leaf 42)

That program does not demonstrate that it's possible to write a program which uses larger types than those you've written in the source code.

Haskell solution

Instead of using the same type Tree Int at every iteration, we want to construct a sequence of larger and larger types:

  1. Int
  2. (Int,Int)
  3. ((Int,Int),(Int,Int))
  4. (((Int,Int),(Int,Int)),((Int,Int),(Int,Int)))
  5. ...

In Haskell, this can be achieved via polymorphic recursion, meaning that we recur at a different type than the one which the current call is being instantiated at. For example, the call printTree 42 1 instantiates the type variable a = Int, while the recursive call printTree (42,42) 0 instantiates the type variable a = (Int,Int).

printTree :: Show a => a -> Int -> IO ()
printTree v 0 = print v
printTree v n = printTree (v,v) (n-1)

main :: IO ()
main = readLn >>= printTree 42

Polymorphic recursion is often used to recur on a smaller type, but since in this function it is the Int argument which is getting smaller, we can recur on a larger type without risking an infinite loop.
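
Incidentally, the sequence of types above is exactly what a "nested datatype" captures. Here is a sketch (the names Complete, Done and Deeper are mine) whose construction function also uses polymorphic recursion:

data Complete a = Done a | Deeper (Complete (a, a))

-- Polymorphic recursion again: the recursive call is at type (a, a).
complete :: a -> Int -> Complete a
complete x 0 = Done x
complete x n = Deeper (complete (x, x) (n - 1))

render :: Show a => Complete a -> String
render (Done x)   = show x
render (Deeper t) = render t

For example, render (complete (42 :: Int) 2) yields "((42,42),(42,42))", matching the specification's output for n = 2.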

C++ solution

Speaking of infinite loops, C++ uses compile-time templates to handle polymorphic recursion, and this implementation strategy causes the compiler to instantiate more and more templates when we recur on a larger type. Eventually, gcc gives up with "template instantiation depth exceeds maximum of 900".

We can work around the problem by specializing the template at one of the types encountered before that limit, and printing an error message instead of recurring further.

#include <stdio.h>

template<typename A>
struct Pair {
  A fst;
  A snd;
  Pair(A fst, A snd)
  : fst(fst), snd(snd)
  {}
};

void print(int n) {
  printf("%d", n);
}

template<typename A>
void print(Pair<A> pair) {
  printf("(");
  print(pair.fst);
  printf(",");
  print(pair.snd);
  printf(")");
}

template<typename A>
void println(A value) {
  print(value);
  printf("\n");
}

template<typename A>
struct PrintTree {
  static void call(int depth, A value) {
    if (depth == 0) {
      println(value);
    } else {
      PrintTree<Pair<A> >::call(depth - 1, Pair<A>(value, value));
    }
  }
};

// Cut the compile-time recursion off by specializing at a fixed depth.
// The depth at which to specialize was truncated in the original listing;
// 8 is used here for readability, any depth below the compiler's limit works.
typedef Pair<Pair<Pair<Pair<Pair<Pair<Pair<Pair<int> > > > > > > > TooDeep;

template<>
struct PrintTree<TooDeep> {
  static void call(int, TooDeep) {
    fprintf(stderr, "maximum depth exceeded.\n");
  }
};

int main() {
  int depth;
  scanf("%d", &depth);
  PrintTree<int>::call(depth, 42);
  return 0;
}

Java solution

Other implementation strategies, such as Java's type erasure, need no such artificial bounds.

class Pair<A> {
  private A fst;
  private A snd;
  public Pair(A fst, A snd) {
    this.fst = fst;
    this.snd = snd;
  }
  public String toString() {
    return "(" + fst.toString() + "," + snd.toString() + ")";
  }
}

public class Main {
  public static <A> void printTree(int depth, A value) {
    if (depth == 0) {
      System.out.println(value);
    } else {
      printTree(depth - 1, new Pair<A>(value, value));
    }
  }
  public static void main(String[] args) {
    Integer n = Integer.valueOf(args[0]);
    Integer m = 42;
    printTree(n, m);
  }
}

Many programming languages have the ability to work with larger types than those which are known at compile time, but for some reason, the feature is rarely used.

Perhaps one of the reasons is that the feature is rarely covered in tutorials. I have presented a small example demonstrating the feature, and I have demonstrated that the example isn't specific to one particular type system by implementing it in a few different languages. If you're writing a tutorial for a language and you have already covered "Hello World", "99 bottles of beer" and the "Hello World" of type systems, please consider also covering the "99 bottles of beer" of type systems.

Although, if I want this example to catch on, I should probably give it a better name. Maybe "Complete trees whose leaves are 42", or simply "Complete 42" for short?

Monday, December 08, 2014

How to package up binaries for distribution

This weekend, I wrote a game (in Haskell of course!) for Ludum Dare, an event in which you have 48h or 72h to create a game matching an imposed theme. It was really challenging!

Once the event was over, it was time to package my game in a form which others could play. Since the packaging procedure wasn't obvious, I'm documenting it here for future reference. The procedure isn't specific to Haskell, but I'll mention that linking Haskell programs statically, as advised around the web, didn't work for me on any platform.

Windows
While your program is running, use Process Explorer to list the .dll files it is currently using (there is also Dependency Walker, but it missed glut32.dll on my program). Copy those DLLs to the same folder as your executable, zip the folder, and ship it.

OS X
Use otool -L to list the .dylib files on which your executable depends, and copy them to the same folder as your executable (or a libs subfolder). Use install_name_tool to change all the dylib paths embedded in your executable to @executable_path/foo.dylib (or @executable_path/libs/foo.dylib). Zip the folder, and ship it.

Linux
Use ldd to list the .so files on which your executable depends, and copy all of them (except the core system libraries, such as libc and the dynamic loader, which should come from the target system) to the same folder as your executable (or a libs subfolder). Add ld-options: -Wl,-rpath -Wl,$ORIGIN (or ld-options: -Wl,-rpath -Wl,$ORIGIN/libs) to your cabal file, pass those flags directly to gcc, or use chrpath to change the existing RPATH if there is one. Zip the folder, and ship it.

Tuesday, October 28, 2014

Understanding "Strongly-typed Bound", part 1

First, giving credit where credit is due. The Bound library is written by Edward Kmett, and so is the strongly-typed variant I want to explore in this series. I learned about the strongly-typed version via a comment by geezusfreeek, in response to a question by _skp.

I have a lot to say about this script, and since the first thing I want to say about it involves writing down some typing rules, I thought I'd write them on the whiteboard and publish a video! Please let me know what you think of this new format.

Saturday, September 06, 2014

Prisms lead to typeclasses for subtractive types

In my last post, I identified some issues with subtractive types, namely that math identities such as ∀ a. a + -a = 0, once they are translated into Haskell, would not be valid for all a. More precisely, the existence of a function of type

cancelSub :: forall a. (a :+: Negative a) -> Zero

would make it easy to implement a contradiction, regardless of the way in which we represent Negative a:

type a :+: b = Either a b
type Zero = Void

contradiction :: Void
contradiction = cancelSub (Left ())

I concluded by blaming the unconstrained forall. That is, I was hoping that the identity could be saved by finding some typeclass C such that C a => (a :+: Negative a) -> Void would be inhabited, or something along those lines. But what should C look like?


Earlier today, I was re-listening to Edward Kmett on Lenses, in the first Haskell Cast episode. While discussing Prisms at 38:30, Kmett explained that a Lens' s a splits an s into a product consisting of an a and of something else, and that correspondingly, a Prism' s a splits an s into a sum consisting of an a and of something else. It occurred to me that the first "something else" should be s :/: a, while the second "something else" should be s :-: a.

Since Prism' s a is only inhabited for some combinations of s and a but not others, I thought a good choice for my C typeclass might be a proof that there exists a prism from s to a.

cancelSub :: HasPrism a (Negative a)
          => (a :+: Negative a) -> Void

That is, instead of restricting which types can be negated, let's restrict which types are allowed to appear together on the left- and right-hand sides of a subtractive type.

Four typeclasses

All right, so what should the HasPrism typeclass look like? In the podcast, Kmett explains that we can "construct the whole thing out of the target of a prism", and that we can pattern-match on the whole thing to see if it contains the target. In other words:

class HasPrism s a where
    construct :: a -> s
    match :: s -> Maybe a

This Maybe a discards the case I am interested in, the s :-: a. Let's ask the typeclass to provide a representation for this type, so we can give a more precise type to match.

class HasPrism s a where
    type s :-: a
    constructLeft :: a -> s
    constructRight :: (s :-: a) -> s
    match :: s -> Either a (s :-: a)

Our typeclass now has three methods, for converting back and forth between s and its two alternatives. We can combine those three methods into a single bijection, and with this final transformation, we obtain a form which is easily transferable to the other inverse types:

class Subtractable a b where
    type a :-: b
    asSub :: Iso a ((a :-: b) :+: b)

class Divisible a b where
    type a :/: b
    asDiv :: Iso a ((a :/: b) :*: b)

class Naperian b a where
    type Log b a
    asLog :: Iso a (Log b a -> b)

class Rootable n a where
    type Root n a
    asRoot :: Iso a (n -> Root n a)
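
The four classes above are stated in terms of an Iso type from an earlier post. For completeness, a minimal stand-in (my own sketch: just a pair of functions assumed to be inverses) might look like this, along with the inverse and swap helpers used below:

import Control.Category
import Prelude hiding (id, (.))

-- A bijection between a and b, represented as two inverse functions.
data Iso a b = Iso { run :: a -> b, unrun :: b -> a }

-- Isomorphisms compose, giving us (>>>) from Control.Category.
instance Category Iso where
  id = Iso (\x -> x) (\x -> x)
  Iso f g . Iso f' g' = Iso (\x -> f (f' x)) (\x -> g' (g x))

inverse :: Iso a b -> Iso b a
inverse (Iso f g) = Iso g f

swap :: Iso (Either a b) (Either b a)
swap = Iso (either Right Left) (either Right Left)
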

Routing around the contradiction

The real test for these new definitions is whether they allow us to define constructive versions of the math identities for subtraction, division, logarithms and roots. Once annotated with the proper type class constraint, does cancelSub still lead to a contradiction? If not, can it be implemented?

It can!

type Negative a = Zero :-: a

cancelSub :: forall a. Subtractable Zero a
          => Iso (a :+: Negative a) Zero
         -- a :+: Negative a
cancelSub = swap
         -- Negative a :+: a
        >>> inverse iso
         -- Zero
  where
    iso :: Iso Zero (Negative a :+: a)
    iso = asSub

The math version of the constrained type is still ∀ a. a + -a = 0, but with a new proviso "whenever -a exists". It's still the same identity, it's just that with real numbers, -a always exists, so the proviso does not usually need to be spelled out.

In the world of types, Negative a does not always exist. In fact, there's only one possible instance of the form Subtractable Zero a:

instance Subtractable Zero Zero where
    type Zero :-: Zero = Zero
    asSub :: Iso Zero ((Zero :-: Zero) :+: Zero)
    asSub = Iso Right (either id id)

In other words, in the world of types, the proviso "whenever -a exists" simply means "when a = 0".

Other identities

I wish I could say that all the other identities become straightforward to implement once we add the appropriate typeclass constraints, but alas, this is not the case. I plan to discuss the remaining issues in a subsequent post.

For now, I am content to celebrate the fact that at least one contradiction has been slain :)

Friday, August 29, 2014

Edward Kmett likes my library :)

I have been re-listening to old episodes of the Haskell Cast, and it turns out I missed something really, shall we say, relevant to my interests.

In the very first episode, Edward Kmett talks about lens and a few of his other libraries. Then, near the end, he is asked about interesting Haskell stuff aside from his libraries. His answer, at 59:45:

"There was a really cool Commutativity monad [...] that really struck me as an interesting approach to things, I thought it was particularly neat toy."
— Edward Kmett

Yay, that's my library! And here are the blog posts by Wren that he mentions, about generalizing my approach.