Friday, November 03, 2017

Computing with Impossible Types

Edward Kmett recently posted a puzzling gist seemingly showing that at the type level, the () kind has more than one inhabitant. The goal of this post is to explain what's going on.

Stuck Type Expressions

Here is a simple type family.

{-# LANGUAGE TypeFamilies, UndecidableInstances #-}
import Data.Proxy

type family F a where
  F (Maybe a) = [F a]
  F a         = a

Since F (Maybe Int) and [F Int] both evaluate to [Int], the following type-checks.

-- |
-- >>> :kind! F (Maybe Int) -> [F Int]
-- F (Maybe Int) -> [F Int] :: *
-- = [Int] -> [Int]
runFMaybeInt :: F (Maybe Int) -> [F Int]
runFMaybeInt = id

We didn't use any Int-specific code, so let's make the type more polymorphic.

-- |
-- >>> :set -XRankNTypes
-- >>> :kind! forall b. F (Maybe b) -> [F b]
-- forall b. F (Maybe b) -> [F b] :: *
-- = [F b] -> [F b]
runFMaybe :: Proxy b -> F (Maybe b) -> [F b]
runFMaybe _ = id

Notice that F (Maybe b) and [F b] both evaluate to [F b], not to [b]! That's because we don't yet know whether b is going to be instantiated with a Maybe something or not, so unlike F Int, the type expression F b cannot be simplified further. The evaluation of F b is stuck, and will remain so until we learn more information about b. The code still type-checks though, because even though we don't know which concrete type F b will expand to, we do know that [F b] and [F b] will expand to the same type because they are the same type expression.
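To see a stuck expression get unstuck, we can instantiate b at a concrete type. Here is a compilable sketch using the definitions above (the name example is mine):

```haskell
{-# LANGUAGE TypeFamilies, UndecidableInstances #-}
import Data.Proxy

type family F a where
  F (Maybe a) = [F a]
  F a         = a

runFMaybe :: Proxy b -> F (Maybe b) -> [F b]
runFMaybe _ = id

-- Instantiating b at Int unsticks F b: F (Maybe Int) and [F Int]
-- both reduce to [Int], so runFMaybe specializes to [Int] -> [Int].
example :: [Int]
example = runFMaybe (Proxy :: Proxy Int) [1, 2, 3]
```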

Pattern-Matching on the Shape

Here is another type family.

type family G a where
  G (f a) = [G a]
  G a     = Double

This time, the type family isn't pattern-matching on whether or not its input type is a Maybe something, but on whether or not it is a type which, like Maybe Int, consists of a type constructor applied to a type. Let's look at a concrete example:

-- |
-- >>> :kind! G (Maybe Int) -> [G Int]
-- G (Maybe Int) -> [G Int] :: *
-- = [Double] -> [Double]
runGMaybeInt :: G (Maybe Int) -> [G Int]
runGMaybeInt = id

No surprises there. Let's make the type more polymorphic:

-- |
-- >>> :kind! forall g b. G (g b) -> [G b]
-- forall g b. G (g b) -> [G b] :: *
-- = [G b] -> [G b]
runGMaybe :: Proxy (g b) -> G (g b) -> [G b]
runGMaybe _ = id

As before, the type expression G b is stuck because we don't yet know whether b is going to be instantiated to a type with the right shape such as Maybe Int, or to a type with a different shape such as Int. But regardless of which one it is, [G b] and [G b] will both expand to the same type, so the implementation type-checks.

One last example:

>>> :kind! forall b. G (G b) -> [G b]
forall b. G (G b) -> [G b] :: *
= G (G b) -> [G b]

Note that G (G b) did not simplify! G b might look like it has the right shape to match g b, but it doesn't, because G is a type family, not a type constructor. It's a good thing it doesn't match, because if it did, evaluating type expressions like G (G Int) wouldn't be confluent! If we evaluate the outer application first, we get [G Int] and then [Double], whereas if we evaluate the inner application first, we get G Double and then Double.

To be clear, evaluating the outer application first doesn't work because we don't yet know whether the type expression G Int will evaluate to something of the form f a or not. So the inner application is evaluated first, and G (G Int) evaluates to Double.
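We can state this reduction as a type-equality proof: Refl only typechecks because the inner application reduces first, so G (G Int) really is Double (a small sketch; proof is my name for the witness):

```haskell
{-# LANGUAGE TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

type family G a where
  G (f a) = [G a]
  G a     = Double

-- G Int reduces to Double (Int does not have the shape "f a"),
-- and then G Double reduces to Double as well.
proof :: G (G Int) :~: Double
proof = Refl
```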

Two Arrow-Like Kinds

G and Maybe both seem to have kind * -> *:

>>> :kind! G
G :: * -> *
>>> :kind! Maybe
Maybe :: * -> *

But that's misleading, because there are some circumstances in which a type of kind * -> * is expected but only Maybe will be accepted:

class MyMonad (m :: * -> *)

-- ok
instance MyMonad Maybe

-- error: The type family ‘G’ should have 1 argument,
--        but has been given none
instance MyMonad G

And there are other circumstances in which both G and Maybe will be accepted:

{-# LANGUAGE DataKinds, TypeOperators #-}

-- |
-- >>> :kind! FMap Maybe '[Int, Double]
-- FMap Maybe '[Int, Double] :: [*]
-- = '[Maybe Int, Maybe Double]
--
-- >>> :kind! FMap G '[Int, Double]
-- FMap G '[Int, Double] :: [*]
-- = '[Double, Double]
type family FMap (f :: * -> *) as where
  FMap f '[]       = '[]
  FMap f (a ': as) = (f a ': FMap f as)

So I prefer to pretend that there are two different arrow-like kind constructors:

  1. (-->) for type functions which can be applied to a type argument. G and Maybe both have kind * --> *. Just like it isn't possible to pattern-match on a type variable, it is not possible to pattern-match on a type expression whose head is a (-->); we must instead apply the type function and pattern-match on the result.
  2. (->) for type constructors which can be pattern-matched on. Maybe has kind * -> *, but G does not. Clearly, (->) is a subtype of (-->).

Now we can make sense of the previous examples. Instance resolution works by pattern-matching on types, so MyMonad expects a * -> *, not a * --> *. Since G has the wrong kind, it cannot be given a MyMonad instance. FMap, on the other hand, only needs to apply its f to various as, so it expects an * --> * such as G. Since * -> * is a subtype of * --> *, FMap can also be applied to Maybe.

edit:
Apparently :kind! is misleading here. Outside of :kind!, FMap accepts Maybe but not G. So the situation is simpler than I thought: (-->) is for type families, (->) is for type constructors, and those are completely different arrow kinds; there is no subtyping relationship between the two. There is no way to ask for an argument of kind * --> *, because if we try to pass an "unsaturated" argument with that kind, G for example, GHC will complain that G is missing arguments. So MyMonad and FMap both expect an argument of kind * -> *, not * --> *.
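As a sanity check of the first half of that claim, here is a compilable sketch (the name proof is mine) showing that FMap does accept Maybe, and reduces as advertised:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

type family FMap (f :: * -> *) as where
  FMap f '[]       = '[]
  FMap f (a ': as) = (f a ': FMap f as)

-- Maybe is a genuine type constructor of kind * -> *, so FMap
-- accepts it; passing the type family G instead is rejected by GHC.
proof :: FMap Maybe '[Int, Double] :~: '[Maybe Int, Maybe Double]
proof = Refl
```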

Unusual Type Families

Here are a few surprising, yet legal type families.

-- |
-- >>> :kind! H1 ('Just '())
-- H1 ('Just '()) :: * -> *
-- = Maybe
type family H1 (a :: Maybe ()) :: * -> * where
  H1 ('Just a) = Maybe
  H1 'Nothing  = IO

H1's input has kind Maybe (), not *, and its output has kind * -> *, not *. Note that it's really * -> *, not * --> *, so G is not a valid output. Overall, the kind of H1 is thus Maybe () --> * -> *.

-- |
-- >>> :kind! H2
-- H2 :: *
-- = Int
type family H2 where
  H2 = Int

H2 has no type parameters, so its kind is *, not * --> *. If it returned Maybe instead of Int, its kind would be * -> * instead. A type family's kind can be either * --> * or * -> * depending on how it's defined, so it's not as simple as "type constructors use (->), type families use (-->)".

Combining both ideas together:

type family J :: () -> Maybe () where
  J = 'Just

J's kind is () -> Maybe (), so it has to return a type constructor which accepts a type of kind () and produces a type of kind Maybe (). There are only two types which have the kind Maybe (): the type 'Nothing, and the type 'Just '(). 'Nothing has the wrong kind, since it doesn't accept a type of kind (), but 'Just is just right.
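We can confirm that J does reduce to 'Just when applied (a sketch; proof is my name for the witness):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

type family J :: () -> Maybe () where
  J = 'Just

-- J reduces to 'Just, so applying it to '() yields 'Just '().
proof :: J '() :~: 'Just '()
proof = Refl
```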

One last complication:

-- |
-- >>> :kind! H3 Int
-- H3 Int :: *
-- = H3 Int
type family H3 a where

H3 has no equations defining what happens when it is applied to a type argument. As a result, the type expression H3 Int remains stuck even though it doesn't contain any type variables.

Combining everything together:

type family Succ :: () -> () where

Succ pretends that it can produce a type constructor which accepts a type of kind () and produces a type of kind (). This is ridiculous! We know that '() is the only type of kind (), and like 'Nothing, it has the wrong kind because it doesn't accept a type of kind (). There are no valid types which Succ could return, so unsurprisingly it has no equations, and so Succ is a type expression which always remains stuck.

Ignoring Impossible Types

The type expression Succ '() is stuck, but well-kinded. It has kind ().

That's the kind which 'Just :: () -> Maybe () expects. Thus, 'Just (Succ '()) is also stuck and well-kinded. It has kind Maybe ().

That's the kind which our H1 :: Maybe () --> * -> * type family from earlier expects. Is H1 ('Just (Succ '())) stuck as well?

>>> :kind! H1 ('Just (Succ '()))
H1 ('Just (Succ '())) :: * -> *
= Maybe

Not stuck! That's because H1 ignores the part which is stuck. Its pattern is ('Just a), so it pattern-matches on the 'Just constructor, but it ignores its argument. If its pattern was ('Just '()) instead, it would have been stuck.
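Here is that claim as a compilable sketch (proof is my name): matching on the 'Just constructor succeeds even though its argument is stuck.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

type family Succ :: () -> ()

type family H1 (a :: Maybe ()) :: * -> * where
  H1 ('Just a) = Maybe
  H1 'Nothing  = IO

-- The pattern ('Just a) only inspects the 'Just constructor, so the
-- stuck argument (Succ '()) is bound to a and ignored.
proof :: H1 ('Just (Succ '())) :~: Maybe
proof = Refl
```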

Here comes the clever part: it is possible to write a type family which pattern-matches on the '() but ignores the stuck Succ part.

-- |
-- >>> :kind! IsSucc (Succ '())
-- IsSucc (Succ '()) :: Bool
-- = 'True
--
-- >>> :kind! IsSucc '()
-- IsSucc '() :: Bool
-- = 'False
type family IsSucc (a :: ()) :: Bool where
  IsSucc (succ '()) = 'True
  IsSucc '()        = 'False

The trick is to do like G and pattern-match on the shape, not the contents.
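Both doctest results above can also be stated as type-equality proofs (the names isSuccStuck and isSuccUnit are mine):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))

type family Succ :: () -> ()

type family IsSucc (a :: ()) :: Bool where
  IsSucc (succ '()) = 'True
  IsSucc '()        = 'False

-- The first equation matches on the shape (some head applied to '()),
-- so the stuck application Succ '() is recognized as 'True.
isSuccStuck :: IsSucc (Succ '()) :~: 'True
isSuccStuck = Refl

-- '() has no such shape, so it falls through to the second equation.
isSuccUnit :: IsSucc '() :~: 'False
isSuccUnit = Refl
```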

Computing with Impossible Types

It is also possible to distinguish the two inhabitants using a typeclass instead of a type family:

{-# language FlexibleInstances #-}
import Data.Proxy

-- |
-- >>> isSucc (Proxy :: Proxy (Succ '()))
-- True
-- >>> isSucc (Proxy :: Proxy '())
-- False
class IsSucc (a :: ()) where
  isSucc :: Proxy a -> Bool

instance IsSucc (succ '()) where
  isSucc _ = True

instance IsSucc '() where
  isSucc _ = False

The fact that this works is surprising, because () is supposed to be a closed kind with only one inhabitant, '(), and yet here we seemingly have a second inhabitant, Succ '(), which can be distinguished from '() even though it is stuck. And as you might surmise from its name, we can manufacture many more inhabitants: Succ (Succ '()), Succ (Succ (Succ '())), etc.

{-# language ScopedTypeVariables #-}

-- |
-- >>> countSuccs (Proxy :: Proxy '())
-- 0
-- >>> countSuccs (Proxy :: Proxy (Succ '()))
-- 1
-- >>> countSuccs (Proxy :: Proxy (Succ (Succ (Succ '()))))
-- 3
class CountSuccs (a :: ()) where
  countSuccs :: Proxy a -> Int

instance CountSuccs '() where
  countSuccs _ = 0

instance CountSuccs a => CountSuccs (succ a) where
  countSuccs _ = 1 + countSuccs (Proxy :: Proxy a)

Those examples show how to compute booleans and integers from a stuck type expression containing Succs. Using polymorphic recursion, it is also possible to go the other way, from an integer to a stuck type expression containing that many Succs:

{-# language RankNTypes #-}

-- |
-- >>> mkSuccs 42 countSuccs
-- 42
mkSuccs :: Int -> (forall a. CountSuccs a => Proxy a -> r) -> r
mkSuccs 0 cc = cc (Proxy :: Proxy '())
mkSuccs n cc = mkSuccs (n - 1) $ \(Proxy :: Proxy a)
            -> cc (Proxy :: Proxy (Succ a))

Since Haskell doesn't have dependent types, the output type is independent of the integer, so we cannot directly return the stuck type as an output. Instead, we use continuation-passing-style to accept a polymorphic continuation which produces an r regardless of which stuck type we instantiate it at.

When we use countSuccs as the continuation, this r is an integer, and the integer it computes is the number of Succs. So we start with n, we convert it to a stuck type containing n Succs, and then we count those Succs and get n back. This is a very simple example of a computation which relies on the existence of those seemingly-impossible non-'() inhabitants of () in order to compute its result: if there were only one type of kind (), the integer would be lost during the conversion to Proxy (a :: ()), and we would not be able to get that same integer back at the end.

Full Circle

Now that we have seen and understood each of the pieces individually, we are now ready to marvel at Kmett's creation:

{-# language PolyKinds #-}

import Data.Proxy

class KnownUnit (k :: ()) where
   reflect :: Proxy k -> Int

instance KnownUnit '() where
  reflect _ = 0

instance KnownUnit x => KnownUnit (f x) where
  reflect _ = 1 + reflect (Proxy :: Proxy x)

type family Succ :: k -> k

-- |
-- >>> reify 23 reflect
-- 23
reify :: Int -> (forall k. KnownUnit k => Proxy k -> r) -> r
reify 0 f = f (Proxy :: Proxy '())
reify n f = reify (n - 1) $ \(Proxy :: Proxy k)
         -> f (Proxy :: Proxy (Succ k))

Neat!

...or worrisome?

Accepting Impossible Types

We Haskellers like to use precise types in order to make illegal states unrepresentable. We accept, reluctantly, that ⊥ inhabits all types, so () doesn't really have exactly one possible value. But it does have exactly one possible total value, and if we write a function whose type signature expects a (), that's the value which this function expects to receive. And so, most functions don't document what their behaviour is on ⊥ inputs, and nobody complains, because they know they're not supposed to use ⊥ inputs.

DataKinds allows us to use precise kinds, and thus to make illegal types unrepresentable. We don't often think about them, but stuck type expressions also inhabit all kinds, so there isn't really only one type of kind (). Today we saw that some of those extra inhabitants are really weird. That's an interesting quirk of Haskell's type system, but ultimately, I don't think those weird inhabitants are any more worrisome than their less exotic cousins, the stuck type expressions which contain type variables. After all, there is only one total type of kind (), and when we write a type-level function (or an instance) which expects a (), that's the type we expect.

Saturday, October 14, 2017

Composing Declarations in Template Haskell

I have recently tried to use Template Haskell to generate both a datatype and lenses for accessing the fields of this datatype, and it was harder than it should have been. In this post, I will demonstrate the problem, I will pinpoint its cause, and I will propose a solution.

The Problem

Consider the following code. I'm using a simple, contrived example instead of a more realistic one because it will be easier to write Template Haskell code for generating this silly code than it would be to write Template Haskell code generating lenses and such.

class Default a where
  def :: a


data X = X
data Y = Y X

data DoubledX = DoubledX
data DoubledY = DoubledY X X

instance Default X        where def = X
instance Default Y        where def = Y def
instance Default DoubledX where def = DoubledX
instance Default DoubledY where def = DoubledY def def

Most of that code is boilerplate, and I would like to generate that boilerplate using Template Haskell. I hasten to note that Template Haskell is a tool of last resort, to be used only when none of Haskell's many other abstraction facilities would have sufficed. In this example, I would probably use some generics library to define a default implementation of def for any algebraic datatype:

{-# LANGUAGE DefaultSignatures, DeriveGeneric, FlexibleContexts #-}
import Generics.Eot

class Default a where
  def :: a
  
  default def :: (HasEot a, Default (Eot a)) => a
  def = fromEot def

instance Default () where
  def = ()

instance (Default a, Default b) => Default (a, b) where
  def = (def, def)

instance Default a => Default (Either a b) where
  def = Left def


instance Default X
instance Default Y
instance Default DoubledX
instance Default DoubledY

This works fine, but today I want to talk about one of Template Haskell's limitations, so let's write a Template Haskell implementation instead.

{-# LANGUAGE TemplateHaskell #-}
import Data.List
import Language.Haskell.TH

-- > data Foo = Bar Int String
-- > generateDefault ''Foo
--
-- generates
--
-- > instance Default Foo where def = Bar def def
generateDefault :: Name -> Q [Dec]
generateDefault name = do
  --      data Foo =             Bar   Int...
  TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) <- reify name
  
  -- Bar def...
  let expr = foldl' (\c _ -> [|$c def|]) (conE cname) cargs
  
  [d|instance Default $(conT name) where def = $expr|]


data X = X
data Y = Y X

data DoubledX = DoubledX
data DoubledY = DoubledY X X

generateDefault ''X
generateDefault ''Y
generateDefault ''DoubledX
generateDefault ''DoubledY

In addition to the Default instances, we can also generate the Doubled datatypes: versions of the original datatype which have two copies of each field:

-- > data Foo = Bar Int String
-- > generateDoubled ''Foo
--
-- generates
--
-- > data DoubledFoo = DoubledBar Int String Int String
generateDoubled :: Name -> Q [Dec]
generateDoubled name = do
  --    data Foo =             Bar   Int...
  TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) <- reify name
  
  let cons = [NormalC (doubledName cname) (cargs ++ cargs)]
  pure [DataD [] (doubledName name) [] Nothing cons []]

doubledName :: Name -> Name
doubledName = mkName . ("Doubled" ++) . nameBase


data X = X
data Y = Y X

generateDoubled ''X
generateDoubled ''Y

generateDefault ''X
generateDefault ''Y
generateDefault ''DoubledX
generateDefault ''DoubledY

So, we can write a Template Haskell function which generates a datatype, and we can write one which generates an instance for that datatype. But can we write one which generates both the datatype and its instance? Both of our functions are Q actions which produce a [Dec], so it looks like a no brainer: we can simply run both Q actions one after the other and concatenate the resulting lists.

generateBoth :: Name -> Q [Dec]
generateBoth name = (++) <$> generateDoubled name
                         <*> generateDefault (doubledName name)


data X = X

-- error: ‘DoubledX’ is not in scope at a reify
generateBoth ''X

Sadness, it does not work :(

The Cause

The reason DoubledX is not in scope when generateDefault calls reify ''DoubledX is that the DoubledX datatype is not created as a side-effect of the generateDoubled ''X action, but as a side-effect of splicing the resulting [Dec] into the surrounding code. When concatenating the two lists, this doesn't happen until after both lists have been generated, and so DoubledX cannot be "reified" while generating the second list.

I never thought I'd say something like this, but: this pure API was a mistake, and an API based on side-effects would be better! I'll qualify that statement though: since reify obtains information about datatypes (and other named entities) via a side-effect, namely reading from some global symbol table, I think there should be a corresponding action for adding new names to this table. As we have seen, with the current API in which names are added by returning a pure [Dec] value, declaration templates don't compose, so I think that API was a mistake.

I should note that there is, in fact, an action for adding new declarations as a side-effect:

addTopDecls :: [Dec] -> Q ()

Unfortunately, as of this writing, addTopDecls is unable to add datatypes:

import Language.Haskell.TH.Syntax

generateBoth :: Name -> Q [Dec]
generateBoth name = do
  decs <- generateDoubled name
  addTopDecls decs
  generateDefault (doubledName name)


data X = X

-- error: Only function, value, annotation, and foreign import 
--        declarations may be added with addTopDecl
-- error: ‘DoubledX’ is not in scope at a reify
generateBoth ''X

So the real solution would be to fix the implementation so it also supports datatypes, but until then, I have a simpler solution.

Simple Solution

We can't change declaration-splicing's Q [Dec] API, but if we create our own API, we can write an adapter which turns it into a Q [Dec].

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State
import Data.List
import Language.Haskell.TH

newtype LocalQ a = LocalQ { unLocalQ :: StateT [Dec] Q a }
  deriving (Functor, Applicative, Monad, MonadFail)

runLocalQ :: LocalQ () -> Q [Dec]
runLocalQ = flip execStateT [] . unLocalQ

The idea is that in addition to Q's global symbol table, LocalQ also has access to a local symbol table holding the declarations which have been added within the LocalQ computation.

addLocalDecls :: [Dec] -> LocalQ ()
addLocalDecls decls = LocalQ $ modify (++ decls)

reifyLocallyFirst :: Name -> LocalQ Info
reifyLocallyFirst name = do
  decls <- LocalQ get
  case find (hasName name) decls of
    Just dec -> pure $ TyConI dec
    Nothing  -> liftQ $ reify name

-- for this simple example, I'm only interested
-- in datatype declarations
hasName :: Name -> Dec -> Bool
hasName expected (DataD _ actual _ _ _ _) = actual == expected
hasName _        _                        = False

liftQ :: Q a -> LocalQ a
liftQ = LocalQ . lift

If we reimplement our declaration templates to use this new, better API...

locallyGenerateDefault :: Name -> LocalQ ()
locallyGenerateDefault name = do
  info <- reifyLocallyFirst name
  let TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) = info
  let expr = foldl' (\c _ -> [|$c def|]) (conE cname) cargs
  decls <- liftQ [d|instance Default $(conT name) where def = $expr|]
  addLocalDecls decls

locallyGenerateDoubled :: Name -> LocalQ ()
locallyGenerateDoubled name = do
  info <- reifyLocallyFirst name
  let TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) = info
  let cons = [NormalC (doubledName cname) (cargs ++ cargs)]
  addLocalDecls [DataD [] (doubledName name) [] Nothing cons []]

...then this time we can compose them just fine:

locallyGenerateBoth :: Name -> LocalQ ()
locallyGenerateBoth name = do
  locallyGenerateDoubled name
  locallyGenerateDefault (doubledName name)


data X = X
data Y = Y X

runLocalQ $ locallyGenerateDefault ''X
runLocalQ $ locallyGenerateDefault ''Y
runLocalQ $ locallyGenerateBoth ''X
runLocalQ $ locallyGenerateBoth ''Y

Happiness, it works! Now all that's left is to convince everybody to rewrite their declaration templates using LocalQ instead of Q, and we'll finally be able to reuse each other's code.

Final Solution

Okay, so that last part is probably not going to happen. If only there were a way to monkey-patch existing Q code so it would use reifyLocallyFirst instead of reify...

Well, here's a little-known fact about Q:

newtype Q a = Q { unQ :: forall m. Quasi m => m a }

That's right, Q isn't some magic Monad which only the compiler can execute! It's a concrete type, which we can examine and manipulate however we want. The finally-tagless encoding might be a bit intimidating, but in practice, a Q a value is basically an AST listing which Q actions need to be performed in order to produce an a. So we should be able to dive in and replace all the reify calls with reifyLocallyFirst calls, no problem.

The finally-tagless way to do that is to write a Quasi instance which instantiates reify with reifyLocallyFirst, and delegates all the other operations to some other Quasi instance:

instance Quasi LocalQ where
  qReify              = reifyLocallyFirst
  qAddTopDecls        = addLocalDecls
  
  -- Most of those aren't being exercised by my simple example,
  -- so I can afford to use 'undefined' for the trickier methods.
  qGetQ                   = undefined
  qPutQ                   = undefined
  qRecover                = undefined
  qAddDependentFile   x   = liftQ $ qAddDependentFile   x
  qAddModFinalizer    x   = liftQ $ qAddModFinalizer    x
  qExtsEnabled            = liftQ $ qExtsEnabled
  qIsExtEnabled       x   = liftQ $ qIsExtEnabled       x
  qLocation               = liftQ $ qLocation
  qLookupName         x y = liftQ $ qLookupName         x y
  qNewName            x   = liftQ $ qNewName            x
  qReifyAnnotations   x   = liftQ $ qReifyAnnotations   x
  qReifyConStrictness x   = liftQ $ qReifyConStrictness x
  qReifyFixity        x   = liftQ $ qReifyFixity        x
  qReifyInstances     x y = liftQ $ qReifyInstances     x y
  qReifyModule        x   = liftQ $ qReifyModule        x
  qReifyRoles         x   = liftQ $ qReifyRoles         x
  qReport             x y = liftQ $ qReport             x y
  qRunIO              x   = liftQ $ qRunIO              x

With this instance, I can now transform Q [Dec] templates into LocalQ () templates, and the transformed version will use reifyLocallyFirst instead of reify.

localize :: Q [Dec] -> LocalQ ()
localize declarationTemplate = do
  decls <- unQ declarationTemplate
  addLocalDecls decls

generateBoth :: Name -> Q [Dec]
generateBoth name = runLocalQ $ do
  localize $ generateDoubled name
  localize $ generateDefault (doubledName name)


data X = X
data Y = Y X

generateDefault ''X
generateDefault ''Y
generateBoth ''X
generateBoth ''Y

Notice that I am reusing the original generateDefault and generateDoubled implementations, those which caused the reify error when I first tried to implement generateBoth. I am not using the locallyGenerateDefault and locallyGenerateDoubled reimplementations from the previous section. This means that (with a more fleshed out implementation of LocalQ), I should be able to reuse any existing declaration template out there, including Control.Lens.TH.makeLenses! :D

Saturday, December 26, 2015

A whirlwind tour of Haskell, day 8

Day 8, functionally: pattern-matching

Today's puzzle involves parsing: we are given a string literal containing escape sequences, and we must interpret those escape sequences into the characters they represent. Or at least into the correct number of characters, whatever.

In my puzzle solutions so far, I have always pattern-matched at most one level deep, meaning that the pattern if any was always of the form (Constructor arg1 arg2 ...). Instead of binding those arguments to variables, I can also choose to nest more constructors. When pattern-matching on the list of characters of a string, nesting multiple patterns allows me to match on several characters at once, making it easy to recognize each of the three escape sequences specified in the puzzle using one pattern each.

import Data.Char

-- |
-- >>> print (parse "\"(\x20\\\"\x20)\"")
-- "( \" )"
parse :: String -> String
parse ('"':xs0) = go xs0
  where
    go []                  = error "unterminated string literal"
    go ('"'           :[]) = []
    go ('\\':'\\'     :xs) = '\\'      : go xs
    go ('\\':'"'      :xs) = '"'       : go xs
    go ('\\':'x':x1:x2:xs) = hex x1 x2 : go xs
    go (x             :xs) = x         : go xs
parse _ = error "not a string literal"

hex :: Char -> Char -> Char
hex c1 c2 = chr (16 * digitToInt c1 + digitToInt c2)

If we were writing an imperative implementation, we would probably have some kind of current index into the string which we would increment by 1, 2, or 4 characters depending on the pattern. It would be easy to accidentally increment that index by the wrong number of characters. By contrast, notice how the pattern-matching approach is less error-prone, as it naturally encourages us to give a name to the part of the string which isn't covered by each pattern.

Day 8.5, combinatorially: invertible-syntax

Once the second part is revealed, we can see that the complete puzzle consists of two related tasks: parsing a string literal into a string value, and pretty-printing a string value into a string literal. Whether imperative or functional, functions convert inputs to outputs, and so it would be natural to implement both directions as two separate functions. But why restrict ourselves to functions? While Haskell is best known for being a purely functional programming language, "functional" programming does not imply that everything is a function. In fact, we regularly construct other abstractions, new kinds of building blocks which can be composed into larger ones: the "combinator libraries" I briefly mentioned on Day 1. So let's put on our combinatorial glasses, step away from functions for a bit, and ask ourselves: What kind of building blocks would be useful for this problem? How can those building blocks be composed into larger building blocks?

The invertible-syntax library proposes the following approach: each building block shall know how to parse and to pretty print a value of a particular type, for example a character, and those building blocks can be combined into larger building blocks that know how to parse and pretty-print more complicated values, for example a string.

As usual, I'll write doctest comments above each definition to demonstrate what each building block does. Since I almost always want to demonstrate that a string s parses into a value x and that this same value x pretty-prints into the same string s, I'll use the following helper function to test both directions at once.

{-# LANGUAGE RankNTypes #-}
import Prelude hiding (print, pure, (<$>), (<*), (<*>), (*>))
import Text.Syntax
import Text.Syntax.Parser.Naive
import Text.Syntax.Printer.Naive

-- |
-- >>> test (pure 42) "" 42
-- True
-- >>> test (text "foo") "foo" ()
-- True
test :: Eq a => (forall f. Syntax f => f a) -> String -> a -> Bool
test block s x = (parsedValue == x) && (printedString == Just s)
  where
    parsedValue = head (parse block s)
    printedString = print block x

The two primitive building blocks I have demonstrated in the doctest comment are dual to each other: pure consumes no characters when parsing and always produces the same value, while text always consumes the same characters and doesn't produce anything useful as its value. They can be combined to describe "atoms", that is, a particular value which is represented using a particular string. Useful atoms for this puzzle include the backslash and double quote characters, which are represented using their respective escape sequences.

atom :: (Eq a, Syntax f) => String -> a -> f a
atom s x = pure x <* text s

-- |
-- >>> test backslash "\\\\" '\\'
-- True
backslash :: Syntax f => f Char
backslash = atom "\\\\" '\\'

-- |
-- >>> test doubleQuote "\\\"" '\"'
-- True
doubleQuote :: Syntax f => f Char
doubleQuote = atom "\\\"" '\"'

The hexadecimal escape sequence is a bit more complicated. I'll build it out of smaller atoms, one for each hexadecimal digit:

type Word4 = Int
type Word8 = Int

-- |
-- >>> test (hexDigits !! 12) "c" 12
-- True
hexDigits :: Syntax f => [f Word4]
hexDigits = [atom [intToDigit i] i | i <- [0..15]]

oneOf :: Syntax f => [f a] -> f a
oneOf = foldr (<|>) empty

-- |
-- >>> test hexDigit "a" 10
-- True
hexDigit :: Syntax f => f Word4
hexDigit = oneOf hexDigits

I've used the binary choice combinator (<|>) from invertible-syntax to construct an n-ary choice operator oneOf acting on lists, and I used it to combine my atoms into a more useful building block hexDigit which can parse and pretty-print any hex digit.

Next, I'd like to combine two of those into a full byte, but with only the invertible-syntax combinators, I can only combine them into a pair of nibbles (a number between 0 and 15):

-- |
-- >>> test nibbles "6f" (6, 15)
-- True
nibbles :: Syntax f => f (Word4, Word4)
nibbles = hexDigit <*> hexDigit

Given a pair of nibbles, I can easily compute the value of the corresponding byte. However, since our building blocks are going in both directions, we will also need a formula for splitting a byte into a pair of nibbles. This way, the parsing process will extract two nibbles and merge them into a byte, and the pretty-printing process will split the byte into two nibbles and will pretty-print each one.

import Control.Isomorphism.Partial.Prim
import Control.Isomorphism.Partial.Unsafe

iso :: (a -> b) -> (b -> a) -> Iso a b
iso f g = Iso (Just . f) (Just . g)

mergeNibbles :: Iso (Word4, Word4) Word8
mergeNibbles = iso (\(x,y) -> x * 16 + y) (`divMod` 16)

-- |
-- >>> test byte "6f" 111
-- True
byte :: Syntax f => f Word8
byte = mergeNibbles <$> nibbles

-- |
-- >>> test hexChar "\\x6f" 'o'
-- True
hexChar :: Syntax f => f Char
hexChar = iso chr ord <$> (text "\\x" *> byte)

invertible-syntax has a companion library called partial-isomorphisms for constructing such pairs of inverse functions. It is a combinator library in its own right, with many primitives and combinators ensuring that the resulting functions really are inverses of each other, but in this case I prefer to use the unsafe portion of the library, which allows me to specify an arbitrary pair of functions and to tell the library to trust me that those are inverses of each other.

Now that we have covered all the escape sequences, the last kind of character which could occur in a string is a non-escaped character.

-- |
-- >>> test otherChar "o" 'o'
-- True
otherChar :: Syntax f => f Char
otherChar = subset (`notElem` ['\\','\"']) <$> token

subset builds a partial isomorphism by taking the identity isomorphism which doesn't change the value in either direction, and restricting its domain to the values satisfying a predicate. In this case, we don't want backslash and double quote characters to be included in the domain of non-escaped characters, because those characters do need to be escaped.

So far I've been talking about functions going in two directions as if the two functions were true inverses of each other, but in fact as the name implies, partial isomorphisms are only partial inverses. The reason we don't need the functions to be exact inverses is that a given value may have several different, equally valid textual representations. For example, the character 'o' can be represented either as "o" or as "\x6f", and both encodings are valid. But when we convert those two representations to the single value 'o' and then back to a string, the function which converts 'o' into a string can only choose one answer! Suppose it chooses "o": the other value, "\x6f", demonstrates that the functions are not inverses, because applying one function to "\x6f" yields 'o' but applying the other function to 'o' does not yield back "\x6f".
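A toy pair of encoding and decoding functions makes the asymmetry visible. These are simplified stand-ins for the char syntax above, not library code:

```haskell
-- both textual representations decode to the same character...
decodeChar :: String -> Maybe Char
decodeChar "o"     = Just 'o'
decodeChar "\\x6f" = Just 'o'   -- the four characters \, x, 6, f
decodeChar _       = Nothing

-- ...but the encoder has to commit to a single representation
encodeChar :: Char -> Maybe String
encodeChar 'o' = Just "o"
encodeChar _   = Nothing
```

Here decodeChar maps both "o" and "\x6f" to 'o', yet encodeChar maps 'o' back to "o" only: the two functions are merely partial inverses.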

Now that we have building blocks for every kind of character, we can combine them into one more complicated building block which knows how to parse and pretty-print any character.

-- |
-- >>> test char "o" 'o'
-- True
-- >>> head (parse char "\\x6f")
-- 'o'
-- >>> print char 'o'
-- Just "o"
char :: Syntax f => f Char
char = oneOf [backslash, doubleQuote, otherChar, hexChar]

While parsing, escaped backslashes and double quotes will be parsed into backslashes and double quotes, non-escaped characters will be parsed into themselves, and escaped hexadecimal codes will be parsed into the corresponding ASCII character. When pretty-printing, however, no characters will be printed in hexadecimal, because every character is either a backslash, a double quote, or in the domain of otherChar. So the pretty printer will never fall back to hexChar, and 'o' will be printed as "o", not "\x6f". We could of course have chosen otherwise by ordering the list of building blocks differently.

Now that we can handle characters, we're finally ready to describe how to handle an entire string.

-- |
-- >>> test string "\"( \\\" )\"" "( \" )"
-- True
-- >>> head (parse string "\"(\\x20\\\"\\x20)\"")
-- "( \" )"
-- >>> print string "( \" )"
-- Just "\"( \\\" )\""
string :: Syntax f => f String
string = between (text "\"") (text "\"") (many char)

This approach ended up requiring much more code than simply writing separate functions for parsing and pretty-printing strings. It is, however, much less error-prone: if we end up changing the way in which strings can be represented, for example by supporting more kinds of escape sequences, we will only have to modify the code once. In contrast, with two separate functions, it would be easy to forget to update the other function, or to accidentally modify the functions in a way which causes them to no longer be partial inverses.


Friday, December 25, 2015

A whirlwind tour of Haskell, day 7

Day 7, functionally: bit fiddling, Map, and recursion

Today's puzzle is quite interesting, and will allow me to demonstrate several Haskell techniques! We're given the description of an electronic circuit consisting of a bunch of constants and bitwise operations, and we need to simulate it. The bitwise operations are fairly standard, here's what they look like in Haskell:

import Data.Word
import Data.Bits

opAND :: Word16 -> Word16 -> Word16
opAND = (.&.)

opOR :: Word16 -> Word16 -> Word16
opOR = (.|.)

opLSHIFT :: Word16 -> Int -> Word16
opLSHIFT = shiftL

opRSHIFT :: Word16 -> Int -> Word16
opRSHIFT = shiftR

opNOT :: Word16 -> Word16
opNOT = complement

Like before, I'll skip the parsing details and I'll represent the operations using a precise type.

type WireName = String

data Expr
  = Number Word16
  | Wire   WireName
  | AND    Expr Expr
  | OR     Expr Expr
  | LSHIFT Expr Int
  | RSHIFT Expr Int
  | NOT    Expr
  deriving (Show, Eq)

-- (wire, expression producing its value)
type Assignment = (WireName, Expr)

Since numbers and wires are allowed both at the top-level (for example 123 -> a and w -> a) and nested inside a single operation (for example 1 OR b -> a), I found it slightly easier to allow operations to be nested into arbitrarily deep expressions. This way, I don't have to repeat my number and wire logic in two separate places.
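Concretely, the three example lines above are encoded as Assignment values like this:

```haskell
-- 123 -> a, w -> a, and 1 OR b -> a, using the Expr type defined above
ex1, ex2, ex3 :: Assignment
ex1 = ("a", Number 123)
ex2 = ("a", Wire "w")
ex3 = ("a", OR (Number 1) (Wire "b"))
```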

Anyway, the tricky part of this puzzle is that those expressions aren't meant to be evaluated in the order in which they are given, as some of the wire names are used as input much earlier than the expressions which give a value to those wires. Instead, we should take our list of assignments and store them in a Map, so we'll be able to easily get the expression which produces the value of each wire.

import Data.Map (Map)
import qualified Data.Map as Map

type Env = Map WireName Expr

mkEnv :: [Assignment] -> Env
mkEnv = Map.fromList

lookupEnv :: Env -> WireName -> Maybe Expr
lookupEnv = flip Map.lookup

-- throws an exception if the name is not found
lookupEnv' :: Env -> WireName -> Expr
lookupEnv' = (Map.!)

Now that I can lookup the expression for each wire, I can evaluate expressions using a recursive solution: if the expression is a number (the base case), I'm done, if the expression depends on another wire, I lookup its expression and evaluate it using a recursive call, and if it's an operation, I recursively evaluate its operands and perform the operation. Easy!

eval :: Env -> Expr -> Word16
eval _   (Number   n) = n
eval env (Wire  name) = eval env (lookupEnv' env name)
eval env (AND    x y) = opAND    (eval env x) (eval env y)
eval env (OR     x y) = opOR     (eval env x) (eval env y)
eval env (LSHIFT x i) = opLSHIFT (eval env x) i
eval env (RSHIFT x i) = opRSHIFT (eval env x) i
eval env (NOT    x  ) = opNOT    (eval env x)

Using this solution, we can easily evaluate each of the wires given in the sample circuit:

-- |
-- >>> :{
-- let assignments = [ ("d", AND (Wire "x") (Wire "y"))
--                   , ("e", OR  (Wire "x") (Wire "y"))
--                   , ("f", LSHIFT (Wire "x") 2)
--                   , ("g", RSHIFT (Wire "y") 2)
--                   , ("h", NOT (Wire "x"))
--                   , ("i", NOT (Wire "y"))
--                   , ("x", Number 123)
--                   , ("y", Number 456)
--                   ]
--     wireNames   = map fst assignments
--     wireValues  = map (day7 assignments) wireNames
-- in forM_ (zip wireNames wireValues) print
-- :}
-- ("d",72)
-- ("e",507)
-- ("f",492)
-- ("g",114)
-- ("h",65412)
-- ("i",65079)
-- ("x",123)
-- ("y",456)
day7 :: [Assignment] -> WireName -> Word16
day7 assignments name = eval env (lookupEnv' env name)
  where
    env :: Env
    env = mkEnv assignments

However, if we try it on the puzzle's larger circuit, the algorithm gets stuck! What happens is that many of the wires are used more than once, and yet are re-evaluated each time they are encountered. Consider the expression b AND c -> a. If operands b and c both end up depending on the same wire d, evaluating a will end up evaluating d's expression twice, and if a has to be evaluated twice, d will end up being evaluated four times. Since the number of evaluations doubles each time a new element is added to such a chain, it doesn't take many elements for the number of evaluations to become astronomical, causing our algorithm to get stuck re-evaluating the same expressions over and over.
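A toy model shows how quickly this blows up. With a chain in which each level uses the previous level twice, a naive traversal (like our eval) doubles its number of visits to the bottom at every level:

```haskell
-- a stand-in for a circuit where each wire is used twice by the next one
data Chain = Bottom | Level Chain Chain

chain :: Int -> Chain
chain 0 = Bottom
chain n = let c = chain (n - 1) in Level c c

-- counts how many times a naive traversal reaches the bottom of the chain
bottomVisits :: Chain -> Integer
bottomVisits Bottom      = 1
bottomVisits (Level x y) = bottomVisits x + bottomVisits y
```

bottomVisits (chain 10) is already 1024, and a real puzzle input has far more than ten levels.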

Day 7, imperatively: caching

Clearly, we should cache the result of each wire in order to avoid evaluating the same expression more than once. Here is an imperative implementation of that idea: keep a map of the results computed so far, and each time we encounter a wire name, look inside the map to see if we already have a result for this wire. If we do, use it, otherwise recursively compute the answer and store it in the map.

import Control.Monad.State

type Cache = Map WireName Word16

lookupCache :: Cache -> WireName -> Maybe Word16
lookupCache = flip Map.lookup

lookupCache' :: Cache -> WireName -> Word16
lookupCache' = (Map.!)

eval :: Env -> Expr -> State Cache Word16
eval _   (Number   n) = return n
eval env (Wire  name) = do
    cache <- get
    case lookupCache cache name of
      Just x  -> return x
      Nothing -> do
          x <- eval env (lookupEnv' env name)
          modify (Map.insert name x)
          return x
eval env (AND    x y) = opAND    <$> eval env x <*> eval env y
eval env (OR     x y) = opOR     <$> eval env x <*> eval env y
eval env (LSHIFT x i) = opLSHIFT <$> eval env x <*> pure i
eval env (RSHIFT x i) = opRSHIFT <$> eval env x <*> pure i
eval env (NOT    x  ) = opNOT    <$> eval env x

Day 7, lazily: laziness-based caching and MemoTrie

Haskell has another way to cache results which is built into the language: its implementation of laziness.

In the previous days, I pointed out a few situations in which laziness improves the performance by skipping over the part of a pure computation whose result does not end up being used. If the result does end up being used, the computation must be performed. If the result ends up being used multiple times, the computation must also be performed, but only once, not multiple times. The way in which this is implemented is that the memory in which the result would normally be stored can point to either a thunk describing which computation needs to be forced to obtain the result, or it can point to the result itself if it has already been computed. In other words, laziness caches the results of computations so that each computation only has to be performed at most once.
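Here is a tiny demonstration of that built-in caching, using Debug.Trace (which is impure, but handy for observing evaluation order):

```haskell
import Debug.Trace

-- the trace message is printed only once,
-- even though the value is used twice
expensive :: Int
expensive = trace "computing!" (sum [1..1000])

usedTwice :: Int
usedTwice = expensive + expensive
```

Forcing usedTwice prints "computing!" a single time: the second use finds the already-computed result instead of redoing the sum.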

Another aspect of laziness is that it allows us to define infinite data structures, by only constructing the portion of the data which ends up being needed, and also circular data structures, because by the time the structure loops back to the beginning, the beginning's thunk has already been expanded into a value.

-- |
-- >>> take 10 circular
-- [1,2,3,1,2,3,1,2,3,1]
circular :: [Int]
circular = 1 : 2 : 3 : circular

Here is a trickier example in which instead of directly looping back to the beginning, we examine the portion of the data structure which has been constructed so far and we produce a modified version.

-- |
-- >>> take 10 nats
-- [1,2,3,4,5,6,7,8,9,10]
nats :: [Int]
nats = 1 : map (+1) nats

Those two examples are both referring to earlier parts of the list, but it's also possible to refer to a later part of the list.

-- |
-- >>> backwards
-- [7,8,9,10]
backwards :: [Int]
backwards = [ (backwards !! 1) - 1
            , (backwards !! 2) - 1
            , (backwards !! 3) - 1
            , 10
            ]

Here is what happens while backwards is being printed. First, ((backwards !! 1) - 1)'s thunk is forced. This causes (backwards !! 2) - 1 to be forced, which in turn causes (backwards !! 3) - 1 to be forced as well. This yields the result 9, so the (backwards !! 3) - 1 thunk is replaced with the value 9, and similarly for 8 and 7. The first element 7 can finally be printed, and we move on to print the next elements, which have already been evaluated to 8, 9 and 10. As you can see, each thunk is only evaluated once, and if there are dependencies between the thunks, the dependencies are evaluated first.

I hope it is now clear how laziness can help us with today's puzzle: instead of writing an imperative computation to manually cache the results and traverse the dependencies, we can simply construct a data structure whose elements are defined in terms of each other. This way, evaluating any element of the data structure will automatically force its dependencies while caching the results.

day7 :: [Assignment] -> WireName -> Word16
day7 assignments name0 = lookupCache' cache name0
  where
    cache :: Cache
    cache = Map.fromList (map (second eval) assignments)
    
    eval :: Expr -> Word16
    eval (Number   n) = n
    eval (Wire  name) = lookupCache' cache name
    eval (AND    x y) = opAND    (eval x) (eval y)
    eval (OR     x y) = opOR     (eval x) (eval y)
    eval (LSHIFT x i) = opLSHIFT (eval x) i
    eval (RSHIFT x i) = opRSHIFT (eval x) i
    eval (NOT    x  ) = opNOT    (eval x)

Here I am constructing cache, a map from each wire name to its computed value, by applying eval to the corresponding expressions. eval is itself defined in terms of the cache, in which it looks up the values computed for the dependencies. We didn't have to write any special code to do so, but this cache lookup will either return immediately with the result if it has already been computed, or it will find and force a thunk, calling eval on the dependency and caching the result.

Another way to achieve the same result is to use a memoization library. Here I use MemoTrie to memoize the result of looking up and evaluating the expression corresponding to a wire name. This way, when evalExpr calls evalWire on a dependency, it will use the cached value if there is one. The implementation uses laziness under the hood.

import Data.MemoTrie

day7 :: [Assignment] -> WireName -> Word16
day7 assignments name0 = evalWire name0
  where
    env :: Env
    env = mkEnv assignments
    
    evalWire :: WireName -> Word16
    evalWire = memo (evalExpr . lookupEnv' env)
    
    evalExpr :: Expr -> Word16
    evalExpr (Number   n) = n
    evalExpr (Wire  name) = evalWire name
    evalExpr (AND    x y) = opAND    (evalExpr x) (evalExpr y)
    evalExpr (OR     x y) = opOR     (evalExpr x) (evalExpr y)
    evalExpr (LSHIFT x i) = opLSHIFT (evalExpr x) i
    evalExpr (RSHIFT x i) = opRSHIFT (evalExpr x) i
    evalExpr (NOT    x  ) = opNOT    (evalExpr x)

If you'd like to read more about using laziness for caching, I have written a post explaining when caches do and do not get reused between calls.

Day 7, meta-programmatically: Template Haskell

I know this post is getting quite long already, but I'd like to point out one last way in which the puzzle could be solved. Consider the following Haskell code:

d = opAND x y
e = opOR x y
f = opLSHIFT x 2
g = opRSHIFT y 2
h = opNOT x
i = opNOT y
x = 123
y = 456

Even though some of the above values are used before being defined, the thunk forcing mechanism makes sure that evaluating any of those values will first evaluate and cache its dependencies. So if we could generate similar Haskell code from the circuit description, we could efficiently obtain the value of any wire.

One way of generating code which is available in any language is to construct a string containing the code we want:

import Text.Printf

validName :: String -> String
validName "do" = "do'"
validName "id" = "id'"
validName "if" = "if'"
validName "in" = "in'"
validName name = name

exprS :: Expr -> String
exprS (Number   n) = show n
exprS (Wire  name) = validName name
exprS (AND    x y) = printf "opAND    %s %s" (exprS x) (exprS y)
exprS (OR     x y) = printf "opOR     %s %s" (exprS x) (exprS y)
exprS (LSHIFT x i) = printf "opLSHIFT %s %d" (exprS x) i
exprS (RSHIFT x i) = printf "opRSHIFT %s %d" (exprS x) i
exprS (NOT    x  ) = printf "opNOT    %s"    (exprS x)

-- |
-- >>> assignmentS ("g", RSHIFT (Wire "y") 2)
-- "g = opRSHIFT y 2"
assignmentS :: Assignment -> String
assignmentS (name, e) = printf "%s = %s" name (exprS e)

This works, but generating strings is quite error-prone, in the sense that the compiler is not going to prevent you from generating code which doesn't compile.

A more principled approach is to use Template Haskell. Instead of manipulating strings, we manipulate quasiquotations which fail at compile time if the quoted code does not parse.

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

validName :: String -> Name
validName "do" = mkName "do'"
validName "id" = mkName "id'"
validName "if" = mkName "if'"
validName "in" = mkName "in'"
validName name = mkName name

exprE :: Expr -> Q Exp
exprE (Number   n) = [| n |]
exprE (Wire  name) = varE (validName name)
exprE (AND    x y) = [| opAND    $(exprE x) $(exprE y) |]
exprE (OR     x y) = [| opOR     $(exprE x) $(exprE y) |]
exprE (LSHIFT x i) = [| opLSHIFT $(exprE x) i          |]
exprE (RSHIFT x i) = [| opRSHIFT $(exprE x) i          |]
exprE (NOT    x  ) = [| opNOT    $(exprE x)            |]

-- |
-- >>> let assignment = ("g", RSHIFT (Wire "y") 2)
-- >>> decls <- runQ $ assignmentD assignment
-- >>> ppr decls
-- g = opRSHIFT y 2
assignmentD :: Assignment -> Q [Dec]
assignmentD (name, e) = [d| $(var) = $(val) |]
  where
    var :: Q Pat
    var = varP (validName name)
    
    val :: Q Exp
    val = exprE e

For extra safety, there is also a variant of [| ... |] which detects type errors at compile time:

import Language.Haskell.TH.Syntax

exprE :: Expr -> Q (TExp Word16)
exprE (Number   n) = [|| n ||]
exprE (Wire  name) = unsafeTExpCoerce $ varE (validName name)
exprE (AND    x y) = [|| opAND    $$(exprE x) $$(exprE y) ||]
exprE (OR     x y) = [|| opOR     $$(exprE x) $$(exprE y) ||]
exprE (LSHIFT x i) = [|| opLSHIFT $$(exprE x) i           ||]
exprE (RSHIFT x i) = [|| opRSHIFT $$(exprE x) i           ||]
exprE (NOT    x  ) = [|| opNOT    $$(exprE x)             ||]

Since we're generating variable names dynamically from the wire names, Template Haskell cannot know at compile time whether those variables have the correct type, so we have to use unsafeTExpCoerce to tell it to trust us that this piece will work out.

To complete the puzzle, I write a small wrapper which loads the circuit description from a file and concatenates all the generated declarations:

circuit :: FilePath -> Q [Dec]
circuit filename = do
    Right assignments <- runIO
                       $ parseFromFile (many assignment) filename
    decls <- forM assignments assignmentD
    return (concat decls)

I can finally load this circuit into my main program (for technical reasons, the definitions above must be in a different file from the code below) and access the variables it defines.

-- defines "a" and many other variables
$(circuit "input.txt")

day7 :: Word16
day7 = a

Sunday, December 20, 2015

A whirlwind tour of Haskell, day 6

Day 6: precise types, arrays, and ST

This puzzle asks us to parse and execute a sequence of instructions. The instructions are represented as text strings, so in a dynamic language I would probably use regular expressions to determine which instruction I'm looking at, using capturing groups like the following to extract the instruction's parameters.

/^command1 ([0-9]+) ([0-9]+)$/

In Haskell, however, I prefer to define a data type describing the set of all possible instructions, like this:

data Rectangle = Rectangle (V2 Integer) (V2 Integer)
  deriving (Show, Eq)

data Operation = TurnOn | TurnOff | Toggle
  deriving (Show, Eq)

data Instruction = Instruction Operation Rectangle
  deriving (Show, Eq)

The above says that an Instruction consists of an Operation, which can either be TurnOn, TurnOff or Toggle, followed by a Rectangle consisting of two 2D coordinates (two opposite corners). I prefer this representation over a simple string because it is more precise, in the sense that the set of values of this type more closely resembles the set of valid operations than the set of strings does. There are tons of strings such as "flubberdoodle" which do not represent a valid instruction at all, while with this precise representation, the worst I can do is a value like Instruction TurnOn (Rectangle (V2 0 1000) (V2 50 1050)), which at least looks like an instruction, and is only invalid because those coordinates are outside the bounds of the grid specified by the puzzle.

The advantage of having a precise representation is in making sure that we handle all the possibilities. If I was implementing the puzzle's instructions using a function taking a string as input, I'd have to write something like this:

runInstruction :: String -> ...
runInstruction s
  | "turn on"  `isPrefixOf` s = ...
  | "turn off" `isPrefixOf` s = ...
  | "toggle"   `isPrefixOf` s = ...
  | otherwise                 = error "unrecognized instruction"

And then if a new instruction was added, it would be easy to forget to add a handler for it in runInstruction, especially if there were many other functions to update throughout the codebase. By using a more precise algebraic type, I can write runInstruction like this instead:

runInstruction :: Instruction -> ...
runInstruction (Instruction TurnOn  ...) = ...
runInstruction (Instruction TurnOff ...) = ...
runInstruction (Instruction Toggle  ...) = ...

And then if a new instruction gets added, I'll get a warning at compile-time instead of an "unrecognized instruction" error at runtime. Assuming you're compiling with warnings enabled, of course, which isn't the default with the .cabal file generated by cabal init. I always add the line ghc-options: -W -Wall to the end of the file it generates.

Anyway, the instructions are about flipping bits in a bit array, so let's talk about Haskell arrays. The array library defines many different kinds of arrays, and in this implementation I'll be using two different kinds, UArray (V2 Integer) Bool and STUArray s (V2 Integer) Bool.

import Data.Array.ST
import Data.Array.Unboxed

type Index = V2 Integer
type Grid     =   UArray   Index Bool
type STGrid s = STUArray s Index Bool

The Bool means that each cell of the array will contain a boolean, and the V2 Integer means that we'll be indexing into the array using 2D coordinates, so this is a 2D array. The U means "unboxed". Unboxed arrays are only defined for value types which can be tightly-packed; in our case for example, booleans can be efficiently packed as a sequence of bits.

The ST in STUArray comes from yet another monadic EDSL, called ST. If you were disappointed in Day 1 when I said that the State EDSL was limited to a single stateful variable, you'll be happy to hear that ST is a variant which allows you to allocate an arbitrary number of stateful references and to modify them in-place. This is also the EDSL we'll need to use in order to modify our array in-place.
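Before using ST with arrays, here it is in its simplest form, with a couple of mutable references updated in place. This is a small sketch to show the EDSL, not code from the puzzle:

```haskell
import Control.Monad.ST
import Data.STRef

-- two independent mutable variables, which plain State cannot express
sumAndCount :: [Int] -> (Int, Int)
sumAndCount xs = runST $ do
    total <- newSTRef 0
    count <- newSTRef 0
    mapM_ (\x -> modifySTRef' total (+ x) >> modifySTRef' count (+ 1)) xs
    (,) <$> readSTRef total <*> readSTRef count
```

sumAndCount [5,10,15] evaluates to (30,3), and runST guarantees that none of the mutation can be observed from outside the computation.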

The three operations requested by the puzzle are quite straightforward to implement: we simply read and write booleans to the array at the given index. The unboxed array implementation takes care of translating those boolean reads and writes into the corresponding bit-fiddling shifts and masks which will touch the correct bits.

import Control.Monad.ST

turnOn :: STGrid s -> Index -> ST s ()
turnOn g i = writeArray g i True

turnOff :: STGrid s -> Index -> ST s ()
turnOff g i = writeArray g i False

toggle :: STGrid s -> Index -> ST s ()
toggle g i = do
    b <- readArray g i
    writeArray g i (not b)

Each puzzle instruction requires executing one of our three operations on all the cells inside a particular rectangular area. This is straightforward as well, using the range method. This method comes from a type class named Ix, short for "index", because a type needs to have an Ix instance in order for its values to be used as indices into an array. The Ix instance for Integer, used for 1D arrays, has a range method which simply lists all the integers between two bounds. The Ix instance for V2 Integer, used for 2D arrays, has a range method which enumerates all the 2D coordinates between the corners of a rectangle, which is exactly what we want.
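The tuple instances of Ix behave the same way, so the behaviour of range is easy to see without depending on V2:

```haskell
import Data.Ix (range)

-- every coordinate in the rectangle spanned by the two corners, row by row
cells :: [(Int, Int)]
cells = range ((0, 0), (1, 2))
-- [(0,0),(0,1),(0,2),(1,0),(1,1),(1,2)]
```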

Since ST implements the Monad type class, we can use our trusty forM_ to execute our chosen instruction on every index in the resulting list of indices.

listIndices :: Rectangle -> [Index]
listIndices (Rectangle c1 c2) = range (c1,c2)

runInstruction :: STGrid s -> Instruction -> ST s ()
runInstruction g = go
  where
    go (Instruction TurnOn  r) = forM_ (listIndices r) (turnOn  g)
    go (Instruction TurnOff r) = forM_ (listIndices r) (turnOff g)
    go (Instruction Toggle  r) = forM_ (listIndices r) (toggle  g)

Now that we can run a single instruction, we can run the entire list of instructions in order to obtain the final light pattern. To do this, I allocate a new array of the dimensions described in the puzzle, I execute each of the instructions, and I return a frozen copy of the STUArray. This converts the STUArray, which cannot outlive the ST computation in which it was created, into an immutable UArray, which can live outside of ST but can no longer be modified in-place.

runInstructions :: [Instruction] -> Grid
runInstructions instructions = runST $ do
    g <- newArray (V2 0 0, V2 999 999) False
    forM_ instructions (runInstruction g)
    freeze g

Finally, to solve the puzzle, I parse the instruction strings into a list of precisely-typed Instructions, I run those instructions to get a light pattern, I enumerate the bits, keep those which are True, and count how many there were.

day6 = parseInstructions
   >>> runInstructions
   >>> elems
   >>> filter id
   >>> length

I've omitted the code for parsing the instructions, it's an important topic but I'll keep it for later. I've also omitted Day 6.5 because it is too similar to Day 6.


Saturday, December 19, 2015

A whirlwind tour of Haskell, day 5

Day 5: infix notation and list comprehensions

This puzzle lists a bunch of nonsense conditions on the characters of a string, and we must check whether the string satisfies all of them. This should be quite easy in any language, so it doesn't give me the opportunity to demonstrate another powerful Haskell library. Instead, I think I will try to make my implementation as readable as possible:

-- |
-- >>> numberOfTruePropositions [2+2 == 4, 3+3 == 10, 2*5 == 10]
-- 2
numberOfTruePropositions :: [Bool] -> Int
numberOfTruePropositions = filter (== True)
                       >>> length

-- |
-- >>> 2 `orMoreAreTrue` [True,True,False]
-- True
-- >>> 3 `orMoreAreTrue` [True,True,False]
-- False
orMoreAreTrue :: Int -> [Bool] -> Bool
orMoreAreTrue n = numberOfTruePropositions
              >>> (>= n)

-- |
-- >>> 2 `atMostAreTrue` [True,True,False]
-- True
-- >>> 1 `atMostAreTrue` [True,True,False]
-- False
atMostAreTrue :: Int -> [Bool] -> Bool
atMostAreTrue n = numberOfTruePropositions
              >>> (<= n)

-- |
-- >>> isVowel 'a'
-- True
-- >>> isVowel 'b'
-- False
isVowel :: Char -> Bool
isVowel c = c `elem` "aeiou"

-- |
-- >>> doubledLetter 'a'
-- "aa"
doubledLetter :: Char -> String
doubledLetter c = printf "%c%c" c c

-- |
-- >>> atLeastThreeLettersAreVowels "aaa"
-- True
-- >>> atLeastThreeLettersAreVowels "abc"
-- False
atLeastThreeLettersAreVowels :: String -> Bool
atLeastThreeLettersAreVowels s =
    3 `orMoreAreTrue` [ isVowel c
                      | c <- s
                      ]

-- |
-- >>> hasSomeDoubledLetter "aaa"
-- True
-- >>> hasSomeDoubledLetter "aba"
-- False
hasSomeDoubledLetter :: String -> Bool
hasSomeDoubledLetter s =
    1 `orMoreAreTrue` [ doubledLetter c `isInfixOf` s
                      | c <- ['a'..'z']
                      ]

-- |
-- >>> hasNoForbiddenSubstrings "aaa"
-- True
-- >>> hasNoForbiddenSubstrings "abc"
-- False
hasNoForbiddenSubstrings :: String -> Bool
hasNoForbiddenSubstrings s =
    0 `atMostAreTrue` [ cc `isInfixOf` s
                      | cc <- ["ab", "cd", "pq", "xy"]
                      ]

-- |
-- >>> day5 "ugknbfddgicrmopn"
-- True
-- >>> day5 "aaa"
-- True
-- >>> day5 "jchzalrnumimnmhp"
-- False
-- >>> day5 "haegwjzuvuyypxyu"
-- False
-- >>> day5 "dvszwmarrgswjxmb"
-- False
day5 s = atLeastThreeLettersAreVowels s
      && hasSomeDoubledLetter s
      && hasNoForbiddenSubstrings s

To make the implementation readable, I defined many helper functions with long meaningful names, I gave examples clarifying their meaning, I used backticks to turn a binary function into an infix operator, making the code closer to an English sentence, and I used list comprehension syntax ([... | x <- [1,2,3]]) to define a list of similar expressions. Since the above is supposed to be extra readable, I guess I won't bother explaining it further.

Instead, I'll give a few more examples of list comprehensions. Here's how to try every combination of x and y:

> [x + y | x <- [100,2000]
         , y <- [4,5,6]
         ]
[104,105,106,2004,2005,2006]

Here's an easy way to generate all the triples of distinct numbers between 1 and 4:

>  [(x,y,z) | x <- [1..4]
            , y <- [x+1..4]
            , z <- [y+1..4]]
[(1,2,3),(1,2,4),(1,3,4),(2,3,4)]

And here's a quick way to filter the results:

> [(x,y,z) | x <- [1..10]
           , y <- [x..10]
           , z <- [y..10]
           , x*x + y*y == z*z
           ]
[(3,4,5),(6,8,10)]

Day 5.5: non-determinism

This version of the puzzle is much harder, because instead of having to check whether particular substrings do or do not appear in the input, we have to find substrings which repeat later in the string. This is harder because whether the substring occurs later in the string depends both on the chosen substring and on the position in the original string in which the first copy of the substring appears.

The easiest way to solve this puzzle is probably to use back references in a Perl-compatible regular expression. That's not very Haskell-specific, so instead, here's an approach which is more general, in the sense that it can find patterns even if the question has nothing to do with strings: non-determinism.

-- |
-- >>> repeatingPairs "aabbaabb"
-- [('a','a'),('a','b'),('b','b')]
repeatingPairs :: String -> [(Char,Char)]
repeatingPairs s = do
    i <- [0 .. length s - 2]
    (c1:c2:s') <- return (drop i s)
    let pair = printf "%c%c" c1 c2
    guard (pair `isInfixOf` s')
    return (c1, c2)

Here I am non-deterministically choosing the index i at which the desired pair of characters begins, and then I use guard to retro-actively eliminate the choices of i for which pair does not occur in the remainder of the string. You can think of it as trying all the possibilities at once, like a non-deterministic finite automaton. Except computers are deterministic, so while we are describing a non-deterministic computation, that computation is implemented by trying one possibility at a time and backtracking whenever a guard condition fails.
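This non-deterministic style is exactly the list monad at work; the Pythagorean-triple comprehension from earlier can be rewritten in it, with guard playing the role of the filtering condition:

```haskell
import Control.Monad (guard)

-- the x*x + y*y == z*z comprehension, written with the list monad
pythagorean :: [(Int, Int, Int)]
pythagorean = do
    x <- [1..10]
    y <- [x..10]
    z <- [y..10]
    guard (x*x + y*y == z*z)
    return (x, y, z)
-- [(3,4,5),(6,8,10)]
```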

Here's another example in which guard eliminates the paths in which the chosen letter doesn't reoccur two characters later.

-- |
-- >>> repeatingLetters "aba-bcd-def"
-- "ad"
repeatingLetters :: String -> [Char]
repeatingLetters s = do
    i <- [0 .. length s - 1]
    (c:_:c':_) <- return (drop i s)
    guard (c == c')
    return c

The rest of the puzzle is easy: to check whether one of the patterns is present in the input string, list all the matching patterns and check if the list is non-empty. Thanks to laziness, the backtracking algorithm lists all the possibilities when we ask for them, but when we only ask if the list is non-empty, it behaves more efficiently by stopping after the first successful pattern is found.
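The same early-exit behaviour is easy to observe with an infinite list: null only needs to know whether a first element exists, so the search stops as soon as one match is produced.

```haskell
-- terminates immediately despite the infinite candidate list
hasMatch :: Bool
hasMatch = not (null (filter (> 5) [1..]))
```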

-- |
-- >>> hasRepeatingPair "xyxy"
-- True
-- >>> hasRepeatingPair "aaa"
-- False
hasRepeatingPair :: String -> Bool
hasRepeatingPair = not . null . repeatingPairs

-- |
-- >>> hasRepeatingLetter "xyx"
-- True
-- >>> hasRepeatingLetter "xyz"
-- False
hasRepeatingLetter :: String -> Bool
hasRepeatingLetter = not . null . repeatingLetters

-- |
-- >>> day5_5 "qjhvhtzxzqqjkmpb"
-- True
-- >>> day5_5 "xxyxx"
-- True
-- >>> day5_5 "uurcxstgmygtbstg"
-- False
-- >>> day5_5 "ieodomkazucvgmuy"
-- False
day5_5 s = hasRepeatingPair s
        && hasRepeatingLetter s

A whirlwind tour of Haskell, day 4

Day 4: process and conduit

For the next puzzle, we need to compute a sequence of md5 hashes and to return the first one which satisfies a given property. Haskell does have a few libraries implementing md5 hashing, but using such a function wouldn't teach us anything new about Haskell. Instead, let's delegate the task to an external program.

$ echo -n "abcdef609043" | md5sum
000001dbbfa3a5c83a2d506429c7b00e

The md5sum program, which has nothing to do with Haskell, computes the md5 hash of its input. Let's run it from Haskell, using a command from the process library.

import System.Process

-- |
-- >>> md5 "abcdef609043"
-- ("abcdef609043","000001dbbfa3a5c83a2d506429c7b00e")
md5 :: String -> IO (String, String)
md5 input = do
    -- md5sum prints "<hash>  -" when reading from stdin; words keeps just the hash
    (output:_) <- words <$> readProcess "md5sum" [] input
    return (input, output)

We will need to use this to compute the md5 hash of "abcdef1", "abcdef2", "abcdef3", and so on until we find an input string whose hash starts with "00000". Let's define the infinite list of all such input strings:

import Text.Printf

-- |
-- >>> take 3 ints
-- [1,2,3]
ints :: [Int]
ints = [1..]

-- |
-- >>> take 3 (inputs "abcdef")
-- ["abcdef1","abcdef2","abcdef3"]
inputs :: String -> [String]
inputs prefix = map (printf "%s%d" prefix) ints

I'd like to map our md5 function over this infinite list in order to obtain an infinite list of hashes, but Haskell thinks it's a mistake to do so:

-- Expected type: String -> (String,String)
--   Actual type: String -> IO (String,String)
-- In the first argument of ‘map’, namely ‘md5’
outputs :: [(String,String)]
outputs = map md5 inputs

The reason this is a mistake is because md5 performs I/O operations, as indicated by its output type IO (String,String). Exploiting Haskell's laziness to skip over some pure mathematical transformations is fine, but skipping over IO operations is a completely different thing. IO operations are usually executed for their side-effects, and so we probably don't want Haskell to optimize them away.

The fact that laziness doesn't work on IO computations is thus a good thing, but then how are we going to describe the rest of the algorithm? We could use an imperative approach, like we did in Day 1.5, by looping over the possible inputs and breaking out of the loop upon some condition. Or, we could learn something new by using a streaming library, in which we compose side-effecting computations along a stream in much the same way we've been composing mathematical functions with (>>>). Each piece decides when to perform side-effects, when to consume values from upstream, when to send some values downstream, and when to abort the entire streaming computation.

I've already explored the pipes streaming library in the past, so this time I will use the conduit library instead.

import Data.Maybe
import Data.Conduit
import qualified Data.Conduit.List as Conduit

-- |
-- >>> runConduit (day4 "abcdef")
-- ("abcdef3337","000a63ec2eecacd28b2a6592906fea34")
day4 :: String -> ConduitM input output IO (String, String)
day4 prefix = Conduit.sourceList (inputs prefix)
          =$= Conduit.mapM md5
          =$= Conduit.filter (snd >>> take 3 >>> (== "000"))
          =$= (fromJust <$> Conduit.head)

At the top of my stream, I use a source which will trickle each of the candidate input strings down the stream. Downstream, I execute our md5 computation on each input string as it flows down. Downstream from that, I filter the stream of results to only keep the ones which satisfy the criterion. The actual criterion is that the hash should begin with 5 zeroes, but with the overhead of spawning an external program for hundreds of thousands of candidate inputs, it takes almost 15 minutes on my computer to find such a hash. If you only want to check that the code runs and behaves as desired, save yourself some time and stop after 3 zeroes, as in the code above. Despite the less restrictive predicate, very few values will pass this filter.

Once a value eventually makes it through, Conduit.head aborts the computation using that value as the result. I added the fromJust bit because Conduit.head can either return Just x if x is the first value which makes it through the filter, or it can return Nothing if some other part of the stream aborts the computation before a value can reach Conduit.head. Since Conduit.head is the only piece in my stream which can abort the computation, I know that this will not happen, so I tell the compiler to assume that the output will have the form Just x for some x, and that the final result should be x.

Day 4.5 is omitted, because it is too similar to the version we just solved.


A whirlwind tour of Haskell, day 3

Day 3: linear and containers

In this puzzle, we need to move a cursor in 2D space and count the number of distinct coordinates we visited.

import qualified Data.Set as Set
import Linear.V2

direction :: Char -> V2 Int
direction '<' = V2 (-1) 0
direction '>' = V2 1 0
direction '^' = V2 0 1
direction 'v' = V2 0 (-1)
direction _   = V2 0 0

-- |
-- >>> day3 ">"
-- 2
-- >>> day3 "^>v<"
-- 4
-- >>> day3 "^v^v^v^v^v"
-- 2
day3 :: String -> Int
day3 = map direction
   >>> scanl (+) 0
   >>> Set.fromList
   >>> length

I use V2 from the linear library to represent the 2D coordinates, because its vector space addition allows me to add both coordinates at once. To get the list of visited coordinates, I use scanl (+) 0 again: it's still a running sum, but this time we're adding directional unit vectors to a coordinate which starts at "zero", the origin of the 2D plane. This works because (+) and 0 are obtained via methods of the Num type class, and Haskell infers that I want to use the Num instance for V2 Int.

Interestingly, this is a parameterized instance, which means that the Num instance for V2 Int delegates part of its implementation to the Num instance for Int, and similarly for V2 Float, etc. This mechanism allows type class instances to be automatically constructed for complex composed types which have never been seen before, without requiring the programmer to write a type class instance for this particular composition of types. For example, our MyEDSL from earlier was composed of four layers of monad transformers, each of which defined their own Monad instance, but we did not have to define a Monad instance for MyEDSL in order to be able to use forM_ and when.
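
If you've never written a parameterized instance yourself, here's what a hand-rolled one could look like. Pair is an illustrative stand-in for linear's V2, not its actual definition:

```haskell
-- A 2D vector whose Num instance delegates each operation to the Num
-- instance of its element type a.
data Pair a = Pair a a deriving (Eq, Show)

instance Num a => Num (Pair a) where
    Pair x y + Pair x' y' = Pair (x + x') (y + y')  -- delegate (+) to a
    Pair x y * Pair x' y' = Pair (x * x') (y * y')
    abs    (Pair x y) = Pair (abs x) (abs y)
    signum (Pair x y) = Pair (signum x) (signum y)
    negate (Pair x y) = Pair (negate x) (negate y)
    fromInteger n     = Pair (fromInteger n) (fromInteger n)
```

With this instance, scanl (+) 0 works on a list of Pair Int values: the literal 0 becomes Pair 0 0 via fromInteger, and (+) adds both coordinates at once.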

Next, to obtain distinct coordinates from the list of all coordinates, I convert the list into a set, defined by the containers library. There is also a function called nub in the standard library which eliminates duplicates from a list, but for technical reasons nub executes in quadratic time while Set.fromList runs in O(n*log n). The containers library also has an even more efficient IntSet which can only contain Ints, but I rarely use it because Set Int is more versatile (I can easily switch to a Set Float if I need to) and it's fast enough for me.
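
The two techniques compute the same count, they just have different costs and different constraints: Set.fromList needs Ord (it runs in O(n log n)), while nub only needs Eq (which is why it is stuck comparing every pair of elements).

```haskell
import qualified Data.Set as Set
import Data.List (nub)

-- O(n log n), requires an ordering on the elements.
distinctViaSet :: Ord a => [a] -> Int
distinctViaSet = length . Set.fromList

-- O(n^2), but works with mere equality.
distinctViaNub :: Eq a => [a] -> Int
distinctViaNub = length . nub
```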

Finally, I compute the number of elements in the set using length. It's the same length we used earlier to compute the size of a list because, you guessed it, length is part of yet another type class. It's a type class called Foldable, and its methods include or can be used to derive length, sum, forM_, and other functions which require iterating over the elements of a container without modifying them.
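
A couple of small examples of Foldable in action: the same length and sum work on sets, on Maybe, or on any other container with a Foldable instance.

```haskell
import qualified Data.Set as Set

-- length counts the elements of a Set, not just of a list:
setSize :: Int
setSize = length (Set.fromList "hello")   -- 4 distinct characters

-- Maybe acts as a zero-or-one element container:
maybeSum :: Int
maybeSum = sum (Just 5)
```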

Day 3.5: infinite lists, lens, and constraints

For the second part of the puzzle, we now need to move two cursors, and the ASCII instructions alternate between giving a direction for the first cursor and giving a direction for the second. We need to count the number of distinct coordinates which at least one of the cursors has visited.

{-# LANGUAGE RankNTypes #-}

import Control.Lens
import Data.Set.Lens
import Data.List

day3_5 = map direction
     >>> zip [0..]
     >>> partition (fst >>> even)
     >>> over (both.mapped) snd
     >>> over both (scanl (+) 0)
     >>> setOf (both.folded)
     >>> length

The sequence of functions is getting a bit long, so let's split it into smaller helper functions.

-- |
-- >>> fromAlternating [1..6]
-- ([1,3,5],[2,4,6])
-- >>> fromAlternating "a1b2c3"
-- ("abc","123")
fromAlternating :: [a] -> ([a], [a])
fromAlternating = zip [0..]
              >>> partition (fst >>> even)
              >>> over (both.mapped) snd

This first helper splits the list of instructions into two sublists, one for each cursor. To do that, we first number the elements of the list from 0 to n, then we partition the list into two sublists according to whether the element number is even or odd, and finally, we drop the element numbers and keep only the elements.

zip [0..] is a common Haskell idiom for numbering the elements of a list. The way it works is by creating the infinite list [0,1,2,...] and using zip to pair each element of this list with each element of the original list [x,y,z], resulting in the list of pairs [(0,x), (1,y), (2,z)]. The resulting list has only three elements, not an infinite number, because zip stops after the shortest list is exhausted. Infinite structures are very common in Haskell, because they allow us to focus on the contents and not on the boundary conditions: in this case, we focus on the fact that the list contains an increasing list of integers, and we don't bother specifying that we need as many integers as there are elements in our original list. Haskell's laziness makes sure that it will not get stuck trying to allocate an infinite list, and will instead only allocate as much as is necessary to compute the resulting finite list of pairs.
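
Concretely, with a three-element list:

```haskell
-- zip pairs the infinite [0..] with the finite "xyz" and stops after
-- the shorter list is exhausted.
numbered :: [(Int, Char)]
numbered = zip [0 ..] "xyz"   -- [(0,'x'),(1,'y'),(2,'z')]
```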

partition is a function from the standard library which partitions a list into two sublists according to a predicate: one sublist contains the elements which satisfy the predicate, and the other sublist contains the elements which don't. In this case, our predicate is that the first element of the pair, that is, the element number, should be even. This causes the even-numbered pairs to end up in the first list, and the odd-numbered pairs to end up in the second.
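
For instance, partitioning the numbers 1 through 10 by evenness:

```haskell
import Data.List (partition)

-- The elements satisfying the predicate come first, the rest second.
evensAndOdds :: ([Int], [Int])
evensAndOdds = partition even [1 .. 10]   -- ([2,4,6,8,10],[1,3,5,7,9])
```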

At this point we have two lists of pairs, with each pair containing a number and an element. We'd rather have lists of elements, because the numbers are just an intermediate annotation we used for the partitioning. So we'd like to apply snd, the function which only keeps the second part of a pair, to all the elements of both lists. I could have written (\(xs,ys) -> (map snd xs, map snd ys)), it would have done the job, but I'd rather take the opportunity to demonstrate the lens library.

The lens library deals with nested structures, in this case a pair of lists of elements. I think the name comes from the fact that a real-world lens allows you to focus light on a smaller area, and that you can compose multiple lenses to focus the light on an even smaller area. Similarly, a Lens from the lens library allows you to manipulate a structure by focusing on manipulating a smaller part of that nested structure, and Lenses can be composed to allow you to focus on a deeply-nested element hidden deep inside a nested structure.

The lens library also offers a variety of other "optics", which can focus on something other than a single element. In this case, I am constructing a Setter, which allows multiple elements in a nested structure to be modified at the same time. I am constructing it by composing both, a setter which focuses on both halves of a pair simultaneously, with mapped, a setter which focuses on all the elements of a list. The net result is that I am focusing on all the elements of both of my lists, that is, I am focusing on all the pairs on which I want to apply snd.
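
If the optics feel opaque, it may help to see what over (both.mapped) snd computes, written without lens. The function overBothMapped below is a hand-rolled stand-in for that particular composition, not part of the lens API:

```haskell
-- Apply a function to every element of both lists in a pair; this is
-- what over (both.mapped) does for this particular shape of structure.
overBothMapped :: (a -> b) -> ([a], [a]) -> ([b], [b])
overBothMapped f (xs, ys) = (map f xs, map f ys)
```

The value of the lens version is that both and mapped are reusable pieces: swapping mapped for a different setter refocuses the same over on a different part of the structure.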

-- |
-- >>> runningSum [1,2,3]
-- [0,1,3,6]
-- >>> runningSum (map direction "^>>v")
-- [V2 0 0,V2 0 1,V2 1 1,V2 2 1,V2 2 0]
runningSum :: Num a => [a] -> [a]
runningSum = scanl (+) 0

This second helper is our good friend scanl (+) 0, which we have used to produce running sums of integers and running sums of 2D coordinates. The Num a constraint in the type signature indicates that while this helper function can indeed work on lists of elements of various types, it will only work if that element type has a Num instance.

-- |
-- >>> distinctCountOf folded [1,2,3,2,2,3]
-- 3
-- >>> distinctCountOf both (1,2)
-- 2
-- >>> distinctCountOf both (1,1)
-- 1
distinctCountOf :: Ord a => Fold s a -> s -> Int
distinctCountOf elems = setOf elems
                    >>> length

Our final helper is a generalization of the Set.fromList >>> length technique we've been using so far to count the distinct elements of a list. This time, however, the elements we want to count are distributed among two separate lists, so a simple Set.fromList is not going to work. Instead, I'm using setOf, a function from lens which takes a Fold and constructs a Set out of the elements on which the Fold focuses. A Fold is yet another kind of optic, one which also focuses on multiple elements simultaneously, but instead of modifying all of them at once like a Setter, it examines them using methods from Foldable.
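
For the special case of a pair of lists, the lens-free equivalent is to pour both lists into a single Set and count its elements. The function distinctInPair below is a hand-rolled illustration of what distinctCountOf (both.folded) computes on that shape:

```haskell
import qualified Data.Set as Set

-- Count the distinct elements spread across both lists of a pair.
distinctInPair :: Ord a => ([a], [a]) -> Int
distinctInPair (xs, ys) =
    Set.size (Set.fromList xs `Set.union` Set.fromList ys)
```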

-- |
-- >>> day3_5 "^v"
-- 3
-- >>> day3_5 "^>v<"
-- 3
-- >>> day3_5 "^v^v^v^v^v"
-- 11
day3_5 :: String -> Int
day3_5 = map direction
     >>> fromAlternating
     >>> over both runningSum
     >>> distinctCountOf (both.folded)

Finally, we compose our helpers into one sophisticated function which solves the puzzle: we parse the directions into unit vectors, we split the alternating directions into one list for each of the two cursors, we list the visited coordinates for each, and we count how many distinct coordinates have been visited overall.


A whirlwind tour of Haskell, day 2

Day 2: extra, type inference, and type classes

The next puzzle involves computing the area of the faces of a box. The equation to compute the area is provided, so the only slight complication is that the area of the smallest face needs to be added to the final answer.

import Data.List.Extra

-- |
-- >>> day2 "2x3x4"
-- 58
-- >>> day2 "1x1x10"
-- 43
day2 :: String -> Int
day2 = wordsBy (== 'x')
   >>> map read
   >>> (\[l,w,h] -> [l*w, w*h, h*l])
   >>> sort
   >>> (\[x,y,z] -> 3*x + 2*y + 2*z)

The boxes are described by strings of the form "LxWxH", so I begin by splitting the string into three pieces using 'x' as the delimiter. To do this, I use wordsBy, a version of words which splits a string using a custom predicate to specify the delimiter characters instead of using whitespace like words does. The function words is in the standard library, but for some reason wordsBy isn't; it's provided by the extra library.
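
If you'd rather not pull in the dependency, a hand-rolled version is short. I'm naming it wordsBy' to make clear it's my own sketch, mirroring how the Prelude defines words:

```haskell
-- Split a list on elements satisfying the predicate, dropping the
-- delimiters and any empty chunks, just like words does for whitespace.
wordsBy' :: (a -> Bool) -> [a] -> [[a]]
wordsBy' p s = case dropWhile p s of
    [] -> []
    s' -> w : wordsBy' p rest
      where (w, rest) = break p s'
```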

Next, I convert the resulting list of strings into a list of numbers by parsing each one using read. Notice that read has a rather generic name: why not parseInt like in other languages? Well, it turns out that read is not limited to parsing numbers. It does not look at the string it is parsing to figure out the type of the value it is parsing either. Instead, Haskell uses type inference to figure out that since we're about to multiply the parsed values together, we must want to parse the strings into numbers, and so it uses the version of read which behaves like parseInt. There is one such version of read for each parsable type, and Haskell uses the type of the expression in which read is used to figure out which implementation to use. This mechanism is called type classes: there is a type class called Read which defines read and a few other methods related to parsing, and each parsable type defines a Read instance implementing read and the other methods for this particular type.
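
Here is the same read selecting three different Read instances, driven purely by the expected type:

```haskell
-- One polymorphic read; the annotations pick the instance.
anInt :: Int
anInt = read "42"

aList :: [Int]
aList = read "[1,2,3]"

aPair :: (Int, Char)
aPair = read "(7,'x')"
```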

As an aside, the Monad interface I have briefly mentioned earlier is also a type class, with one instance per EDSL. In the case of Monad, instances need to define two methods: one which constructs a computation that returns immediately without causing any side-effect (the side-effects we've seen so far include things like incrementing a counter and aborting the computation), and one which sequences two computations in such a way that the second computation can make use of the result computed by the first computation.
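
For a concrete instance, Maybe's Monad has exactly those two methods: return wraps a pure value, and (>>=) sequences two computations, where the side-effect is "may abort with Nothing".

```haskell
-- A computation in the Maybe monad which can fail.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- return injects 12 without side-effects; each (>>=) feeds the previous
-- result to the next step, aborting the whole chain on Nothing.
quarter :: Int -> Maybe Int
quarter n = return n >>= halve >>= halve
```

quarter 12 succeeds with Just 3, while quarter 6 aborts at the second halve because 3 is odd.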

Anyway, once I have a list of numbers, the rest of the algorithm is straightforward: I compute the areas of the sides, I sort them so that I know which one has the smallest area, and I compute the sum of the areas, adding the smallest area 3 times instead of 2. I don't need to explain lambdas, do I? They're originally from functional languages, but all the mainstream languages have them too nowadays, same thing for higher-order functions like map and filter. I'm glad they did, but they shouldn't stop there: there is still more good stuff to copy :)

The full puzzle requires computing the sum of the answers for all the boxes, which is trivial and thus omitted. Day 2.5 is omitted as well, because my solution is almost the same as for Day 2.

Navigation
  • About this series
  • Previous day: monads, monad transformers, pattern-matching,
    composition, laziness, and higher-order functions
  • Next day: 2D vectors, sets, infinite lists, lenses, and constraints