Thursday, June 18, 2015

Will it memoize?

In this post, I will try several variants of a simple memoizing Haskell program and test whether the program actually memoizes or not. This will give us a better grasp on sharing.
Fibonacci
If you've been studying Haskell for any length of time, you have probably already stumbled upon the following unusual definition of the Fibonacci sequence.
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
If I print the fourth element of this list, how many additions will be evaluated? What if I then print the fifth element? Would you believe me if I told you that the answer depends on whether the NoMonomorphismRestriction extension is enabled or not? The answer also depends on various other details, which we will explore in this post.
NoisyAdd
In order to observe the additions as they are being evaluated, let's create a variant which prints a message to the console when it is called.
import Debug.Trace

-- like (+), but prints "adding" to the console each time it is evaluated
noisyAdd x y = trace "adding" (x + y)

fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
fib n = fibs !! n
So, how many additions when I print the fourth element?
> fib 4
adding
adding
adding
5
Three, because that's the number of entries in [2..4]: at each of those indices, a single addition combines the two previous numbers into the next one.
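To see where those three additions come from, here is the list as it unfolds, with indices written underneath (a hand-evaluation sketch, not ghci output):

fibs  = 1 : 1 : 2 : 3 : 5 : ...
index:  0   1   2   3   4

The entries at indices 0 and 1 are literals; each entry at indices 2 through 4 costs exactly one call to noisyAdd.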

How many additions when I then print the fifth?
> fib 5
adding
8
Only one! There are four entries in [2..5], but entries 2 through 4 have already been evaluated during our previous query. Implementing fib via a list of all possible results has effectively memoized our function.
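The same list-based trick works for any function whose domain is the non-negative Ints. Here is a minimal sketch (memoize is my own helper, not a library function):

memoize :: (Int -> a) -> (Int -> a)
memoize f = (xs !!)
  where
    -- allocated once, when memoize is applied to f; every call to the
    -- resulting function indexes into this same list
    xs = map f [0..]

Note that this only helps if the result of memoize f is itself bound once and reused; writing memoize f 4 and later memoize f 5 would build two separate lists.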

Inside a where clause
The fact that fib is using a list for memoization purposes is an implementation detail. Let's hide the list inside the function's where clause.
fib n = fibs !! n
  where
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
Will it memoize?
> fib 4
adding
adding
adding
5
> fib 5
adding
adding
adding
adding
8
It did not! The definitions in the where clause are local to the body of the function, and that body only exists once the argument n has been supplied. For this reason, each time fib is called, a new copy of fibs is allocated and the previously memoized values are forgotten.
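One way to make this concrete is to desugar by hand (a sketch, not actual GHC output). The where clause of fib n = fibs !! n attaches to a binding whose right-hand side is a function of n, so fibs ends up inside the lambda and is rebuilt on every call:

fib = \n -> let fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
            in fibs !! n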

Consider the following variant:
fib = \n -> fibs !! n
  where
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
Will it memoize?
> fib 4
adding
adding
adding
5
> fib 5
adding
8
It did! This time, the where block does not have access to the n argument, so it does not need to be instantiated on every call. As a result, all the calls to fib are backed by the same list, and memoization succeeds.
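Mirroring the earlier hand-desugaring: the where clause still attaches to the binding of fib as a whole, but this time the lambda binding n appears inside the right-hand side, so fibs sits outside of it:

fib = let fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
      in \n -> fibs !! n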

NoMonomorphismRestriction
Let's enable the NoMonomorphismRestriction extension.
{-# LANGUAGE NoMonomorphismRestriction #-}

fib = \n -> fibs !! n
  where
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
Adding GHC extensions allows extra non-standard features, but our existing standards-conforming code should work the same as before, shouldn't it? In particular, it should still memoize, right?
> fib 4
adding
adding
adding
5
> fib 5
adding
adding
adding
adding
8
It did not! Enabling NoMonomorphismRestriction changed the type which GHC inferred for our definitions. With the extension turned on, GHC infers polymorphic types:
{-# LANGUAGE ScopedTypeVariables #-}

fib :: forall a. Num a => Int -> a
fib = \n -> fibs !! n
  where
    fibs :: [a]
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
The Num a constraint is implemented via an implicit dictionary argument, which leads to the same non-memoizing behavior we had earlier when the n argument was in scope inside the where block. Here, the Num a dictionary is in scope inside the where block as well; otherwise fibs would not be able to add its elements together.
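To make that dictionary argument visible, here is a hand-rolled analogue of GHC's dictionary translation, using an ordinary record (a sketch; GHC's actual Core looks different):

data NumDict a = NumDict
  { dFromInteger :: Integer -> a
  , dAdd         :: a -> a -> a
  }

fibD :: NumDict a -> Int -> a
fibD d = \n -> fibs !! n
  where
    -- fibs mentions d, so it lives under the lambda binding d and is
    -- rebuilt every time fibD is applied to a dictionary
    fibs = one : one : zipWith (dAdd d) fibs (tail fibs)
    one  = dFromInteger d 1

Just like the variant in which n was in scope, every application fibD someDict allocates a fresh fibs, so nothing is remembered across calls. Passing NumDict fromInteger noisyAdd makes the repeated additions observable, just like before.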

Without NoMonomorphismRestriction, GHC infers monomorphic types:
fib :: Int -> Integer
fib = \n -> fibs !! n
  where
    fibs :: [Integer]
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
This time the global Num Integer instance is used, not an abstract Num a which could change from call to call. This allows the same fibs to be used on each call, which in turn allows the memoized values to be retained from call to call.
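Accordingly, the extension only changes which type gets inferred; if we pin down a monomorphic type ourselves, the inference step no longer matters. A quick sketch, which the reasoning above predicts will memoize even with the extension enabled:

{-# LANGUAGE NoMonomorphismRestriction #-}

fib :: Int -> Integer
fib = \n -> fibs !! n
  where
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)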

Universal quantifiers

In the type signature fib :: forall a. Num a => Int -> a, there are three arguments of different kinds: the a is an implicit type argument, the Num a is an implicit dictionary argument, and the Int is an ordinary argument. We have seen that dictionary arguments and ordinary arguments both introduce a new scope and thereby hinder memoization. Is it also the case for type arguments?

To figure out the answer, let's construct a variant of fib which always adds integers and therefore doesn't need a dictionary argument, and also carries around a spurious polymorphic value on which no additions are performed. This way, we'll have a type argument but no dictionary argument.

fib :: forall a. Int -> a -> (Integer, a)
fib = pairWithFib
  where
    pairWithFib :: Int -> a -> (Integer, a)
    pairWithFib n x = (fibs !! n, x)
    
    fibs :: [Integer]
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)
Will it memoize?
> fib 4 "foo"
adding
adding
adding
(5,"foo")
> fib 5 [42.0]
adding
(8,[42.0])

It did! Types get erased; they do not exist at runtime. In particular, instantiating a to String and later to [Double] does not involve passing around some runtime representation of String or [Double], and it does not create a new copy of the where block.

It's a bit surprising, because this is a situation in which the evaluation semantics do not match up with the typing semantics. According to the type system, the pairWithFib in which a has been instantiated to String is not the same pairWithFib as the one in which a has been instantiated to [Double], as those two pairWithFib instances have different types and thus cannot be interchanged. Yet at runtime, those two instances of pairWithFib are represented by the same identical value. The trick is that in the absence of any type class constraint, pairWithFib has no way to distinguish between the different types it is supposed to be instantiated at, so it is allowed to behave the same in all cases. In particular, it is allowed to be the same identical value in all cases.
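Conversely, the same reasoning predicts that reintroducing any class constraint, even one whose methods are never called, should break memoization again, since the implicit dictionary argument brings back a per-call scope. A sketch (I'm relying on the dictionary-translation argument above, and on compiling without optimizations):

fib :: forall a. Show a => Int -> a -> (Integer, a)
fib = pairWithFib
  where
    pairWithFib :: Int -> a -> (Integer, a)
    pairWithFib n x = (fibs !! n, x)

    fibs :: [Integer]
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)

Here fib must receive a Show dictionary before its where block comes into existence, so each call rebuilds fibs.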

Sharing

In the previous paragraph, I was careful to use the phrase "same identical value" as opposed to simply "the same value". By "identical", I mean that the two values have the same identity, which in turn means that they occupy the same space in memory.

In object-oriented languages, the concept of identity is very important. Suppose you hold references to two objects which happen to be equal to each other, in the sense that their corresponding fields are pairwise equal. Even though the objects are ostensibly equal, it matters which of the two references you pass to a third-party method, because that method might mutate one of the objects' fields. If it does, every other reference to that same identity, throughout your program, will also see the mutated field, while references to the other identity will not; had you passed the other reference instead, it would be the other set of references that sees the mutation. It's just ordinary pointer indirection, but it's something you must always keep in mind in those languages in order to avoid a family of problems known as aliasing issues.

In Haskell's immutable fragment, there are no aliasing issues, in the sense that calling a pure function on equal values will always yield equal results, regardless of the identity of those values. Nevertheless, identity still matters: it does not impact correctness, but it does impact performance. Under some circumstances, excessive sharing can lead to poor performance, whereas in the case of memoization, sharing improves performance by minimizing duplicated work.
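Here is a classic sketch of sharing hurting performance (a standard space-leak example, not specific to fibs): because xs is shared between the two traversals, the entire ten-million-element list must remain in memory until length gets its turn, whereas two independent copies, or a single fused pass, would run in constant space.

average :: Double
average = sum xs / fromIntegral (length xs)
  where
    -- shared by sum and length, hence fully retained between the two passes
    xs = [1 .. 10000000] :: [Double]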

The general rule explaining our results in all the previous sections is that memoization only happens if the data structure which holds the memoized values is shared between calls. To demonstrate this, our last example plays with sharing by specifying which copy of the fibs data structure is used to memoize which copy of the fib function.

mkFibs :: forall a. Num a => [a]
mkFibs = fibs
  where
    fibs :: [a]
    fibs = 1 : 1 : zipWith noisyAdd fibs (tail fibs)

mkFib :: forall a. Num a => [a] -> Int -> a
mkFib = (!!)
Let's begin by constructing two copies of the fibs data structure:
fibsA :: [Integer]
fibsA = mkFibs

fibsB :: [Integer]
fibsB = mkFibs
And then two fib functions for each of the two copies of fibs:
fibA1 :: Int -> Integer
fibA1 = mkFib fibsA

fibA2 :: Int -> Integer
fibA2 = mkFib fibsA

fibB1 :: Int -> Integer
fibB1 = mkFib fibsB

fibB2 :: Int -> Integer
fibB2 = mkFib fibsB
"Will it memoize" is now the wrong question to ask. Each function memoizes its own results, but which functions share their memoized results with each other?
> fibA1 4
adding
adding
adding
5
> fibA2 5
adding
8
> fibB1 4
adding
adding
adding
5
> fibB2 5
adding
8

As expected, functions which were created from the same copy of fibs share their results with each other.

I hope that this last example makes it clear that sharing is something you can control. You can write most of your Haskell code without worrying about sharing, and then in the few places in which sharing affects the performance, you can explicitly thread your shared data structures to the places where you need them.
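For instance, reusing mkFib and fibsA from above (a small sketch), a caller who wants two lookups to share one memo table simply passes the same list to both:

sumOfFibs :: Int -> Int -> Integer
sumOfFibs i j = mkFib fibsA i + mkFib fibsA j

Both lookups index into the same fibsA, so work done for one is never repeated for the other, nor for earlier calls to fibA1 or fibA2.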

2015-06-20 update: as /u/pycube correctly points out, the above technique only works when you want to make sure that something is shared. Preventing sharing is much harder, because optimizations tend to increase sharing. With -O2, for example, all the examples in this post memoize.

Sunday, June 14, 2015

Querying type specializations

Obtaining the most general type of a Haskell expression is easy. But how do you figure out the more specialized types which your expression also has?

Diagrams

Let's draw a simple balloon using the diagrams library.

import Diagrams.Prelude
import Diagrams.Backend.Cairo.CmdLine

diagram :: Diagram B
diagram = circle 1
            ===
          vrule 1

I want to make the length of the string configurable.

{-# LANGUAGE FlexibleContexts, TypeFamilies #-}

balloon n = circle 1
        === vrule n

diagram :: Diagram B
diagram = balloon 2

That was easy, but I don't feel comfortable leaving balloon without a type signature. I'm a Haskell programmer, I need types in order to reason about my programs!

The most general type

With the proper flags (-fwarn-missing-signatures, which -Wall includes), GHC warns that my top-level binding doesn't have a type signature and shows me the type it has inferred for it. I can also easily obtain this information via ghci:

> :type balloon
balloon
  :: (RealFloat (N a), Semigroup a, Transformable a, Juxtaposable a,
      TrailLike a, V a ~ V2)
  => N a -> a

Oh my, that's a large context! At least I can see why GHC wanted me to add FlexibleContexts (there is a constraint on N a, which isn't a type variable) and TypeFamilies (V a can evaluate to V2, which means V is probably a type-level function, also known as a type family).

I'd like to use a simpler type if possible. Maybe I can instantiate a to a concrete type?

Type defaulting

Since GHC 7.8, one easy way to figure out the concrete type at which an expression is instantiated is to pass it as an argument to a type hole.

-- Found hole ‘_typeof’ with type: Integer -> Double
diagram :: Diagram B
diagram = balloon (_typeof 2)

The most general type of 2 is Num a => a, which can get specialized to Integer or Double or any of a large number of types having a Num instance. The _typeof type hole does not further constrain its argument's type, but the type defaulting rules cause a to be specialized to Integer. Then, balloon's complicated type does constrain its argument's type, and when combined with the requirement to produce a Diagram B, balloon's argument must apparently be a Double. At least, that's what the type hole's inferred type Integer -> Double seems to be telling us.

Or could it be that balloon accepts a more general type than Double, but the defaulting rules simplify this type to Double before we have the chance to observe this more general type? For example, consider the following silly definition:

ignoreFractional :: Fractional a => a -> ()
ignoreFractional x = ()

If we use a type hole to query the type of ignoreFractional's argument, we get Double, even though the type of ignoreFractional is more general than that.

-- Found hole ‘_typeof’ with type: Integer -> Double
main :: IO ()
main = print $ ignoreFractional (_typeof 42)

Let's disable type defaulting, so that it stops skewing our results.

default ()

Note that the syntax for default is default (type1, type2, ...), not default type1, type2, ..., so the above is disabling type defaulting, not configuring the unit type to be the new default instead of Integer.
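For reference, when a module contains no default declaration at all, the Haskell Report specifies that it behaves as if it contained this one:

default (Integer, Double)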

Without type defaulting, our type hole is no longer concluding that ignoreFractional's argument must be a Double. It also complains that some type variables are ambiguous, which is fine for now.

-- Found hole ‘_typeof’ with type: a1 -> a0
-- Where: ‘a0’ is an ambiguous type variable
--        ‘a1’ is an ambiguous type variable
main :: IO ()
main = print $ ignoreFractional (_typeof 42)

More importantly, our type hole still says that balloon's argument must be a Double, so type defaulting did not skew our earlier results after all.

-- Found hole ‘_typeof’ with type: a0 -> Double
-- Where: ‘a0’ is an ambiguous type variable
diagram :: Diagram B
diagram = balloon (_typeof 2)
Evaluating type-level functions

Looking at the most general type of balloon more closely, I think I can figure out how Double was chosen. Ignoring the constraints, the type of balloon is N a -> a. We're imposing a ~ Diagram B, and we've already seen that diagrams likes to use type-level functions. Perhaps N is a type-level function which returns Double when given the input Diagram B?

> :kind! N (Diagram B)
N (Diagram B) :: *
= Double

Indeed it is. Note the exclamation mark in the :kind! command: that's how I am telling ghci that I want to see the result of the type-level function application, not just the kind of this result.

> :help
[...]
:kind[!] <type>             show the kind of <type>
                            (!: also print the normalised type)
The fully-specialized type

We now know that instantiating a ~ Diagram B specializes N a -> a to Double -> Diagram B, and that this choice for a satisfies all the constraints. I can finally give a short type for balloon:

balloon :: Double -> Diagram B
balloon n = circle 1
        === vrule n

I could have obtained this answer more easily by using the type hole to query the type of balloon directly.

-- Found hole ‘_typeof’
--   with type: (Double -> Diagram B) -> a0 -> Diagram B
-- Where: ‘a0’ is an ambiguous type variable
diagram :: Diagram B
diagram = (_typeof balloon) 2
Intermediate types

So far, we have seen two extremes: the most general, most abstract type of an expression, and its fully-specialized, most concrete type. There are other types in between those two extremes, obtained by fixing some parts of the type signature but not others.

For example, I now know what the type of balloon specializes to when I force the return type to be the fully-specialized type Diagram B. In diagrams, the B identifies the backend I have imported at the top of the file. Which part of the specialization is due to the fact that the result is forced to be a Diagram, and which part is due to the fact that I am using the Cairo backend?

An answer to this question would be a type signature of the following form:

balloon :: _constraint => _input -> Diagram _backend

In which _constraint, _input and _backend would be instantiated so that the resulting type signature would be the most general type which is both a valid type signature for balloon's definition and matches the above form. For example, we know from our previous experiments that _constraint ~ (), _input ~ Double and _backend ~ B would be valid instantiations. Is there a solution which involves more type variables and fewer concrete datatypes?

Proxies and identities

Let's ignore the constraints for now. The problem becomes much simpler: we have the most general type N a -> a, we have a shape input -> Diagram backend, and we want to unify those two types. Type unification is GHC's bread and butter; we just need to construct a program whose types will require GHC to unify those two types as part of its ordinary type checking phase.

For example, consider the following restricted identity function:

myId :: (input -> Diagram backend)
     -> (input -> Diagram backend)
myId = id

If we apply myId to balloon, GHC will have to find an instantiation for input and backend or otherwise complain that the call doesn't typecheck.

In order to ask GHC which instantiations it has chosen, we can add an extra parameter whose only purpose is to give us a place to put a type hole.

import Data.Proxy

typeof :: Proxy (input -> Diagram backend)
       -> (input -> Diagram backend)
       -> (input -> Diagram backend)
typeof Proxy = id

-- Found hole ‘_proxy’ with type:
--   Proxy (N backend -> Diagram backend)
query = typeof _proxy balloon

So there you go, a type signature of the form _input -> Diagram _backend involving type variables instead of concrete types:

balloon :: _constraint => N backend -> Diagram backend

We still don't know what those constraints should be, and I don't know how this technique could be generalized to obtain them.

Interestingly, the argument type N a got specialized to N backend, the same type-level function but applied to a different type variable. This makes me think that when N is applied to Diagram backend, the Diagram part is probably stripped off and the result is obtained via the recursive call N backend.

> :set -XRankNTypes
> :kind! forall backend. N (Diagram backend)
forall backend. N (Diagram backend) :: *
= N backend

Indeed it is.

Partial type signatures

With the new support for partial type signatures in GHC 7.10, there is now an easier way for us to discover what _constraint, _input and _backend should be instantiated at. I cannot give the name _constraint to the extra constraint wildcard, but as long as the other wildcards can be named, it's still easy to distinguish which is which.

{-# LANGUAGE NamedWildCards #-}

-- Found hole ‘_’ with inferred constraints:
--   (RealFloat (N w_backend),
--    TrailLike (QDiagram w_backend (V w_backend) (N w_backend) Any),
--    Metric (V w_backend),
--    V w_backend ~ V2)
-- Found hole ‘_input’ with type: N w_backend
-- Found hole ‘_backend’ with type: w_backend
balloon :: _ => _input -> Diagram _backend

I'm guessing that QDiagram is probably related to Diagram in some way. Is Diagram a type synonym for a more complicated QDiagram type?

> :info Diagram
type Diagram b = QDiagram b (V b) (N b) Any

Bingo.

Analyzing the contribution of the backend

Substituting what GHC found for each hole, alpha-renaming, and unexpanding the QDiagram expression back into its Diagram type synonym, I can now give balloon the following type signature.

balloon :: ( RealFloat (N b)
           , Metric (V b)
           , TrailLike (Diagram b)
           , V b ~ V2
           )
        => N b -> Diagram b

That's still quite a mouthful, but compare with the most general type signature we had inferred for balloon earlier:

balloon :: ( RealFloat (N a)
           , Juxtaposable a
           , Semigroup a
           , TrailLike a
           , Transformable a
           , V a ~ V2
           )
        => N a -> a

We can see that specializing a to Diagram b has had quite an impact on the constraint. The Juxtaposable constraint has been removed, presumably because there is an instance Juxtaposable (Diagram b) somewhere. The same goes for Semigroup and Transformable. The TrailLike constraint remains, presumably because the instances for that involve a concrete backend, something like instance TrailLike (Diagram B). The RealFloat and ~ V2 constraints remain as well, but apply to a different type variable, because N (Diagram b) and V (Diagram b) reduce to N b and V b. Finally, the Metric constraint was added, presumably because one of the instances has it as a context, something like instance Metric b => Juxtaposable (Diagram b).

We now have a better idea of which constraints are satisfied by the fact that a is a Diagram, and which parts of the constraint depend on the particular choice of backend. More importantly, we have learned about many techniques for querying the various types of an expression, which is very useful when you are relying on types as much as I am to understand an existing Haskell codebase.