Sunday, December 06, 2020
Capturing the magic of Prelude.interact
Ask any Haskeller: pure functions are the best functions, and we should prefer them to IO actions whenever possible. But I think we often give up too easily.
IO-bound thoughts
Suppose we are writing a simple client-server app allowing clients to chat with each other from their terminal. What would the overall structure of that program look like? I bet you’re imagining a lot of IO. Opening network sockets. Listening for client connections. One thread waiting for incoming messages while the other waits for the user to type their own. Atomically mutating the state using STM, so a third thread can watch for changes and redraw the TUI accordingly.
How about a web app, with CRUD endpoints for keeping track of which Haskeller is responsible for each day of the Advent of Haskell 2020 calendar, and which days are still available? Let me guess: handlers have to run in IO so they can talk to the database?
To give you an idea of how we can do better, let’s look at a simpler case in which we do know how to avoid IO.
interact
Suppose we’re writing a command-line tool which counts the number of words in stdin. Aha! Now we can use the “functional core, imperative shell” pattern in order to limit our IO to a thin outer shell. A little bit of unavoidable IO to read stdin, then delegate the bulk of the work to a pure function from String to Int, and finish with a bit more unavoidable IO to print the result.
countWords :: String -> Int
countWords = length . words
main :: IO ()
main = do
input <- getContents
let output = countWords input
print output
Or equivalently:
showLn :: Show a => a -> String
showLn a = shows a "\n"
main :: IO ()
main = interact (showLn . countWords)
Wait, I take that back. Those two programs might be semantically equivalent, but in terms of program architecture, there is a huge difference!
In the first program, we have total control of which IO operation executes when, and it would be easy to tweak the details, e.g. to read the input from a file instead of stdin. The cost is that we have to be explicit about which IO operation executes when.
In the second program, the costs and benefits are reversed. We give up that control and let interact make all the decisions, and the benefit is that we don’t have to write any tricky IO code ourselves. We only need to provide the pure function, which is the kind of function we’d rather write anyway.
Pure frameworks
In the object-oriented community, libraries which take responsibility for the overall execution of the program and ask you to fill in the blanks are called “frameworks”. I’d say interact is a framework, albeit a very simple one. Let’s call it a “pure framework”, to distinguish that style from the frameworks in which we fill in the blanks with IO actions, which I’ll call “IO frameworks”. In the previous section, we wrote the same program in two styles: the explicit style and the pure framework style.
If we want to stay pure whenever possible, it would make sense to prefer the pure framework style, and to only use the explicit style when we need more control. Of course, there are many situations in which we do need control. But is that really the criterion we use to determine whether we should write explicit IO actions? Or do we tend to give up as soon as we need IO actions at all?
The purpose of this post is to encourage you to consider the pure framework style more often. For your first project in a particular domain, when you’re glad that somebody else made the hard decisions for you. For short projects and one-off scripts, when you can’t afford or don’t want to spend time tweaking the details. As an architectural pattern, where you write your own pure framework as an imperative shell around your functional core.
This holiday season, bring the magic of Prelude.interact home!
List of pure frameworks
All right, are you excited about pure frameworks? Here is the list of all the pure frameworks I am aware of! I’ll keep it updated as I find more.
- base’s interact: Apply a String -> String function from stdin to stdout.
- gloss’s display: Pan and zoom around a 2D scene described by a pure Picture value.
- gloss’s animate: Same, but with an animated scene, via a function from timestamp to Picture.
- gloss’s simulate: Same, but via a stepping function, which is more convenient when simulating e.g. colliding objects.
- gloss’s play: The user interacts with the simulation via the mouse and keyboard. Useful for games.
- codeworld’s drawingOf: Like gloss’s display, but inside a CodeWorld web page.
- codeworld’s animationOf: Like gloss’s animate, but inside a CodeWorld web page.
- codeworld’s activityOf: Like gloss’s play, but inside a CodeWorld web page, and with access to a random seed.
- codeworld’s groupActivityOf: Same, but for multiplayer online games! More about this later.
To be clear about what belongs in this list: a pure framework is an IO action which
- is intended to cover the entire program. You would not run multiple pure frameworks one after the other to form a longer program, like you would with normal IO actions like putStrLn.
- only takes pure functions and values as arguments. No IO actions.
- dictates the control flow of the program. Interpreting a Free Console to IO would not count, since the control flow is described by the Free Console argument (see the sketch below).
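To make that last criterion concrete, here is a sketch (my own, using a hypothetical Console functor and the Free type from the free package) of the kind of interpreter which does not qualify: it is an IO action which only takes a value as an argument, but the control flow is dictated by that Free Console value, not by the interpreter.
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free(..))
-- a hypothetical Console functor, for illustration only
data Console next
  = PrintLine String next
  | ReadLine (String -> next)
  deriving Functor
-- not a pure framework: the Free Console argument decides what happens next
runConsole :: Free Console a -> IO a
runConsole (Pure a)                  = pure a
runConsole (Free (PrintLine s next)) = putStrLn s >> runConsole next
runConsole (Free (ReadLine k))       = getLine >>= runConsole . k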
Make your own
I’m sure there are more, and that the list will grow soon after publication as readers point out the ones I’ve missed. Still, at the time of writing, the above list is disappointingly short: it only mentions base, gloss, and codeworld.
That’s fine: it just means we need to write more pure frameworks. One way to do that is via the architectural pattern I mentioned: write a program and the pure framework it uses at the same time. This way, we still control the details: we can adapt the pure framework to the needs of this particular program. And once we’re done, we can publish the pure framework separately from the program, so that we can reuse it in endeavours in which we care less about the details.
In the remainder of this post, I will demonstrate this approach for the chat application I described earlier.
Pure framework chat
The code for this section is available in the companion repository.
displayTUI
Let’s start with a Hello World. Not just putStrLn "hello world", but a Terminal User Interface variant which clears the screen and displays “hello world” in the center of the screen until the user presses a key.
imperative version / pure framework version
I could of course do it imperatively, like this:
getScreenSize :: IO (Int, Int)
putStrAt :: (Int, Int) -> String -> IO ()
drawCenteredTextBlock :: [String] -> IO ()
drawCenteredTextBlock ss = do
(ww, hh) <- getScreenSize
let w = maximum (0 : fmap length ss)
let h = length ss
let x = (ww - w) `div` 2
let y = (hh - h) `div` 2
for_ (zip [0..] ss) $ \(i, s) -> do
putStrAt (x, y + i) s
main :: IO ()
main = do
clearScreen
drawCenteredTextBlock ["hello world"]
void waitForKey
But since I want to use the pure framework style, I would prefer to use something like gloss’s Picture to represent a text-based drawing as a value.
data TextPicture
= Text String
| Translated (Int, Int) TextPicture
| Over TextPicture TextPicture
textBlock :: [String] -> TextPicture
textBlock ss
= mconcat [ Translated (0, y) (Text s)
| (y, s) <- zip [0..] ss
]
centeredTextBlock :: [String] -> (Int, Int) -> TextPicture
centeredTextBlock ss (ww, hh)
= Translated (x, y) (textBlock ss)
where
w = maximum (0 : fmap length ss)
h = length ss
x = (ww - w) `div` 2
y = (hh - h) `div` 2
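Note that textBlock combines pictures with mconcat, which assumes a Monoid instance for TextPicture. The companion repository presumably provides one; a minimal sketch of my own, which is lawful up to drawing behaviour, could look like this:
instance Semigroup TextPicture where
  (<>) = Over
instance Monoid TextPicture where
  mempty = Text ""  -- drawing the empty string draws nothing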
I can now write a simple pure framework which displays a TextPicture, similar to gloss’s display but in the terminal instead of a window.
drawTextPicture :: TextPicture -> IO ()
drawTextPicture = go (0, 0)
where
go :: (Int, Int) -> TextPicture -> IO ()
go (x, y) = \case
Text s -> do
putStrAt (x, y) s
Translated (dx, dy) pic -> do
go (x + dx, y + dy) pic
Over pic1 pic2 -> do
go (x, y) pic1
go (x, y) pic2
displayTUI :: ((Int, Int) -> TextPicture) -> IO ()
displayTUI mkTextPicture = do
clearScreen
screenSize <- getScreenSize
drawTextPicture (mkTextPicture screenSize)
void waitForKey
main :: IO ()
main = displayTUI (centeredTextBlock ["hello world"])
drawTextPicture and displayTUI are both IO actions which display a TextPicture and only take pure values as arguments. But I only consider one of them to be a pure framework, so it’s probably worth taking the time to explain why. As I discovered while writing the “to be clear about what belongs in this list” section, it can be difficult to objectively define what does and doesn’t qualify as a pure framework, because the main factor is a question of intent.
When implementing drawTextPicture, I was imagining it being called as one small IO action in a larger program. Perhaps the TUI has some widgets on the left, and the chosen values influence which TextPicture is drawn on the right. With displayTUI, on the other hand, I had my entire program in mind: clear the screen, display “hello world”, and wait until the user presses a key. It’s a short, but complete program, and displayTUI is a generalized version of that program which supports more TextPictures than just “hello world”.
In particular, compare with the variant simpleDisplayTUI :: TextPicture -> IO () which simply takes a TextPicture instead of a function from screen size to TextPicture. If I intended the IO action to be part of a larger program, I would prefer that simpler API. If the caller needs the screen size in order to compute their TextPicture, they can just call getScreenSize themselves, compute the TextPicture, and then pass the result to simpleDisplayTUI. But if the displayTUI call is the entire program, then there is no room left to perform this pre-call computation, and so displayTUI must provide the screen size itself.
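For concreteness, here is what that simpler variant might look like (a sketch of my own; this exact function does not appear in the post's code):
simpleDisplayTUI :: TextPicture -> IO ()
simpleDisplayTUI pic = do
  clearScreen
  drawTextPicture pic
  void waitForKey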
playTUI
Next, let’s make this look like a chat application, with an edit box at the bottom for typing new messages, and a list of recent messages taking up the rest of the screen’s real estate.
imperative version / pure framework version
We’ve already talked about TextPicture, so I’ll omit the details about drawing this UI. Instead, let’s focus on reacting to keyboard input. Here is the imperative version:
data Chat
type Username = String
initialChat :: Chat
addMessage :: Username -> String -> Chat -> Chat
readEditbox :: Chat -> String
handleEditboxKey :: Key -> Maybe (String -> String)
modifyEditbox :: (String -> String) -> Chat -> Chat
renderChat :: Chat -> (Int, Int) -> TextPicture
main :: IO ()
main = do
screenSize <- getScreenSize
flip fix initialChat $ \loop chat -> do
clearScreen
drawTextPicture (renderChat chat screenSize)
waitForKey >>= \case
KEsc -> do
-- quit
pure ()
KEnter -> do
-- add the edit box's message, clear the edit box
loop $ modifyEditbox (const "")
$ addMessage "user" (readEditbox chat)
$ chat
(handleEditboxKey -> Just f) -> do
-- delegate to the edit box
loop $ modifyEditbox f chat
_ -> do
-- unrecognized key; do nothing
loop chat
This flip fix initialValue $ \loop currentValue -> ... is an idiom for
let loop currentValue = do
...
loop initialValue
which I prefer because it puts the initialValue at the beginning instead of at the end of a potentially-long ... block.
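As a standalone illustration of the idiom (my own toy example, not part of the chat program), here is a countdown loop written with flip fix:
import Data.Function (fix)

countdown :: Int -> IO ()
countdown n0 = flip fix n0 $ \loop n ->
  if n <= 0
    then putStrLn "done"   -- stop looping
    else do
      print n              -- do some work for this iteration
      loop (n - 1)         -- loop again with the updated value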
Anyway, let’s turn this into a pure framework by turning the application-specific parts into parameters. Those application-specific parts are:
- Which type of value to keep between loop iterations. gloss calls it the “world”, Elm calls it the “model”.
- How to turn that value into a TextPicture.
- How to transform that value in response to input events.
The result is playTUI, a version of gloss’s play for TUIs.
playTUI
:: world
-> (world -> (Int, Int) -> TextPicture)
-> (world -> Key -> Maybe world)
-> IO ()
playTUI world0 mkTextPicture handleKey = do
screenSize <- getScreenSize
flip fix world0 $ \loop world -> do
clearScreen
drawTextPicture (mkTextPicture world screenSize)
key <- waitForKey
case handleKey world key of
Nothing -> do
-- quit
pure ()
Just world' -> do
loop world'
handleChatKey :: Chat -> Key -> Maybe Chat
handleChatKey chat = \case
KEsc
-- quit
-> Nothing
KEnter
-- add the edit box's message, clear the edit box
-> Just $ modifyEditbox (const "")
$ addMessage "user" (readEditbox chat)
$ chat
(handleEditboxKey -> Just f)
-- delegate to the edit box
-> Just $ modifyEditbox f chat
_ -> Just chat
main :: IO ()
main = playTUI initialChat renderChat handleChatKey
One minor difference between play and playTUI is that my version allows you to return Nothing in response to an event, in order to indicate that the program should terminate. With play, the program terminates when the window is closed, but in the terminal there are no windows to close. Another difference is that playTUI does not ask for a time-has-passed event handler, and thus doesn’t support animations. This is an important feature, but I simply don’t need it for my chat program.
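If your program does need animations, a natural extension would be an extra stepping handler, mirroring gloss's play. Here is a sketch of what its type might look like (my own guess, not something the companion repository provides):
animatedPlayTUI
  :: world
  -> (world -> (Int, Int) -> TextPicture)
  -> (world -> Key -> Maybe world)
  -> (Float -> world -> world)  -- time-has-passed handler, in seconds
  -> IO ()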
Multiple screens
Currently, the user is stuck with the boring username “user”. Let’s give them a chance to pick their own username instead.
imperative version / explicit version / implicit version
In the imperative version, we can display the two screens sequentially: first ask the user to pick a username, and then run the main loop of typing and displaying messages.
pickUsername :: IO Username
chatLoop :: Username -> IO ()
main :: IO ()
main = do
username <- pickUsername
chatLoop username
We could define yet another pure framework in the usual way, by abstracting over the application-specific parts: the type being passed from the first screen to the second, the first screen’s model type, the second screen’s model type, etc. But if you’ve written that kind of Elm-style program before, you know that playTUI is already expressive enough to represent a program with two distinct screens: we just need to pick a sum type for our model, with one constructor for each screen.
data UsernameForm
initialUsernameForm :: UsernameForm
readUsername :: UsernameForm -> Username
modifyUsername :: (Username -> Username) -> UsernameForm -> UsernameForm
renderUsernameForm :: UsernameForm -> (Int, Int) -> TextPicture
handleUsernameFormKey :: UsernameForm -> Key -> Either Username UsernameForm
handleChatLoopKey :: Username -> Chat -> Key -> Maybe Chat
data Program
= UsernameLoop UsernameForm
| ChatLoop Username Chat
initialProgram :: Program
initialProgram = UsernameLoop initialUsernameForm
renderProgram :: Program -> (Int, Int) -> TextPicture
renderProgram = \case
UsernameLoop username
-> renderUsernameForm username
ChatLoop _ chat
-> renderChat chat
handleProgramKey :: Program -> Key -> Maybe Program
handleProgramKey program key = case program of
UsernameLoop usernameForm
-> case handleUsernameFormKey usernameForm key of
Left username
-- the user picked a username; proceed to the chat loop
-> Just $ ChatLoop username initialChat
Right usernameForm'
-- stay in the username form
-> Just $ UsernameLoop usernameForm'
ChatLoop username chat
-> ChatLoop username <$> handleChatLoopKey username chat key
main :: IO ()
main = playTUI initialProgram renderProgram handleProgramKey
One advantage of this approach is that the Program type explicitly lists all the screens which the user can currently be on, and what their local model types are. Each handler’s type also explicitly states which value is produced at the end of the screen, and handleProgramKey exhaustively lists all the ways in which the user may transition from one screen to another. One disadvantage of this approach is that all those things are explicit :)
Sometimes being explicit is good (e.g. for readability), and sometimes being forced to be explicit feels like a lot of boilerplate which is slowing us down. So here is an alternative approach.
data Screen = Screen
{ render :: (Int, Int) -> TextPicture
, handleKey :: Key -> Maybe Screen
}
initialScreen :: Screen
initialScreen = usernameScreen initialUsernameForm
chatLoopScreen :: Username -> Chat -> Screen
usernameScreen :: UsernameForm -> Screen
usernameScreen usernameForm = Screen
{ render = renderUsernameForm usernameForm
, handleKey = \case
KEsc
-- quit
-> Nothing
KEnter
-- the user picked a username; proceed to the chat loop
-> Just $ chatLoopScreen (readUsername usernameForm) initialChat
(handleEditboxKey -> Just f)
-- edit the username
-> Just $ usernameScreen
$ modifyUsername f usernameForm
_ -> Just $ usernameScreen usernameForm
}
main :: IO ()
main = playTUI initialScreen render handleKey
By using the record of functions Screen as our model type, the current screen’s local model type is now hidden inside the closures of those functions. Each handler can thus decide to stay on the current screen by making a recursive call (e.g. when usernameScreen returns a Just $ usernameScreen ...), or to transition to a different screen by returning something else (e.g. when usernameScreen returns Just $ chatLoopScreen ...).
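For completeness, here is a sketch of the chat screen written in the same style (the companion repository's version may differ in its details):
chatLoopScreen :: Username -> Chat -> Screen
chatLoopScreen username chat = Screen
  { render = renderChat chat
  , handleKey = \case
      KEsc
        -- quit
        -> Nothing
      KEnter
        -- add the edit box's message, clear the edit box
        -> Just $ chatLoopScreen username
                $ modifyEditbox (const "")
                $ addMessage username (readEditbox chat)
                $ chat
      (handleEditboxKey -> Just f)
        -- delegate to the edit box
        -> Just $ chatLoopScreen username $ modifyEditbox f chat
      _ -> Just $ chatLoopScreen username chat
  }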
multiplayTUI
clientServerTUI version / multiplayTUI version
Finally, let’s add some networking functionality so that our users can actually chat with each other.
The obvious way to do it would be to implement a variant of playTUI which also accepts a handler for network events:
networkedPlayTUI
:: world
-> (world -> (Int, Int) -> TextPicture)
-> (world -> Key -> Maybe world)
-> (world -> Packet -> Maybe world)
-> IO ()
However, there is a less obvious, but much better API:
multiplayTUI
:: world
-> (world -> Int -> (Int, Int) -> TextPicture)
-> (world -> Int -> Key -> Maybe world)
-> IO ()
The only difference between playTUI and multiplayTUI is that there are extra Int arguments indicating which “player number” (or in our case which chat user) we’re drawing a TextPicture for and which player pressed a key. The advantage of this API is that it makes it easy to write a multi-user program in which all the users see the same state even though the network latency means each of them is likely to receive events in a slightly different order.
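As a toy illustration of the API (my own example, not the chat program), here is a multiplayer counter in which every player sees the same shared state:
main :: IO ()
main = multiplayTUI (0 :: Int)
  (\count _player screenSize ->
     centeredTextBlock ["shared count: " ++ show count] screenSize)
  (\count _player key -> case key of
     KEsc -> Nothing            -- any player can end the shared session
     _    -> Just (count + 1))  -- any other key bumps the shared counter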
This is a trick which comes straight from CodeWorld’s groupActivityOf, and I recommend watching the presentation Lock-step simulation is child’s play which explains the magic behind it.
Of particular importance for my goal of promoting pure frameworks is the fact that the magic relies on the two input functions being pure. This allows groupActivityOf to replay events from an earlier state once it learns of an event it had missed. If the functions were allowed to perform side-effects, then replaying those events would cause those side-effects to occur more often than expected!
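To make that point concrete, here is a sketch (my own, not CodeWorld's actual implementation) of what replaying a reordered event log from an earlier state amounts to when the handler is pure:
import Data.Foldable (foldlM)

-- replaying is just a fold over the canonical event order; since the handler
-- is pure, replaying the same events always reconstructs the same world
replayEvents
  :: world
  -> (world -> Int -> Key -> Maybe world)
  -> [(Int, Key)]  -- (player number, key) pairs, in the agreed-upon order
  -> Maybe world
replayEvents world0 handleKey =
  foldlM (\world (player, key) -> handleKey world player key) world0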
Composing pure frameworks?
The example pure frameworks we’ve seen so far make it clear that composing pure frameworks would be quite desirable. I should be able to combine play with some terminal-specific IO actions in order to construct playTUI, and you should be able to bolt on a time-has-passed handler if your program does need animations.
Unfortunately, pure frameworks do not compose. If we have two pure frameworks, we cannot compose them into a larger one because they both want to take control of the application’s interaction loop, and they can’t both succeed.
That being said, monads don’t compose either, and yet we’ve managed to side-step the problem by composing monad transformers instead. I am confident that if we continue exploring the landscape of pure frameworks, somebody will eventually figure it out.
So, to recap, my calls to action are:
- consider the pure framework style more often!
- use the pure framework architecture, then publish the resulting pure frameworks!
- (stretch goal) figure out how to compose pure frameworks!
More Haskell content
This post is day 7 of the Advent of Haskell 2020 series, a post by a different Haskeller every day. My favourite post so far was Day 5, Processing CodeBlocks in Hakyll. As you can see, my blog looks super old and my code blocks aren’t even syntax-highlighted, so I am looking forward to trying Hakyll and Pandoc to revamp my blog using Haskell!
Thursday, May 28, 2020
Yes, I approve of your license.
Clarifications:
- This applies to all projects, not only to projects which meet some definition of "open source".
- It doesn't matter whether the new license meets some definition of "open source".
- The goal is not for me to receive fewer requests to approve a license change, but to allow the license change to go through even if I cannot be reached.
Sunday, December 31, 2017
N-ary Functors
update: Now available on hackage as n-ary-functor.
Functor and Bifunctor are both in base, but what about Trifunctor? Quadrifunctor? There must be a better solution than creating an infinite tower of typeclasses. Here's the API I managed to implement:
> nmap <#> (+1) <#> (+2) $ (0, 0)
(1,2)
> nmap <#> (+1) <#> (+2) <#> (+3) $ (0, 0, 0)
(1,2,3)
> nmap <#> (+1) <#> (+2) <#> (+3) <#> (+4) $ (0, 0, 0, 0)
(1,2,3,4)
The implementation is really short, too:
{-# LANGUAGE RankNTypes, TypeFamilies, TypeInType #-}
import Data.Kind
newtype NMap1 k (f :: Type -> k) (g :: Type -> k) = NMap1
{ (<#>) :: forall a b. (a -> b) -> NMap k (f a) (g b) }
type family NMap k :: k -> k -> Type where
NMap Type = (->)
NMap (Type -> k) = NMap1 k
class NFunctor (f :: k) where
nmap :: NMap k f f
Of course, the hard part is not writing the code, but figuring out what to write down. Let me show you how I got there.
Computing the Type from the Kind
Since Functor instances are given to type constructors of kind * -> *, and Bifunctor instances are given to type constructors of kind * -> * -> *, my idea was to compute the type of nmap from the kind of the type constructor to which it is applied. Something like this:
class NFunctor (f :: k) where
nmap :: NMap k f
type family NMap k (f :: k) :: *
type instance NMap (* -> *) f
= (a -> b) -> f a -> f b
type instance NMap (* -> * -> *) f
= (a1 -> b1) -> (a2 -> b2) -> f a1 a2 -> f b1 b2
type instance NMap (* -> * -> * -> *) f
= (a1 -> b1) -> (a2 -> b2) -> (a3 -> b3) -> f a1 a2 a3 -> f b1 b2 b3
Except of course with some recursive definition for NMap, so we don't have to spell out the type for every kind. Thinking of it in terms of recursion made me realize that the base case is kind *, not * -> *:
type instance NMap * f
= f -> f
This corresponds to a "nullary Functor" typeclass, whose lawful instances have no choice but to use the identity function. So this isn't particularly useful as a typeclass, but it does lead to a nice recursive definition:
type family NMap k (f :: k) (g :: k) where
NMap Type a b = a -> b
NMap (Type -> k) f g = (a -> b) -> NMap k (f a) (g b)
class NFunctor (f :: k) where
nmap :: NMap k f f
I now have to use Type instead of * for some reason, otherwise I get a "malformed head" error.
Required Newtype Wrapper
Unfortunately, GHC does not accept that recursive definition. First of all, when defining a type family, type variables aren't implicitly universally-quantified like they are in type signatures, so I need to add an explicit forall quantifier:
type family NMap k (f :: k) (g :: k) where
NMap Type a b = a -> b
NMap (Type -> k) f g = forall a b. (a -> b) -> NMap k (f a) (g b)
Now GHC reveals the real problem with the definition:
• Illegal polymorphic type:
forall a b. (a -> b) -> NMap k (f a) (g b)
• In the equations for closed type family ‘NMap’
In the type family declaration for ‘NMap’
This is a bummer: I am simply not allowed to use forall here. The usual workaround, when forall is needed but disallowed, is to define a newtype which performs the forall for us:
newtype NMap1 k (f :: Type -> k) (g :: Type -> k) = NMap1
{ runNMap1 :: forall a b. (a -> b) -> NMap k (f a) (g b) }
type family NMap k :: k -> k -> Type where
NMap Type = (->)
NMap (Type -> k) = NMap1 k
This solves the problem, and even allows me to make my NMap definition more point-free!
Ergonomics
I now have a typeclass which generalizes Functor, Bifunctor, Trifunctor, etc., but what does using this typeclass look like? Writing instances requires a bit of boilerplate, but it's not too bad:
instance NFunctor Maybe where
nmap = NMap1 fmap
instance NFunctor (,) where
nmap = NMap1 $ \f1
-> NMap1 $ \f2
-> \(x1,x2)
-> (f1 x1, f2 x2)
instance NFunctor (,,) where
nmap = NMap1 $ \f1
-> NMap1 $ \f2
-> NMap1 $ \f3
-> \(x1,x2,x3)
-> (f1 x1, f2 x2, f3 x3)
instance NFunctor (,,,) where
nmap = NMap1 $ \f1
-> NMap1 $ \f2
-> NMap1 $ \f3
-> NMap1 $ \f4
-> \(x1,x2,x3,x4)
-> (f1 x1, f2 x2, f3 x3, f4 x4)
When calling nmap, however, the extra boilerplate quickly becomes annoying:
> runNMap1 nmap (+1) $ Just (0)
Just 1
> runNMap1 (runNMap1 nmap (+1)) (+2) (0, 0)
(1,2)
> runNMap1 (runNMap1 (runNMap1 nmap (+1)) (+2)) (+3) (0, 0, 0)
(1,2,3)
> runNMap1 (runNMap1 (runNMap1 (runNMap1 nmap (+1))
| (+2))
| (+3))
| (+4)
| (0, 0, 0, 0)
(1,2,3,4)
The fix is really simple though: by renaming runNMap1 to some left-associative infix operator, say (<#>), the code becomes much more readable!
> nmap <#> (+1) $ Just (0)
Just 1
> nmap <#> (+1) <#> (+2) $ (0, 0)
(1,2)
> nmap <#> (+1) <#> (+2) <#> (+3) $ (0, 0, 0)
(1,2,3)
> nmap <#> (+1) <#> (+2) <#> (+3) <#> (+4) $ (0, 0, 0, 0)
(1,2,3,4)
A Tempting Overlapping Instance
Pairs have both a Bifunctor and a Functor instance. Similarly, quadruples have four NFunctor instances, five if we count the glorified identity function:
-- |
-- >>> nmap <#> (+1) <#> (+2) <#> (+3) <#> (+4) $ (0, 0, 0, 0)
-- (1,2,3,4)
instance NFunctor (,,,) where
nmap = NMap1 $ \f1
-> NMap1 $ \f2
-> NMap1 $ \f3
-> NMap1 $ \f4
-> \(x1,x2,x3,x4)
-> (f1 x1, f2 x2, f3 x3, f4 x4)
-- |
-- >>> nmap <#> (+1) <#> (+2) <#> (+3) $ (0, 0, 0, 0)
-- (0,1,2,3)
instance NFunctor ((,,,) a) where
nmap = NMap1 $ \f2
-> NMap1 $ \f3
-> NMap1 $ \f4
-> \(x1,x2,x3,x4)
-> (x1, f2 x2, f3 x3, f4 x4)
-- |
-- >>> nmap <#> (+1) <#> (+2) $ (0, 0, 0, 0)
-- (0,0,1,2)
instance NFunctor ((,,,) a b) where
nmap = NMap1 $ \f3
-> NMap1 $ \f4
-> \(x1,x2,x3,x4)
-> (x1, x2, f3 x3, f4 x4)
-- |
-- >>> nmap <#> (+1) $ (0, 0, 0, 0)
-- (0,0,0,1)
instance NFunctor ((,,,) a b c) where
nmap = NMap1 $ \f4
-> \(x1,x2,x3,x4)
-> (x1, x2, x3, f4 x4)
-- |
-- >>> nmap (0, 0, 0, 0)
-- (0,0,0,0)
instance NFunctor ((,,,) a b c d) where
nmap = \(x1,x2,x3,x4)
-> (x1, x2, x3, x4)
But if we define the following magical instance:
{-# LANGUAGE FlexibleInstances #-}
instance NFunctor (f :: * -> k) => NFunctor (f a) where
nmap = nmap <#> id
Then we get all five instances for the price of one!
-- |
-- >>> nmap <#> (+1) <#> (+2) <#> (+3) <#> (+4) $ (0, 0, 0, 0)
-- (1,2,3,4)
-- >>> nmap <#> (+1) <#> (+2) <#> (+3) $ (0, 0, 0, 0)
-- (0,1,2,3)
-- >>> nmap <#> (+1) <#> (+2) $ (0, 0, 0, 0)
-- (0,0,1,2)
-- >>> nmap <#> (+1) $ (0, 0, 0, 0)
-- (0,0,0,1)
-- >>> nmap (0, 0, 0, 0)
-- (0,0,0,0)
instance NFunctor (,,,) where
nmap = NMap1 $ \f1
-> NMap1 $ \f2
-> NMap1 $ \f3
-> NMap1 $ \f4
-> \(x1,x2,x3,x4)
-> (f1 x1, f2 x2, f3 x3, f4 x4)
The big problem with that magic instance is that it overlaps with other instances we would like to define. For example, we don't want to define the NFunctor instance for State s in terms of the NFunctor instance for State, because State is not functorial in s, so it doesn't have such an instance. Oh well.
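If we drop the magic instance, the instance we actually want for State s is straightforward to write by hand, in the same style as the Maybe instance above (a sketch of my own, assuming the State type from Control.Monad.State in mtl):
{-# LANGUAGE FlexibleInstances #-}
import Control.Monad.State (State)

-- State s has kind Type -> Type, so its nmap wraps a single fmap
instance NFunctor (State s) where
  nmap = NMap1 fmap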
Friday, November 03, 2017
Computing with Impossible Types
Edward Kmett recently posted a puzzling gist seemingly showing that at the type level, the () kind has more than one inhabitant. The goal of this post is to explain what's going on.
Stuck Type Expressions
Here is a simple type family.
{-# LANGUAGE TypeFamilies, UndecidableInstances #-}
type family F a where
F (Maybe a) = [F a]
F a = a
Since F (Maybe Int) and [F Int] both evaluate to [Int], the following type-checks.
-- |
-- >>> :kind! F (Maybe Int) -> [F Int]
-- F (Maybe Int) -> [F Int] :: *
-- = [Int] -> [Int]
runFMaybeInt :: F (Maybe Int) -> [F Int]
runFMaybeInt = id
We didn't use any Int-specific code, so let's make the type more polymorphic.
-- |
-- >>> :set -XRankNTypes
-- >>> :kind! forall b. F (Maybe b) -> [F b]
-- forall b. F (Maybe b) -> [F b] :: *
-- = [F b] -> [F b]
runFMaybe :: Proxy b -> F (Maybe b) -> [F b]
runFMaybe _ = id
Notice that F (Maybe b) and [F b] both evaluate to [F b], not to [b]! That's because we don't yet know whether b is going to be instantiated with a Maybe something or not, so unlike F Int, the type expression F b cannot be simplified further. The evaluation of F b is stuck, and will remain so until we learn more information about b. The code still type-checks though, because even though we don't know which concrete type F b will expand to, we do know that [F b] and [F b] will expand to the same type because they are the same type expression.
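Once b is instantiated, the stuck expression resolves; for example, picking b = Maybe Int (my own check, in the same :kind! style as above):
>>> :kind! F (Maybe (Maybe Int)) -> [F (Maybe Int)]
F (Maybe (Maybe Int)) -> [F (Maybe Int)] :: *
= [[Int]] -> [[Int]]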
Pattern-Matching on the Shape
Here is another type family.
type family G a where
G (f a) = [G a]
G a = Double
This time, the type family isn't pattern-matching on whether or not its input type is a Maybe something, but on whether or not it is a type which, like Maybe Int, consists of a type constructor applied to a type. Let's look at a concrete example:
-- |
-- >>> :kind! G (Maybe Int) -> [G Int]
-- G (Maybe Int) -> [G Int] :: *
-- = [Double] -> [Double]
runGMaybeInt :: G (Maybe Int) -> [G Int]
runGMaybeInt = id
No surprises there. Let's make the type more polymorphic:
-- |
-- >>> :kind! forall g b. G (g b) -> [G b]
-- forall g b. G (g b) -> [G b] :: *
-- = [G b] -> [G b]
runGMaybe :: Proxy (g b) -> G (g b) -> [G b]
runGMaybe _ = id
As before, the type expression G b is stuck because we don't yet know whether b is going to be instantiated to a type with the right shape such as Maybe Int, or to a type with a different shape such as Int. But regardless of which one it is, [G b] and [G b] will both expand to the same type, so the implementation type-checks.
One last example:
>>> :kind! forall b. G (G b) -> [G b]
forall b. G (G b) -> [G b] :: *
= G (G b) -> [G b]
Note that G (G b) did not simplify! G b might look like it has the right shape to match g b, but it doesn't, because G is a type family, not a type constructor. It's a good thing it doesn't match, because if it did, evaluating type expressions like G (G Int) wouldn't be confluent! If we evaluate the outer application first, we get [G Int] and then [Double], whereas if we evaluate the inner application first, we get G Double and then Double.
To be clear, evaluating the outer application first doesn't work because we don't yet know whether the type expression G Int will evaluate to something of the form f a or not. So the inner application is evaluated first, and G (G Int) evaluates to Double.
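We can confirm this in GHCi, in the same :kind! style as above:
>>> :kind! G (G Int)
G (G Int) :: *
= Double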
Two Arrow-Like Kinds
G and Maybe both seem to have kind * -> *:
>>> :kind! G
G :: * -> *
>>> :kind! Maybe
Maybe :: * -> *
But that's misleading, because there are some circumstances in which a type of kind * -> * is expected but only Maybe will be accepted:
class MyMonad (m :: * -> *)
-- ok
instance MyMonad Maybe
-- error: The type family ‘G’ should have 1 argument,
-- but has been given none
instance MyMonad G
And there are other circumstances in which both G and Maybe will be accepted:
{-# LANGUAGE DataKinds, TypeOperators #-}
-- |
-- >>> :kind! FMap Maybe '[Int, Double]
-- FMap Maybe '[Int, Double] :: [*]
-- = '[Maybe Int, Maybe Double]
--
-- >>> :kind! FMap G '[Int, Double]
-- FMap G '[Int, Double] :: [*]
-- = '[Double, Double]
type family FMap (f :: * -> *) as where
FMap f '[] = '[]
FMap f (a ': as) = (f a ': FMap f as)
So I prefer to pretend that there are two different arrow-like kind constructors:
- (-->) for type functions which can be applied to a type argument. G and Maybe both have kind * --> *. Just like it isn't possible to pattern-match on a type variable, it is not possible to pattern-match on a type expression whose head is a (-->), we must instead apply the type function and pattern-match on the result.
- (->) for type constructors which can be pattern-matched on. Maybe has kind * -> *, but G does not. Clearly, (->) is a subtype of (-->).
Now we can make sense of the previous examples. Instance resolution works by pattern-matching on types, so MyMonad expects a * -> *, not a * --> *. Since G has the wrong kind, it cannot be given a MyMonad instance. FMap, on the other hand, only needs to apply its f to various as, so it expects an * --> * such as G. Since * -> * is a subtype of * --> *, FMap can also be applied to Maybe.
edit: Apparently :kind! is misleading here. Outside of :kind!, FMap accepts Maybe but not G. So the situation is simpler than I thought: (-->) is for type families, (->) is for type constructors, and those are completely different arrow kinds, there is no subtyping relationship between the two. There is no way to ask for an argument of kind * --> *, because if we try to pass an "unsaturated" argument with that kind, G for example, GHC will complain that G is missing arguments. So MyMonad and FMap both expect an argument of kind * -> *, not * --> *.
Unusual Type Families
Here are a few surprising, yet legal type families.
-- |
-- >>> :kind! H1 ('Just '())
-- H1 ('Just '()) :: * -> *
-- = Maybe
type family H1 (a :: Maybe ()) :: * -> * where
H1 ('Just a) = Maybe
H1 'Nothing = IO
H1's input has kind Maybe (), not *, and its output has kind * -> *, not *. Note that it's really * -> *, not * --> *, so G is not a valid output. Overall, the kind of H1 is thus Maybe () --> * -> *.
-- |
-- >>> :kind! H2
-- H2 :: *
-- = Int
type family H2 where
H2 = Int
H2 has no type parameters, so its kind is *, not * --> *. If it returned Maybe instead of Int, its kind would be * -> * instead. A type family's kind can be either * --> * or * -> * depending on how it's defined, so it's not as simple as "type constructors use (->), type families use (-->)".
Combining both ideas together:
type family J :: () -> Maybe () where
J = 'Just
J's kind is () -> Maybe (), so it has to return a type constructor which accepts a type of kind () and produces a type of kind Maybe (). There are only two types which have the kind Maybe (): the type 'Nothing, and the type 'Just '(). 'Nothing has the wrong kind, since it doesn't accept a type of kind (), but 'Just is just right.
One last complication:
-- |
-- >>> :kind! H3 Int
-- H3 Int :: *
-- = H3 Int
type family H3 a where
H3 has no equations defining what happens when it is applied to a type argument. As a result, the type expression H3 Int remains stuck even though it doesn't contain any type variables.
Combining everything together:
type family Succ :: () -> () where
Succ pretends that it can produce a type constructor which accepts a type of kind () and produces a type of kind (). This is ridiculous! We know that '() is the only type of kind (), and like 'Nothing, it has the wrong kind because it doesn't accept a type of kind (). There are no valid types which Succ could return, so unsurprisingly it has no equations, and so Succ is a type expression which always remains stuck.
Ignoring Impossible Types
The type expression Succ '() is stuck, but well-kinded. It has kind ().
That's the kind which 'Just :: () -> Maybe () expects. Thus, 'Just (Succ '()) is also stuck and well-kinded. It has kind Maybe ().
That's the kind which our H1 :: Maybe () --> * -> * type family from earlier expects. Is H1 ('Just (Succ '())) stuck as well?
>>> :kind! H1 ('Just (Succ '()))
H1 ('Just (Succ '())) :: * -> *
= Maybe
Not stuck! That's because H1 ignores the part which is stuck. Its pattern is ('Just a), so it pattern-matches on the 'Just constructor, but it ignores its argument. If its pattern was ('Just '()) instead, it would have been stuck.
Here comes the clever part: it is possible to write a type family which pattern-matches on the '() but ignores the stuck Succ part.
-- |
-- >>> :kind! IsSucc (Succ '())
-- IsSucc (Succ '()) :: Bool
-- = 'True
--
-- >>> :kind! IsSucc '()
-- IsSucc '() :: Bool
-- = 'False
type family IsSucc (a :: ()) :: Bool where
IsSucc (succ '()) = 'True
IsSucc '() = 'False
The trick is to do like G and pattern-match on the shape, not the contents.
Computing with Impossible Types
It is also possible to distinguish the two inhabitants using a typeclass instead of a type family:
{-# language FlexibleInstances #-}
import Data.Proxy
-- |
-- >>> isSucc (Proxy :: Proxy (Succ '()))
-- True
-- >>> isSucc (Proxy :: Proxy '())
-- False
class IsSucc (a :: ()) where
isSucc :: Proxy a -> Bool
instance IsSucc (succ '()) where
isSucc _ = True
instance IsSucc '() where
isSucc _ = False
The fact that this works is surprising, because () is supposed to be a closed kind with only one inhabitant, '(), and yet here we seemingly have a second inhabitant, Succ '(), which can be distinguished from '() even though it is stuck. And as you might surmise from its name, we can manufacture many more inhabitants: Succ (Succ '()), Succ (Succ (Succ '())), etc.
{-# language ScopedTypeVariables #-}
-- |
-- >>> countSuccs (Proxy :: Proxy '())
-- 0
-- >>> countSuccs (Proxy :: Proxy (Succ '()))
-- 1
-- >>> countSuccs (Proxy :: Proxy (Succ (Succ (Succ '()))))
-- 3
class CountSuccs (a :: ()) where
countSuccs :: Proxy a -> Int
instance CountSuccs '() where
countSuccs _ = 0
instance CountSuccs a => CountSuccs (succ a) where
countSuccs _ = 1 + countSuccs (Proxy :: Proxy a)
Those examples show how to compute booleans and integers from a stuck type expression containing Succs. Using polymorphic recursion, it is also possible to go the other way, from an integer to a stuck type expression containing that many Succs:
{-# language RankNTypes #-}
-- |
-- >>> mkSuccs 42 countSuccs
-- 42
mkSuccs :: Int -> (forall a. CountSuccs a => Proxy a -> r) -> r
mkSuccs 0 cc = cc (Proxy :: Proxy '())
mkSuccs n cc = mkSuccs (n - 1) $ \(Proxy :: Proxy a)
-> cc (Proxy :: Proxy (Succ a))
Since Haskell doesn't have dependent types, the output type is independent of the integer, so we cannot directly return the stuck type as an output. Instead, we use continuation-passing-style to accept a polymorphic continuation which produces an r regardless of which stuck type we instantiate it at.
When we use countSuccs as the continuation, this r is an integer, and the integer it computes is the number of Succs. So we start with n, we convert it to a stuck type containing n Succs, and then we count those Succs and get n back. This is a very simple example of a computation which relies on the existence of those seemingly-impossible non-'() inhabitants of () in order to compute its result: if there was only one type of kind (), the integer would be lost during the conversion to Proxy (a :: ()), and we would not be able to get that same integer back at the end.
Full Circle
Now that we have seen and understood each of the pieces individually, we are now ready to marvel at Kmett's creation:
{-# language PolyKinds #-}
import Data.Proxy
class KnownUnit (k :: ()) where
reflect :: Proxy k -> Int
instance KnownUnit '() where
reflect _ = 0
instance KnownUnit x => KnownUnit (f x) where
reflect _ = 1 + reflect (Proxy :: Proxy x)
type family Succ :: k -> k
-- |
-- >>> reify 23 reflect
-- 23
reify :: Int -> (forall k. KnownUnit k => Proxy k -> r) -> r
reify 0 f = f (Proxy :: Proxy '())
reify n f = reify (n - 1) $ \(Proxy :: Proxy k)
-> f (Proxy :: Proxy (Succ k))
Neat!
...or worrisome?
Accepting Impossible Types
We Haskellers like to use precise types in order to make illegal states unrepresentable. We accept, reluctantly, that ⊥ inhabits all types, so () doesn't really have exactly one possible value. But it does have exactly one possible total value, and if we write a function whose type signature expects a (), that's the value which this function expects to receive. And so, most functions don't document what their behaviour is on ⊥ inputs, and nobody complains, because they know they're not supposed to use ⊥ inputs.
DataKinds allows us to use precise kinds, and thus to make illegal types unrepresentable. We don't often think about them, but stuck type expressions also inhabit all kinds, so there isn't really only one type of kind (). Today we saw that some of those extra inhabitants are really weird. That's an interesting quirk of Haskell's type system, but ultimately, I don't think those weird inhabitants are any more worrisome than their less exotic cousins, the stuck type expressions which contain type variables. After all, there is only one total type of kind (), and when we write a type-level function (or an instance) which expects a (), that's the type we expect.
Saturday, October 14, 2017
Composing Declarations in Template Haskell
I have recently tried to use Template Haskell to generate both a datatype and lenses for accessing the fields of this datatype, and it was harder than it should have been. In this post, I will demonstrate the problem, I will pinpoint its cause, and I will propose a solution.
The Problem
Consider the following code. I'm using a simple, contrived example instead of a more realistic one because it will be easier to write Template Haskell code for generating this silly code than it would be to write Template Haskell code generating lenses and such.
class Default a where
def :: a
data X = X
data Y = Y X
data DoubledX = DoubledX
data DoubledY = DoubledY X X
instance Default X where def = X
instance Default Y where def = Y def
instance Default DoubledX where def = DoubledX
instance Default DoubledY where def = DoubledY def def
Most of that code is boilerplate, and I would like to generate that boilerplate using Template Haskell. I hasten to note that Template Haskell is a tool of last resort, to be used only when none of Haskell's many other abstraction facilities would have sufficed. In this example, I would probably use some generics library to define a default implementation of def for any algebraic datatype:
{-# LANGUAGE DefaultSignatures, DeriveGeneric, FlexibleContexts #-}
import Generics.Eot
class Default a where
def :: a
default def :: (HasEot a, Default (Eot a)) => a
def = fromEot def
instance Default () where
def = ()
instance (Default a, Default b) => Default (a, b) where
def = (def, def)
instance Default a => Default (Either a b) where
def = Left def
instance Default X
instance Default Y
instance Default DoubledX
instance Default DoubledY
This works fine, but today I want to talk about one of Template Haskell's limitations, so let's write a Template Haskell implementation instead.
{-# LANGUAGE TemplateHaskell #-}
import Data.List
import Language.Haskell.TH
-- > data Foo = Bar Int String
-- > generateDefault ''Foo
--
-- generates
--
-- > instance Default Foo where def = Bar def def
generateDefault :: Name -> Q [Dec]
generateDefault name = do
-- data Foo = Bar Int...
TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) <- reify name
-- Bar def...
let expr = foldl' (\c _ -> [|$c def|]) (conE cname) cargs
[d|instance Default $(conT name) where def = $expr|]
data X = X
data Y = Y X
data DoubledX = DoubledX
data DoubledY = DoubledY X X
generateDefault ''X
generateDefault ''Y
generateDefault ''DoubledX
generateDefault ''DoubledY
In addition to the Default instances, we can also generate the Doubled datatypes: a version of the original datatype which has two copies of each field:
-- > data Foo = Bar Int String
-- > generateDoubled ''Foo
--
-- generates
--
-- > data DoubledFoo = DoubledBar Int String Int String
generateDoubled :: Name -> Q [Dec]
generateDoubled name = do
-- data Foo = Bar Int...
TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) <- reify name
let cons = [NormalC (doubledName cname) (cargs ++ cargs)]
pure [DataD [] (doubledName name) [] Nothing cons []]
doubledName :: Name -> Name
doubledName = mkName . ("Doubled" ++) . nameBase
data X = X
data Y = Y X
generateDoubled ''X
generateDoubled ''Y
generateDefault ''X
generateDefault ''Y
generateDefault ''DoubledX
generateDefault ''DoubledY
So, we can write a Template Haskell function which generates a datatype, and we can write one which generates an instance for that datatype. But can we write one which generates both the datatype and its instance? Both of our functions are Q actions which produce a [Dec], so it looks like a no-brainer: we can simply run both Q actions one after the other and concatenate the resulting lists.
generateBoth :: Name -> Q [Dec]
generateBoth name = (++) <$> generateDoubled name
<*> generateDefault (doubledName name)
data X = X
-- error: ‘DoubledX’ is not in scope at a reify
generateBoth ''X
Sadness, it does not work :(
The Cause
The reason DoubledX is not in scope when generateDefault calls reify ''DoubledX is that the DoubledX datatype is not created as a side-effect of the generateDoubled ''X action, but as a side-effect of splicing the resulting [Dec] into the surrounding code. When concatenating the two lists, this doesn't happen until after both lists have been generated, and so DoubledX cannot be "reified" while generating the second list.
I didn't think I'd ever say something like this, but: this pure API was a mistake; an API based on side-effects would be better! I'll qualify that statement though: since reify obtains information about datatypes (and other named entities) via a side-effect, namely reading from some global symbol table, I think there should be a corresponding action for adding new names to this table. As we have seen, with the current API in which names are added by returning a pure [Dec] value, declaration templates don't compose, so I think that API was a mistake.
I should note that there is, in fact, an action for adding new declarations as a side-effect:
addTopDecls :: [Dec] -> Q ()
Unfortunately, as of this writing, addTopDecls is unable to add datatypes:
import Language.Haskell.TH.Syntax
generateBoth :: Name -> Q [Dec]
generateBoth name = do
decs <- generateDoubled name
addTopDecls decs
generateDefault (doubledName name)
data X = X
-- error: Only function, value, annotation, and foreign import
-- declarations may be added with addTopDecl
-- error: ‘DoubledX’ is not in scope at a reify
generateBoth ''X
So the real solution would be to fix the implementation so it also supports datatypes, but until then, I have a simpler solution.
Simple Solution
We can't change declaration-splicing's Q [Dec] API, but if we create our own API, we can write an adapter which turns it into a Q [Dec].
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State  -- StateT and friends, from the mtl package
newtype LocalQ a = LocalQ { unLocalQ :: StateT [Dec] Q a }
  deriving (Functor, Applicative, Monad, MonadFail)
runLocalQ :: LocalQ () -> Q [Dec]
runLocalQ = flip execStateT [] . unLocalQ
The idea is that in addition to Q's global symbol table, LocalQ also has access to a local symbol table holding the declarations which have been added within the LocalQ computation.
addLocalDecls :: [Dec] -> LocalQ ()
addLocalDecls decls = LocalQ $ modify (++ decls)
reifyLocallyFirst :: Name -> LocalQ Info
reifyLocallyFirst name = do
decls <- LocalQ get
case find (hasName name) decls of
Just dec -> pure $ TyConI dec
Nothing -> liftQ $ reify name
-- for this simple example, I'm only interested
-- in datatype declarations
hasName :: Name -> Dec -> Bool
hasName expected (DataD _ actual _ _ _ _) = actual == expected
hasName _ _ = False
liftQ :: Q a -> LocalQ a
liftQ = LocalQ . lift
If we reimplement our declaration templates to use this new, better API...
locallyGenerateDefault :: Name -> LocalQ ()
locallyGenerateDefault name = do
info <- reifyLocallyFirst name
let TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) = info
let expr = foldl' (\c _ -> [|$c def|]) (conE cname) cargs
decls <- liftQ [d|instance Default $(conT name) where def = $expr|]
addLocalDecls decls
locallyGenerateDoubled :: Name -> LocalQ ()
locallyGenerateDoubled name = do
info <- reifyLocallyFirst name
let TyConI (DataD _ _ _ _ (NormalC cname cargs:_) _) = info
let cons = [NormalC (doubledName cname) (cargs ++ cargs)]
addLocalDecls [DataD [] (doubledName name) [] Nothing cons []]
...then this time we can compose them just fine:
locallyGenerateBoth :: Name -> LocalQ ()
locallyGenerateBoth name = do
locallyGenerateDoubled name
locallyGenerateDefault (doubledName name)
data X = X
data Y = Y X
runLocalQ $ locallyGenerateDefault ''X
runLocalQ $ locallyGenerateDefault ''Y
runLocalQ $ locallyGenerateBoth ''X
runLocalQ $ locallyGenerateBoth ''Y
Happiness, it works! Now all that's left is to convince everybody to rewrite their declaration templates using LocalQ instead of Q, and we'll finally be able to reuse each other's code.
Final Solution
Okay, so that last part is probably not going to happen. If only there was a way to monkey-patch existing Q code so it would use reifyLocallyFirst instead of reify...
Well, here's a little-known fact about Q:
newtype Q a = Q { unQ :: forall m. Quasi m => m a }
That's right, Q isn't some magic Monad which only the compiler can execute! It's a concrete type, which we can examine and manipulate however we want. The finally-tagless encoding might be a bit intimidating, but in practice, a Q a value is basically an AST listing which Q actions need to be performed in order to produce an a. So we should be able to dive in and replace all the reify calls with reifyLocallyFirst calls, no problem.
The finally-tagless way to do that is to write a Quasi instance which instantiates reify with reifyLocallyFirst, and delegates all the other operations to some other Quasi instance:
instance Quasi LocalQ where
qReify = reifyLocallyFirst
qAddTopDecls = addLocalDecls
-- Most of those aren't being exercised by my simple example,
-- so I can afford to use 'undefined' for the trickier methods.
qGetQ = undefined
qPutQ = undefined
qRecover = undefined
qAddDependentFile x = liftQ $ qAddDependentFile x
qAddModFinalizer x = liftQ $ qAddModFinalizer x
qExtsEnabled = liftQ $ qExtsEnabled
qIsExtEnabled x = liftQ $ qIsExtEnabled x
qLocation = liftQ $ qLocation
qLookupName x y = liftQ $ qLookupName x y
qNewName x = liftQ $ qNewName x
qReifyAnnotations x = liftQ $ qReifyAnnotations x
qReifyConStrictness x = liftQ $ qReifyConStrictness x
qReifyFixity x = liftQ $ qReifyFixity x
qReifyInstances x y = liftQ $ qReifyInstances x y
qReifyModule x = liftQ $ qReifyModule x
qReifyRoles x = liftQ $ qReifyRoles x
qReport x y = liftQ $ qReport x y
qRunIO x = liftQ $ qRunIO x
With this instance, I can now transform Q [Dec] templates into LocalQ () templates, and the transformed version will use reifyLocallyFirst instead of reify.
localize :: Q [Dec] -> LocalQ ()
localize declarationTemplate = do
decls <- unQ declarationTemplate
addLocalDecls decls
generateBoth :: Name -> Q [Dec]
generateBoth name = runLocalQ $ do
localize $ generateDoubled name
localize $ generateDefault (doubledName name)
data X = X
data Y = Y X
generateDefault ''X
generateDefault ''Y
generateBoth ''X
generateBoth ''Y
Notice that I am reusing the original generateDefault and generateDoubled implementations, those which caused the reify error when I first tried to implement generateBoth. I am not using the locallyGenerateDefault and locallyGenerateDoubled reimplementations from the previous section. This means that (with a more fleshed out implementation of LocalQ), I should be able to reuse any existing declaration template out there, including Control.Lens.TH.makeLenses! :D
Saturday, December 26, 2015
A whirlwind tour of Haskell, day 8
Day 8, functionally: pattern-matching
Today's puzzle involves parsing: we are given a string literal containing escape sequences, and we must interpret those escape sequences into the characters they represent. Or at least into the correct number of characters, whatever.
In my puzzle solutions so far, I have always pattern-matched at most one level deep, meaning that the pattern if any was always of the form (Constructor arg1 arg2 ...). Instead of binding those arguments to variables, I can also choose to nest more constructors. When pattern-matching on the list of characters of a string, nesting multiple patterns allows me to match on several characters at once, making it easy to recognize each of the three escape sequences specified in the puzzle using one pattern each.
import Data.Char
-- |
-- >>> print (parse "\"(\x20\\\"\x20)\"")
-- "( \" )"
parse :: String -> String
parse ('"':xs0) = go xs0
where
go [] = error "unterminated string literal"
go ('"' :[]) = []
go ('\\':'\\' :xs) = '\\' : go xs
go ('\\':'"' :xs) = '"' : go xs
go ('\\':'x':x1:x2:xs) = hex x1 x2 : go xs
go (x :xs) = x : go xs
parse _ = error "not a string literal"
hex :: Char -> Char -> Char
hex c1 c2 = chr (16 * digitToInt c1 + digitToInt c2)
If we were writing an imperative implementation, we would probably have some kind of current index into the string which we would increment by 1, 2, or 4 characters depending on the pattern. It would be easy to accidentally increment that index by the wrong number of characters. By contrast, notice how the pattern-matching approach is less error-prone, as it naturally encourages us to give a name to the part of the string which isn't covered by each pattern.
Day 8.5, combinatorially: invertible-syntax
Once the second part is revealed, we can see that the complete puzzle consists of two related tasks: parsing a string literal into a string value, and pretty-printing a string value into a string literal. Whether imperative or functional, functions convert inputs to outputs, and so it would be natural to implement both directions as two separate functions. But why restrict ourselves to functions? While Haskell is best known for being a purely functional programming language, "functional" programming does not imply that everything is a function. In fact, we regularly construct other abstractions, new kinds of building blocks which can be composed into larger ones: the "combinator libraries" I briefly mentioned on Day 1. So let's put on our combinatorial glasses, step away from functions for a bit, and ask ourselves: What kind of building blocks would be useful for this problem? How can those building blocks be composed into larger building blocks?
The invertible-syntax library proposes the following approach: each building block shall know how to parse and to pretty-print a value of a particular type, for example a character, and those building blocks can be combined into larger building blocks that know how to parse and pretty-print more complicated values, for example a string.
As usual, I'll write doctest comments above each definition to demonstrate what each building block does. Since I almost always want to demonstrate that a string s parses into a value x and that this same value x pretty-prints into the same string s, I'll use the following helper function to test both directions at once.
{-# LANGUAGE RankNTypes #-}
import Prelude hiding (print, pure, (<$>), (<*), (<*>), (*>))
import Text.Syntax
import Text.Syntax.Parser.Naive
import Text.Syntax.Printer.Naive
-- |
-- >>> test (pure 42) "" 42
-- True
-- >>> test (text "foo") "foo" ()
-- True
test :: Eq a => (forall f. Syntax f => f a) -> String -> a -> Bool
test block s x = (parsedValue == x) && (printedString == Just s)
where
parsedValue = head (parse block s)
printedString = print block x
The two primitive building blocks I have demonstrated in the doctest comment are dual to each other: pure consumes no characters when parsing and always produces the same value, while text always consumes the same characters and doesn't produce anything useful as its value. They can be combined to describe "atoms", that is, a particular value which is represented using a particular string. Useful atoms for this puzzle include the backslash and double quote characters, which are represented using their respective escape sequences.
atom :: (Eq a, Syntax f) => String -> a -> f a
atom s x = pure x <* text s
-- |
-- >>> test backslash "\\\\" '\\'
-- True
backslash :: Syntax f => f Char
backslash = atom "\\\\" '\\'
-- |
-- >>> test doubleQuote "\\\"" '\"'
-- True
doubleQuote :: Syntax f => f Char
doubleQuote = atom "\\\"" '\"'
The hexadecimal escape sequence is a bit more complicated. I'll build it out of smaller atoms, one for each hexadecimal digit:
type Word4 = Int
type Word8 = Int
-- |
-- >>> test (hexDigits !! 12) "c" 12
-- True
hexDigits :: Syntax f => [f Word4]
hexDigits = [atom [intToDigit i] i | i <- [0..15]]
oneOf :: Syntax f => [f a] -> f a
oneOf = foldr (<|>) empty
-- |
-- >>> test hexDigit "a" 10
-- True
hexDigit :: Syntax f => f Word4
hexDigit = oneOf hexDigits
I've used the binary choice combinator (<|>) from invertible-syntax to construct an n-ary choice operator oneOf acting on lists, and I used it to combine my atoms into a more useful building block hexDigit which can parse and pretty-print any hex digit.
Next, I'd like to combine two of those into a full byte, but with only the invertible-syntax combinators, I can only combine them into a pair of nibbles (a number between 0 and 15):
-- |
-- >>> test nibbles "6f" (6, 15)
-- True
nibbles :: Syntax f => f (Word4, Word4)
nibbles = hexDigit <*> hexDigit
Given a pair of nibbles, I can easily compute the value of the corresponding byte. However, since our building blocks are going in both directions, we will also need a formula for splitting a byte into a pair of nibbles. This way, the parsing process will extract two nibbles and merge them into a byte, and the pretty-printing process will split the byte into two nibbles and will pretty-print each one.
import Control.Isomorphism.Partial.Prim
import Control.Isomorphism.Partial.Unsafe
iso :: (a -> b) -> (b -> a) -> Iso a b
iso f g = Iso (Just . f) (Just . g)
mergeNibbles :: Iso (Word4, Word4) Word8
mergeNibbles = iso (\(x,y) -> x * 16 + y) (`divMod` 16)
-- |
-- >>> test byte "6f" 111
-- True
byte :: Syntax f => f Word8
byte = mergeNibbles <$> nibbles
-- |
-- >>> test hexChar "\\x6f" 'o'
-- True
hexChar :: Syntax f => f Char
hexChar = iso chr ord <$> text "\\x" *> byte
invertible-syntax has a companion library called partial-isomorphisms for constructing such pairs of inverse functions. It is a combinator library in its own right, with many primitives and combinators ensuring that the resulting functions really are inverses of each other, but in this case I prefer to use the unsafe portion of the library, which allows me to specify an arbitrary pair of functions and to tell the library to trust me that those are inverses of each other.
Now that we have covered all the escape sequences, the last kind of character which could occur in a string is a non-escaped character.
-- |
-- >>> test otherChar "o" 'o'
-- True
otherChar :: Syntax f => f Char
otherChar = subset (`notElem` ['\\','\"']) <$> token
subset builds a partial isomorphism by taking the identity isomorphism which doesn't change the value in either direction, and restricting its domain to the values satisfying a predicate. In this case, we don't want backslash and double quote characters to be included in the domain of non-escaped characters, because those characters do need to be escaped.
So far I've been talking about functions going in two directions as if the two functions were true inverses of each other, but in fact as the name implies, partial isomorphisms are only partial inverses. The reason we don't need the functions to be exact inverses is that a given value may have several different, equally valid textual representations. For example, the character 'o' can be represented either as "o" or as "\x6f", and both encodings are valid. But when we convert those two representations to the single value 'o' and then back to a string, the function which converts 'o' into a string can only choose one answer! Suppose it chooses "o": the other value, "\x6f", demonstrates that the functions are not inverses, because applying one function to "\x6f" yields 'o' but applying the other function to 'o' does not yield back "\x6f".
Now that we have building blocks for every kind of character, we can combine them into one more complicated building block which knows how to parse and pretty-print any character.
-- |
-- >>> test char "o" 'o'
-- True
-- >>> head (parse char "\x6f")
-- 'o'
-- >>> print char 'o'
-- Just "o"
char :: Syntax f => f Char
char = oneOf [backslash, doubleQuote, otherChar, hexChar]
While parsing, escaped backslashes and double quotes will be parsed into backslashes and double quotes, non-escaped characters will be parsed into themselves, and escaped hexadecimal codes will be parsed into the corresponding ascii character. When pretty-printing, however, no characters will be printed in hexadecimal, because every character is either a backslash, a double quote, or in the domain of otherChar. So the pretty printer will never fall back to hexChar, and 'o' will be printed as "o", not "\x6f". We could of course have chosen otherwise by ordering the list of building blocks differently.
Now that we can handle characters, we're finally ready to describe how to handle an entire string.
-- |
-- >>> test string "\"( \\\" )\"" "( \" )"
-- True
-- >>> head (parse string "\"(\x20\\\"\x20)\"")
-- "( \" )"
-- >>> print string "( \" )"
-- Just "\"( \\\" )\""
string :: Syntax f => f String
string = between (text "\"") (text "\"") (many char)
This approach ended up requiring much more code than simply writing separate functions for parsing and pretty-printing strings. It is, however, much less error-prone: if we end up changing the way in which strings can be represented, for example by supporting more kinds of escape sequences, we will only have to modify the code once. In contrast, with two separate functions, it would be easy to forget to update the other function, or to accidentally modify the functions in a way which causes them to no longer be partial inverses.
Navigation
- About this series
- Previous day: laziness-based caching and meta-programming