Wednesday, April 30, 2008

Choice with Monads: List, Maybe, Either

We've seen up to now what monads are and how they can be useful in simple ways, such as doing the heavy lifting for recursive addition. That is, monads are useful for housekeeping when you have more work than a computer can handle in a straightforward manner. Such work is called deterministic, meaning that the computation proceeds in only one way, and, as we have seen, monads can be helpful with this kind of computation. But monads can also be helpful with nondeterministic computations, or computations that proceed along multiple possible paths, as we shall see.

Haskell comes with three monadic types that are commonly used for nondeterministic computation: the Maybe monad, the list data type, and a new one, the Either type.

We saw the first one in the previous post: the Maybe monad. This type has two constructors: Nothing and Just x (where x is the specific value of the computation). The Maybe monad is illustrated by the two dialogues below:

Scenario 1
Waiter: How is the pork chop, can I get you anything to go with that?
Custamah: Oh, Nothing for me, thanks.
Waiter: Wonderful, enjoy your meal.
Scenario 2
Waiter: How is the pork chop, can I get you anything to go with it?
Custamah: Oh, Just a small bowl of applesauce, please?
Waiter: Sure, I'll bring that right out.

The waiter in the above two scenarios doesn't know exactly what the customer will want, but that waiter is pretty sure the customer will ask for Nothing or for Just something, and these options describe the Maybe monad type.

Another example of this kind of monad is the list data type. But whereas the Maybe monad allows two options (the answer or failure), the list data type (a monad) allows multiple answers (including no answers, which is represented by the empty list). These kinds of monads form a protocol called the MonadPlus class, just as the more general monad data types form the more general protocol of the Monad class, and, just like regular monads, MonadPlus types conform to a set of laws.

First, let us specify and explain what the MonadPlus protocol is. All MonadPlus types must have the following two properties defined:

mzero :: m a -- the base value, usually interpreted as failure; and,
mplus :: m a → m a → m a -- a function that chooses a success value when offered two values

For the Maybe MonadPlus type the above properties are defined as follows:

mzero = Nothing
Nothing `mplus` b = b
a `mplus` b = a

In other words, Nothing is the failure case, and mplus tries to choose a non-Nothing value -- roughly: "If a is Nothing, pick b; otherwise pick a." (Here's a question for you: what happens when both a and b are Nothing, and for what reason?) Note the interesting semantics of mplus -- it is not at all addition, as one might expect, for:

Just 3 `mplus` Just 4 = Just 3
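These choices can be tried directly against the standard library's own MonadPlus instance for Maybe (the pickN names below are mine, for illustration only):

```haskell
import Control.Monad (mplus)

-- first success wins
pick1 = Just 3 `mplus` Just 4            -- Just 3
pick2 = Nothing `mplus` Just 4           -- Just 4
-- and when both sides fail:
pick3 = Nothing `mplus` (Nothing :: Maybe Int)   -- Nothing
```

Loading this in GHCi and evaluating pick3 answers the question posed above: with no success on either side, all mplus can offer is the failure value, mzero.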

Recall that if we wish to do monadic addition, we need to define such an operator.

madd :: (Monad m, Num a) => m a → m a → m a
madd = liftM2 (+)
Just 3 `madd` Just 4 = Just 7

So, now madd has the triple meaning here: it is not mplus (which is not addition), it is addition for monads containing numbers, and it either heightens awareness or annoys the cause of "MADD". Got all that?

The Maybe type has a special handler, called maybe. Its type signature is:

maybe :: b → (a → b) → Maybe a → b

What does this function do? Well, we've already seen it in action with the monadic Ackermann and Fibonacci solutions. One can read the arguments from right to left, to get the feel of an if-then-else: if the last argument is Just a, then pass a to the second argument (which is a function that converts an a to the proper return type); else execute the first argument. A very compact and useful function when working with Maybe types.
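A quick illustration of that if-then-else reading (the numbers are arbitrary):

```haskell
-- default value 0 when the Maybe is Nothing; apply the function when it is Just
ex1 = maybe 0 (+ 1) (Just 5)   -- 6
ex2 = maybe 0 (+ 1) Nothing    -- 0
```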

The second data type commonly used for non-deterministic computation is the list MonadPlus data type. It has an interesting variation from the Maybe definition:

mzero = []
mplus = (++)

In other words, the empty list ([]) is the base (failure) case, and mplus here actually is addition ('concatenation', to be technically correct); addition, that is, in the list-sense. But it all works out, particularly when it comes to the base cases, for:

[3] `mplus` [] = [3]
Just 3 `mplus` Nothing = Just 3

But, on the other hand, mplus is different when handling non-base cases for the Maybe and list monad types, for:

[3] `mplus` [4] = [3, 4]
Just 3 `mplus` Just 4 = Just 3

But this difference is consistent with the different types: the list monad allows for multiple solutions, whereas the Maybe monad allows only one.
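To see the "multiple solutions" behavior concretely, here is a small made-up example of nondeterministic choice in the list monad, where a failed branch produces the empty list (mzero) and so contributes nothing:

```haskell
-- all pairs of small numbers summing to 5; each (<-) is a nondeterministic choice
pairs :: [(Int, Int)]
pairs = do x <- [0 .. 5]
           y <- [0 .. 5]
           if x + y == 5 then return (x, y) else []
```

Evaluating pairs yields [(0,5),(1,4),(2,3),(3,2),(4,1),(5,0)]: every path that survives the test is an answer.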

The list data type has too many special functions associated with it to review in this post. I recommend a review of the Haskell online report to get a taste of list's rich functionality, and then read Eric Kidd's post on backtracking with monads for some insights into using list monads in nondeterministic programming.

The third data type that is used, albeit less frequently, for non-deterministic computation is the Either data type. Its structure is as follows:

data Either a b = Left a | Right b

The way Either operates is that it offers a mutually-exclusive choice. For example, little Isabel sits to my Left and her até Elena Marie sits to my Right, so at 4 p.m. I must choose Either one to serve tea first: Left Isabel or Right ElenaMarie.

The interesting distinction of the Either monad from MonadPlus types such as the list data type and the Maybe monad is that both options are weighed equally, or, more to the point, neither is considered to be the base case. This means that Either, qua Either, is not in the MonadPlus class. With this caveat, can the Either type be used for non-deterministic computation? Yes, absolutely!

Not only can the Either type be used in its basic monadic form, but it also can be coerced into the MonadPlus class. How? It's simple, really. By simply choosing one of the branches to be the base (the Haskell library designers chose Left), the Either type now conforms to that protocol. The convention assigns the error message (a String) to the Left and the value sought is assigned to the Right one. This rather reduces Either to a glorified, error-handling, Maybe, and that is how it is used in every-day Haskell code for the most part.
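A sketch of that error-on-the-Left convention (safeDiv is a made-up example function, not something from the libraries):

```haskell
-- Left carries the error message; Right carries the value sought
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)
```

So safeDiv 10 2 gives Right 5, and safeDiv 1 0 gives Left "division by zero" -- a Maybe whose failure case explains itself.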

The Either monad also has a special handler, either, with the type signature of:

either :: (a → c) → (b → c) → Either a b → c

This function is in the same vein as the Maybe handler, but complicated by the fact that maybe has only one (success) type to handle, whereas this function has two possible types it deals with -- either's type translates as: if the answer from the third argument (Either a b) is Left a, then feed a to the first argument (a function that converts the input value of type a to the output of type c), but if the answer from the third argument is of type Right b, then feed b to the second argument (a function that converts the input value of type b to the output of type c).
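For instance, continuing the error-on-the-Left convention (report is a made-up name for this sketch):

```haskell
-- either collapses both branches to one common result type
report :: Either String Int -> String
report = either ("error: " ++) (\n -> "answer: " ++ show n)
```

Here report (Left "no parse") yields "error: no parse", and report (Right 42) yields "answer: 42".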

What we've seen in this entry is an introduction to the MonadPlus class and three monads that allow for choice (Maybe, the list data type, and Either), along with an example of each demonstrating how to code with choice.

The next entry will further explore the MonadPlus class and some of its powerful functions, such as msum and guard, and how the MonadPlus class allows us to code in a declarative nondeterministic style.

Monday, April 28, 2008

No "fib"bing; getting into the monad groove

The title is groan-inducing, as all my "jokes" are. I guarantee it: once you've finished reading this entry, reread the title, and you won't be able to stop from groaning.

The Fibonacci series goes as follows: 0, 1, 1, 2, 3, 5, 8, 13, ..., or put another way, the current number is obtained by adding the previous two numbers. It is useful for many things, as the ratio of its consecutive terms tends to the golden ratio, found in nature (the spiral of some seashells and the population growth of rabbits [they must be stopped!]) and in artifice (windows, paintings, doors, and buildings follow the width and height of this ratio).

Any fibonacci number can be easily computed from the following formula ...

fib :: Integer → Integer
fib n | n ≡ 0 = 0
      | n ≡ 1 = 1
      | otherwise = fib (n-1) + fib (n-2)

... that is, easily computed if this world were purely mathematical, including the caveat that any computable function could be instantly computed. Run on a computer, fib 25 slows noticeably and fib 50 may as well describe the halting problem, because I wasn't going to wait around for it to terminate.

Note the similarity between the computation of the Fibonacci series and the computation of the Ackermann table. They are not the same kind of problem, mind you, as the Ackermann is not primitively recursive; the Fibonacci is "only" doubly (branching) recursive. But they are similar enough in that they can be solved in similar ways. Given that the current Fibonacci number is the sum of the previous two Fibonacci numbers, we need only a (reversed) list to memoize the previous results, so the above sequence becomes:

[..., 13, 8, 5, 3, 2, 1, 1, 0]

and the "next" Fibonacci number would be simply the first two elements of this list (21, in this case). But how do we know where we are in the sequence? Easy: the length of this list tells us where we are, and in this case, the list has 8 elements, meaning the "next" Fib is 9th in the sequence.
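As a tiny sketch of that observation (next is a hypothetical helper, not part of the final code below):

```haskell
-- the "next" Fibonacci is the sum of the two at the head of the reversed memo list
next :: [Integer] -> Integer
next (a:b:_) = a + b
next _       = error "need at least two memoized values"
```

So next [13, 8, 5, 3, 2, 1, 1, 0] gives 21, as described.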

So, turning to monads with this list structure for memoization, the code falls out as follows:

fib :: Int → FibS
fib n = get >>= λmem . maybe (fib' n >>= update)
                             return
                             (gimme mem n)

So, fib is now simply a decision: "Do I have the fib requested in the list?" "Yes: gimme it and return it" or "No: compute it and then update the list"

The list is a very slight variation on a regular list type, as we choose to carry around its length (as opposed to recomputing it at each iteration), and we lift this new datatype into the State monad (as we did with the Map datatype for our Ackermann monad optimization):

data Mem = Mem [Integer] Int
type FibS = State Mem Integer

The actual computation function is lifted into the monadic form with very little variation ...

fib' :: Int → FibS
fib' n | n ≡ 0 = return 0
       | n ≡ 1 = return 1
       | otherwise = liftM2 (+) (fib (n - 1)) (fib (n - 2))

... where monadic addition, liftM2 (+), replaces addition in the usual sense (recall the spot of bother we had "adding" Thing One to Thing Two), and where the plain numbers are lifted into the monad with return. In brief, the substance is now monadic but the structure is the same as our original, plain, fib.

The other housekeeping functions are new for the monadic solution, but what one would expect. The update follows in the same vein as the one for the ackermann monad:

update :: Integer → FibS
update n = do (Mem lst len) ← get
              put (Mem (n:lst) (len + 1))
              return n

An interpretation of the update function is that it is the identity function with the side-effect that it remembers the result that it returns.

The only other function is the self-describing gimme which retrieves a previously-computed fibonacci number from memory:

gimme :: Mem → Int → Maybe Integer
gimme (Mem (h:t) len) n | len ≡ n = Just h
                        | len > n = let x = (len - 1) - n
                                    in Just (t !! x)
                        | otherwise = Nothing

This gimme function uses the Maybe monad and its functionality, saying "If I have the value already computed [If the list length is equal to or greater than the requested index], then return Just that value; otherwise I've got Nothing, so you need to compute that value."
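Assembled into one loadable module, the pieces above look like the following sketch (in plain ASCII rather than the typeset symbols; the seed state in runFib is my assumption, since the post doesn't show how the memory is primed, and here gimme counts positions from 1 as described above, so runFib 1 = 0, runFib 2 = 1, and the function is only defined for n >= 1):

```haskell
import Control.Monad (liftM2)
import Control.Monad.State

-- the memo list plus its cached length
data Mem = Mem [Integer] Int
type FibS = State Mem Integer

fib :: Int -> FibS
fib n = get >>= \mem -> maybe (fib' n >>= update) return (gimme mem n)

fib' :: Int -> FibS
fib' n | n == 0    = return 0
       | n == 1    = return 1
       | otherwise = liftM2 (+) (fib (n - 1)) (fib (n - 2))

update :: Integer -> FibS
update n = do Mem lst len <- get
              put (Mem (n : lst) (len + 1))
              return n

gimme :: Mem -> Int -> Maybe Integer
gimme (Mem (h:t) len) n | len == n  = Just h
                        | len > n   = Just (t !! ((len - 1) - n))
                        | otherwise = Nothing
gimme _ _ = Nothing   -- empty memo: nothing cached yet

-- assumed driver: seed with the first two values (reversed) and run the State
runFib :: Int -> Integer
runFib n = evalState (fib n) (Mem [1, 0] 2)
```

With this seed, runFib 8 yields 13 (the 8th value in the post's 1-based counting), and the memo list grows once per new index, never recomputing a branch.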

In summary, we've decorated the naïve fibonacci algorithm with some memory and three new functions (one manager and two support functions). What is the payoff?

Ready> fib 1500
[no delay]

Ready> fib 50000
[4 second delay]
[10615 digits]

Ready> fib 60000
[2.5 second delay]
*** Exception: stack overflow

These results are a great improvement over the naïve fib implementation (which essentially froze at fib 50) and even over a memoized implementation reported elsewhere (which ran out of memory after fib 1450).

Huzzah! then for efficient program representation in Haskell and monadic code.

Sunday, April 27, 2008

"On an ordinary day ..."

"...the extraordinary way you take what I can give and you treasure it."

Yesterday was my 41st birthday, so I'm now in my 42nd year. My sister, Beki, asked me, "So how does it feel to be 41?" and I answered that it feels like any other age -- an ordinary day. "Good," she replied. She's so grounded.

So, my cara spoza treated me to two outings: Li'l Iz and I had Queen Elizabeth tea at the Pinkadilly, and then Elena Marie and I spent the evening watching the Nationals get shut out by the Chicago Cubs (no hard feelings: the Cubs have a birthday boy, too -- actually, I blanched when reading the birth years of the players. Fukudome is a grizzled ancient, being born in 1977. I saw player cards of kids born in 1985. How they can hit a fast ball (and hit it well!) when they aren't even out of diapers yet is a mystery to me.)

I also received a nice birthday present from the UNSC, and finished up the day completing my first hard masyu puzzle.

A good day. Now, back to taxes.

Wednesday, April 23, 2008

Oh! my acking monad!

So, after reams of paper proving the three monadic laws in inscrutable detail, we just saw that using them is pretty easy. Fancy that, Hedda!

Haskell does provide several different kinds of monads for various uses, but sometimes one must roll one's own to fit the current need. I was exploring the Ackermann function, curious to know what A(4, 2) looked like [oh, yes, 19730 digits in all its (gory) glory -- I went there!] (other than the cop-out of 2^65536 - 3 that is listed in Wikipedia), and whether my computer could calculate and display that value [obviously it could, but it took some work (see below) to arrive at that point]. The Ackermann function written in Haskell looks very much like its representation in standard mathematical notation:

a :: Integer → Integer → Integer
a m n | m ≡ 0 = n + 1
      | m > 0 ∧ n ≡ 0 = a (m - 1) 1
      | m > 0 ∧ n > 0 = a (m - 1) (a m (n - 1))

The "problem" with the Ackermann function is that even though it is easy to write, the recursion required to arrive at a solution from even small (single-digit) inputs is staggering. My computer staggered and then killed the computation in mid-process (too bad Redjak didn't have that option). The Ackermann function, being not primitively recursive, is also highly redundantly recursive, so it would be nice to provide "hints" to the program, something like: "Oh, you've already computed A(3, 1) to be 13, so you don't need to recompute that entire branch any more."

This "hinting" has a name in computer science; it's called "memoization", and Haskell provides the State monad that can be used to cover that functionality quite nicely. As we're computing the values that build up the solution, we put the smaller-indexed solutions into a dictionary, something like:

(0, 1) → 2
(1, 0) → 2
(1, 1) → 3

... and so forth ...

Such a dictionary in Haskell is called a Map, because we are mapping from the "word" (the indices of the Ackermann function) to the "definition" (the solution)...

type Index = (Integer, Integer)
type AckMap = Map Index Integer

... and we wrap that Map with the State monad ...

type AckMapS = State AckMap Integer

... where AckMapS uses the AckMap dictionary to provide the preexisting (partial) solution, or, given there's none yet, populates the solution calculated (the Integer) into the dictionary at the current Index. Simple, yes? So, all we need is a utility function that does the lookup or updates the state ...

ackify :: Integer → Integer → AckMapS
ackify m n = get >>= λma . maybe (a' m n >>= update m n)
                                 return
                                 (Map.lookup (m, n) ma)

... the update function that actually does the housekeeping ...

update :: Integer → Integer → Integer → AckMapS
update m n ans = do mappo ← get
                    put (Map.insert (m, n) ans mappo)
                    return ans

... so that our monadic version of the ackermann function is as follows:

a' :: Integer → Integer → AckMapS
a' m n | m ≡ 0 = return (n + 1)
       | m > 0 ∧ n ≡ 0 = ackify (m - 1) 1
       | m > 0 ∧ n > 0 = ackify m (n - 1) >>= ackify (m - 1)

which looks very much the same as the original function, with calls to a being replaced by calls to the lookup-or-update ackify function and function composition (e.g. a (m - 1) (a m (n - 1))) being replaced by the monadic composer, >>=. So, from the above demonstration we see that monads are not only easy to use, but also easy to "roll your own" and integrate into preexisting non-monadic code without much fuss.
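Gathering the fragments above into one loadable module gives the following sketch (plain ASCII in place of the typeset symbols; the ack driver and the empty initial dictionary are my assumptions about how the pieces are run, since the post doesn't show them):

```haskell
import Control.Monad.State
import qualified Data.Map as Map

type Index = (Integer, Integer)
type AckMap = Map.Map Index Integer
type AckMapS = State AckMap Integer

-- look up a memoized answer, or compute and record it
ackify :: Integer -> Integer -> AckMapS
ackify m n = get >>= \ma -> maybe (a' m n >>= update m n)
                                  return
                                  (Map.lookup (m, n) ma)

update :: Integer -> Integer -> Integer -> AckMapS
update m n ans = do mappo <- get
                    put (Map.insert (m, n) ans mappo)
                    return ans

a' :: Integer -> Integer -> AckMapS
a' m n | m == 0          = return (n + 1)
       | m > 0 && n == 0 = ackify (m - 1) 1
       | otherwise       = ackify m (n - 1) >>= ackify (m - 1)

-- assumed driver: run the memoized computation from an empty dictionary
ack :: Integer -> Integer -> Integer
ack m n = evalState (ackify m n) Map.empty
```

For instance, ack 2 3 is 9 and ack 3 3 is 61, each branch computed exactly once.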


Although the incorporation of the State monad dramatically speeds processing solutions and makes some solutions computable that were unreachable under the naïve approach, there are at least two better generalizations:

  1. Use the fixpoint (or the Y-combinator) of the ackermann function so that one can now decorate the engine of computation without altering its structure at all! Incredible! So instead of using a monad to abstract stateful decoration of computations, the fixpoint can be used to abstract any decoration of computation!

  2. Use the compiler to change the code, instead of having it programmed in. Many functional language compilers have the option to memoize computations automagically. So, instead of writing code that memoizes partial results, the compiler intercepts the computation, replacing computation with previously calculated values (or running the computation if no such value exists and then storing that result into the runtime). No extra coding; no "troublesome" monads.

Critique on the critique

On the other hand ("there are five fingers", as my favorite Mother-in-law enjoys rejoining),

  1. The fixpoint solution is more general and more elegant, but being more general it suffers a slight decrease in efficiency: it more than likely will run more slowly than the specialized monadic state solution.

  2. With an explicit encoding of state, the programmer has direct say into what is memoized and what is not, but with a compiler directive, everything is memoized. Since the Ackermann function can be represented by simple functions for m < 4, the map can be replaced by a faux-map for those values, reducing the memory footprint of memoization to only a couple of cells for m == 4 (as opposed to over 150,000 cells for m == 4 and n == 1 using the naïve memoization scheme).

... so the above described solution may be "good enough" for the task at hand, after all. Put another way, when I said I was wrong, I might have been wrong. Caveat programmer.

Orators' exercise

I practice this exercise daily. Don't sight read it; read it aloud. The Haskell program that produced this uses a monad, the heart of which is as follows:

-- parses a poem, referring words to CMU for pronunciation

parsePoem = do p ← getContents
               return (checkEachLine (lines p))

main = do p ← parsePoem
          putStr p

Do you see the monad? Well, that's a rather mean question, because the monad is entirely implied in this program fragment.

One of the motivators for incorporating monads into Haskell was dealing with things in the "real world" that fall outside easy descriptions in purely functional (mathematical) domains. One of those things is input and output. Whereas an equation...


1 + 1 = 2

... was true yesterday, is true now, and will be true tomorrow (albeit, it took over 100 pages of introduction to set theory in the Principia Mathematica to be able to back up that truth), things of the "real world", particularly input and output, change, and change irrevocably. For example, if I buy and eat some ice cream (mint chocolate chip, please!), I'm not getting a refund. The state of the world has changed: I'm parted from my money, and that scoop of ice cream is no longer in the shop's inventory (and they have a Form 1120S, Schedule A to prove it). Input/output, as part of the state of the world, is one of the members of the awkward squad. So, to help it to be a used and useful language, the Haskell designers included the IO monad as one of the libraries.

So, if one were to query main and parsePoem, one would find that they are the types IO () and IO [Char], respectively, meaning that main interacts with IO, returning "nothing", and parsePoem interacts with IO, returning a list of characters (the poem that you just finished reading aloud).

In short, the IO monad does all the handshaking (or, more correctly, handwaving) with the state of the world, so the programmer can do something as "simple" as printing "Hello, world!" easily. All this monad asks, as every monad asks, is "Pay no attention to that man behind the curtain."

Monday, April 21, 2008

God in the Church

Last week, Fr. Trong gave a homily from the readings, focusing on the baptizing of the 3,000 in one day and the parable of the sheep gate. Now, I look forward to Fr. Trong's homilies, unlike some people, because of the level of effort he puts into each phrase and the depth of Gospel knowledge he brings. These homilies are studied, deliberate affairs, and I appreciate the care for us that he shows in this labor.

Many moons ago I had heard a homily on the sheep-gate. Do you know what a sheep-gate is? It's the gate that keeps sheep in the pen, and wolves at bay. Well, 2,000 years ago, the shepherds didn't use wrought iron fences and gates, they used stone for the fence and their own bodies as the gate. That way, an adventurous sheep (of which there would be fewer over time) would wake the shepherd, but the only thing going over the walls would be predators. Clever, no?

Fr. Trong related these two stories as to our rôles today. The parishioners are the sheep and the priests of the parish, the shepherds. The sheep know the shepherds just as the parishioners and priests of the parish know each other and look out for each other, just as the early disciples in St. Peter's time. Anything or anyone else is a wolf, even if it's wearing sheep's clothing. So, Fr. Trong concluded, the laity should stay in their parish.

I can see Fr. Trong's point, and agree with 99% of it. Most parishioners should stay in their parishes and learn from their parish priests, because this arrangement is edifying for most parishioners.

But let's look at the history of the Church. Does growth toward God come from within the bureaucracy and the magisterium? Or does it come from without? This is an important question, especially in light of the visit of the most visible symbol of authority, Pope Benedict, to this country. Well, let's examine how the Church has grown throughout history.

  1. Abram, an idol-maker's son, intuited the presence of God from outside the prevalent pantheistic cults.

  2. Jesus, the most perfect Jew ("Not a jot or tittle" [Matt 5:18]), supplanted the prevailing Rabbinic monotheistic rule of the Pharisees and Sadducees with the worship of a Triune God.

  3. St. Peter and St. Paul (né Saul), interpreting Jesus' words and acts, expanded God's people from the Jews to include the Gentiles

  4. St. Augustine, Doctor of the Church, notorious for his licentious youth, did some studying outside of the Torah: he was heavily influenced by Platonic works, and used that influence to make Christianity accessible to the Western/Latin world.

  5. St. Thomas Aquinas, Doctor of the Church, the Arc of Salvation, used the philosophical methods of the heathen, Aristotle, to explain and to prove rigorously the tenets of Faith

  6. St. Jeanne d'Arc was burned at the stake by the Church authority for following the voice of God

  7. St. Thérèse de Lisieux had to petition the Pope, against the vehement objections of her parish priest and bishop, to enter religious life.

So, with the exception of 3. above, if we were always to listen to the local church authority, we would still be:

  1. idol-worshipping pantheists

  2. Jews awaiting the Messiah

  3. n/a; church authority got this one, as it got many others, right

  4. idol-worshipping pantheistic slaves of Roman rule

  5. dumb servants of faith, disallowed to use our God-given gift of reason

  6. less French??? Okay, I don't have a good counter-example here, but I'm sure something along the lines of nationalism, patriotism and self-governance can be thought up here by someone quicker than myself

  7. unable to see God in the "Little way"

Again, let me stress that following the teachings of the Church and Church authority in their parishes is what makes Catholics Catholic and is one of the pillars of the Church. But the Church body is composed of fallible human beings organized by bureaucracy, and, as such, may occasionally fall into error, may entrench into established ways for dogmatic, not salvific, reasons, and may react to perceived outside threats with a hand that is sometimes too heavy (just ask Galileo about that). When God knocked Saul off his horse, no concerns of the local church were going to stop his new evangelization. God calls each of us, and that calling, for most of us, is to follow our vocation in our homes and in the pews, edified by the local parish priest; but when God calls in a particular way, it is for the overall good of the Church, and neither objections nor even martyrdom at the hands of the people in that Church can stop God from carrying out His Will to bring the People of God to Himself.

Sunday, April 6, 2008

Trivial Monad solutions (cont.)

This is a continuation of the previous entry on the subject, with the solutions for exercises 3 and 4. But we are now finally and formally introduced to the Haskell-style bind operator: >>=. What >>= (pronounced "bind") does is give the value the monad contains to the function that then uses it. More formally defined (specific to sigfpe's trivial monad example):

W a >>= f == f a

An example of the syntax (again, following sigfpe's post) is as follows:

W 5 >>= f

(where f was defined in sigfpe's post as an increment function for his trivial monad: f x = W (x + 1)). The above example would result in the (monadic) value W 6. So, now you see >>= (trivially) in action, and sigfpe asks us to put it into use for the following exercises.
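For readers following along in GHCi, here is one way to flesh W out as a compilable monad today (the Functor and Applicative instances are a modern requirement that sigfpe's post predates, so they are my additions):

```haskell
-- sigfpe's trivial monad, made compilable in modern Haskell
data W a = W a deriving (Show, Eq)

instance Functor W where
  fmap g (W a) = W (g a)

instance Applicative W where
  pure = W
  W g <*> W a = W (g a)

instance Monad W where
  W a >>= g = g a    -- the defining equation: W a >>= f == f a

f :: Int -> W Int
f x = W (x + 1)
```

With this in scope, W 5 >>= f evaluates to W 6, as described.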

The first exercise that we'll continue on in this entry is exercise 3, which is to prove the three monad laws:

Left Identity: return a >>= f == f a

or, a monadized value bound to a function is the same as applying a function to the plain value.

Right Identity: m >>= return == m

or, a monad bound to return is just that monad, again (or, the identity function for monads is bind return), and finally:

Associativity: (m >>= f) >>= g == m >>= (λ x . f x >>= g)

which is similar in spirit to associativity for addition, i.e.: (a + b) + c == a + (b + c).
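Before proving the laws, we can spot-check them in a concrete, standard monad such as Maybe (this is a sanity check, not a proof; the sample functions f' and g' are my own):

```haskell
-- concrete instances of the three monad laws, checked in Maybe
f', g' :: Int -> Maybe Int
f' x = Just (x + 1)
g' x = Just (x * 2)

leftId, rightId, assoc :: Bool
leftId  = (return 3 >>= f') == f' 3
rightId = (Just 3 >>= return) == Just 3
assoc   = ((Just 3 >>= f') >>= g') == (Just 3 >>= \x -> f' x >>= g')
```

All three evaluate to True, as the laws demand.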

Just a side note here, some people become discouraged when they are told "to prove" something, whereas if they are asked "to solve" a problem, they turn right to it. Fogel and Michalewicz rightly point out that proving and solving are basically the same process!

My cara spoza asked me, as I was attempting to (re-)prove liftM2 (+) for exercise 2, "Is proving difficult?" The question gave me pause: "Well, sometimes the simplest proof can elude me for days, and sometimes something that appears much more difficult comes in a flash."

So, let's prove each of these laws in turn, using the definitions of >>= and return as axioms.

First up, Left Identity:

Prove: return a >>= f == f a
1. return a >>= f == W a >>= f    [return definition]
2.                == f a          [>>= definition]

Next, Right Identity:

Prove: m >>= return == m
1. m >>= return == W a >>= return    [monad identity]
2.              == return a          [>>= definition]
3.              == W a               [return definition]
4.              == m                 [monad identity]

And, finally, let's prove associativity. We'll start out by reducing the left hand side ("lhs") to something simpler:

Simplify: (m >>= f) >>= g
1. (m >>= f) >>= g == (W a >>= f) >>= g    [monad identity]
2.                 == f a >>= g            [>>= definition]

... but it was here that I became stymied, for I didn't see how to transform f a >>= g into something resembling the right hand side (rhs), which is m >>= (λ x . f x >>= g). Do you? Give it a try!


The thing to do, when one becomes stuck solving or proving something, is to acknowledge that fact, and then to try something entirely different. One way to try something different is to rework the simplification of the lhs so it assumes a form much closer to what the rhs is. I didn't see this approach as fruitful, however: I reduced the lhs to something pretty simple, so a rework would make the lhs more complicated -- clutter often obscures correlation. Another approach is to leave the lhs be and work on simplifying the rhs -- maybe a simplification there would yield a useful correspondence. Let's do that:

Simplify: m >>= (λ x . f x >>= g)
1. m >>= (λ x . f x >>= g) == W a >>= (λ x . f x >>= g)    [monad identity]
2.                         == (λ x . f x >>= g) a          [>>= definition]
3.                         == f a >>= g                    [β-reduction]

Ha! Now we see that the simplification of the lhs is identical to the simplification of the rhs. That was easier than expected.


To summarize this exercise, proving something or solving something is really a rather simple task: using the tools you have to match the solution to the problem. The fun part comes in when you discover new ways to use the given tools, or find out the given tools aren't sufficient, which means finding new tools to solve the problem. Once you have your tools, along with the problem statement and desired outcome, it is often the case that there is a simple walk from the problem to the solution, from the solution to the problem, or both meet somewhere in the middle. Odds are you are not the first one solving a problem; if you need help, there's a whole big world out there -- someone probably has already left the solution out there for you to find.

Exercise 4: Monads cannot be unwrapped completely back to the plain value they carry, but if a monad is layered, that is, a monad contains a monad, it is possible to remove one or more layers. join is such a function that does this "unlayering". Given a layered monad mm = W (W x), join mm will return the value W x. The declaration for join is:

join :: W (W a) ⇒ W a

Define it.

Solution 4: Well, the declaration for join looks suspiciously like an application of >>= (you are pronouncing that operator "bind", right?), as it hands the value carried by the monad to a function for application.

join mm = mm >>= ?f?

The question then becomes "what function to apply the carried value to?" The problem is that >>= has already given us the answer, W x, so we just wish to have join return that value unaltered. Fortunately for us, there is such a function from combinatory logic called the identity combinator, I, which has the form λ x . x, and Haskell already has that definition (it is named id), so we simply use that for the hole in the above definition:

join mm = mm >>= id
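The same one-liner works against the standard library's monads, where Control.Monad.join is defined in exactly this way for every monad:

```haskell
import Control.Monad (join)

-- join peels exactly one monadic layer
flat1 = join (Just (Just 3))   -- Just 3
flat2 = join [[1, 2], [3]]     -- [1, 2, 3]
```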


In conclusion, I hope these entries have helped you to see that monads are actually a rather simple concept and easy to use. This is just the tip of the iceberg, however: monads are used pervasively in Haskell, tackling a great many tasks. May I suggest the excellent and practical tutorial on Monad Transformers? Please do not be put off by the imposing front page of the paper, as this tutorial shows how monads and their transformers shine, giving a step-by-step introduction of new functionality of monads into a plain-vanilla system.

Saturday, April 5, 2008

Trivial Monad solutions

Okay, a monad is a mathematical form, which sigfpe describes in his blog. He then proposes a series of exercises [reminiscent of my own exercises in combinatory logic], and even provides solutions. I came up with quite a different solution set, as I used his entry, and his entry only, as the source material.

A recap of monad: a monad is an opaque decorator. It takes a thing (including taking nothing) and hides that thing behind the monadic form. In sigfpe's example, the monad, W, takes a number (or anything else, for that matter), consuming it.

A monad, then, is like a katamari or a black hole, ... or Kuzko: no touchy!

So, then, the question naturally arises: how does one interact with the captured data? Well, the monad interface provides two basic operations in its protocol: return and bind (sometimes written >>=).

  • return "lifts" an object into the monadic form. In propositional logic, return is a ⇒ M a -- 'return 5' would yield 'W 5' in sigfpe's example.

  • bind takes a function that transforms an object into a monad and converts it into a function that transforms the value within a monad, yielding another monad. In propositional logic, bind is (a ⇒ M b) ⇒ (M a ⇒ M b)

So, bind is used frequently when working with data and their monadic forms, and from it other functions may be derived, such as fmap, which maps (converts) a function that works on "ordinary" data to one that works on monads ... fmap is (a ⇒ b) ⇒ (M a ⇒ M b). Note the closeness to bind: the derivation of fmap is λ f . bind (return • f) (where • is composition (the B combinator, or Bluebird, from combinatory logic), and λ introduces a variable in the expression).
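To make this concrete, here is a minimal model of such a monad in Haskell (the newtype definition, and the primed names return' and fmap' chosen to avoid clashing with the Prelude, are my assumptions; the entry treats W as opaque):

```haskell
-- A minimal model of sigfpe's W monad; the concrete definitions are
-- assumptions made so the sketch is runnable.
newtype W a = W a deriving (Show, Eq)

-- return lifts an object into the monadic form:  a ⇒ W a
return' :: a -> W a
return' = W

-- bind converts a lifting function (a ⇒ W b) into a function that
-- works entirely on wrapped values (W a ⇒ W b).
bind :: (a -> W b) -> W a -> W b
bind f (W x) = f x

-- fmap, derived exactly as in the text: λ f . bind (return • f)
fmap' :: (a -> b) -> W a -> W b
fmap' f = bind (return' . f)

main :: IO ()
main = print (fmap' (+ 1) (W 41))  -- W 42
```

The derivation reads naturally: return • f first transforms the plain value, then lifts the result, and bind promotes that combined function to one operating on the monad.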

The above is enough introductory material to solve sigfpe's first two exercises, which I reproduce here:

Exercise 1: Write a function, g, that adds a number to a monad containing a number, returning a new monad with the sum, or g :: a ⇒ M a ⇒ M a.

Solution 1: well, given that addition is of the form (+) :: a ⇒ a ⇒ a, and that fmap lifts an ordinary function to a monadic one, the solution nearly writes itself:

g x my = fmap (+ x) my

What is simply done here is to construct a "plus x" function by applying x to the addition operator -- note that (+ x) :: a ⇒ a. We then lift that function to its monadic equivalent with fmap and apply the result to the number contained in the monad (my). Q.E.D.
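A self-contained, runnable version of this solution might look like the following sketch (the concrete W newtype and the primed name fmap' are my assumptions, repeated here so the snippet stands alone):

```haskell
-- Exercise 1, made runnable; W and fmap' are assumed definitions
-- modeling the opaque monad of sigfpe's entry.
newtype W a = W a deriving (Show, Eq)

bind :: (a -> W b) -> W a -> W b
bind f (W x) = f x

fmap' :: (a -> b) -> W a -> W b
fmap' f = bind (W . f)

-- g adds a plain number to a monad-wrapped number.
g :: Num a => a -> W a -> W a
g x my = fmap' (+ x) my

main :: IO ()
main = print (g 1 (W 2))  -- W 3
```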

Exercise 2: Write a function, h, that adds monadic numbers, returning a new monad containing the sum, or h :: M a ⇒ M a ⇒ M a

Solution 2: This provided a spot of bother for me. Whereas in the last solution, one could take the first number and apply it to addition to come up with a function that fit the form of fmap, in this case, we don't have any numbers with which to simplify addition to a form we can use, as they are all rolled up into monadic forms.

I described the problem thusly to my daughter, EM.

Me: You know addition, right?
EM: What's that, Papa?
Me: What's one plus two?
EM, immediately: Three! ... and ten plus ten is twenty! ... and twenty plus twenty is thirty! ... and thirty plus thirty is forty!
Me, clearing my throat, letting the errors pass in light of the delighted exuberance with which she delivered her answers: Um, yes, well. Here's my problem, let's say Thing One has a number hiding behind its back, that it won't show you, and Thing Two has a number hiding behind its back, too, that it won't show you, and you're asked to add those two numbers and give the answer to Thing Three. How do I do that?
EM, rolling her eyes: Papa, there isn't a Thing Three in the Cat in the Hat!
Me: Yes, well, suppose there was; how would I do this?
EM: Can you peek behind their backs to see what the numbers are?
Me: No, you're not allowed; besides, those Things are much too fast to run behind them, remember how they ran through the house with kites? ... So, we must use something like the fmap-box-maker, because addition is a box that takes numbers and yields numbers, but we need a box that takes Things and yields Things.

Using fmap does work its way into the solution I eventually derived, but another important player for this solution is also bind used in conjunction with a technique of deferred computation.

You see, just as in the solution for exercise 1, we'd like to apply a number to addition to obtain a function for use by fmap. The problem is that we have no such number (just monadic values), but this is where bind is of use. Recall that bind :: (a ⇒ M b) ⇒ (M a ⇒ M b). The standard explanation of bind is that it transforms a function that lifts ordinary data into monads into a function that works entirely in the monadic domain. But an entirely different way of looking at bind is that it lets us treat a given monad like ordinary data, as input to the lifting function. Eureka! [No, I'm not about to hop out of the tub and run around town with my towel flapping in the wind]

The latter usage is the only one for those using >>= (which, confusingly, also has the English-language pronunciation "bind"), but if we take sigfpe's entry as the sole and base source, this "discovery" is quite the epiphany. And with that understanding, along with some deferred computational hand-waving, the solution, again, presents itself:

h w1 w2 = bind (λ x . fmap (+ x) w2) w1

Working from the parenthetical expression outward, we again see the fmap (+ x) form, unaltered from the solution to exercise 1, giving us a number-to-monad function. The catch here is that we have no such number, so we give a promise that there will be such a number eventually with the λ-introduction. We then immediately fulfill that promise with bind: we apply the number-to-monad function to w1, treating that monadic value as if it were a plain number (that's what bind gives us permission to do), rendering the solution. Q.E.D.
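The solution above can likewise be made runnable; this sketch again assumes a concrete W newtype and primed helper names standing in for the opaque monad of the original entry:

```haskell
-- Exercise 2, made runnable; W, bind, and fmap' are assumed
-- definitions modeling sigfpe's opaque monad.
newtype W a = W a deriving (Show, Eq)

bind :: (a -> W b) -> W a -> W b
bind f (W x) = f x

fmap' :: (a -> b) -> W a -> W b
fmap' f = bind (W . f)

-- h adds two monad-wrapped numbers: the lambda promises a plain
-- number x, and bind fulfills that promise using w1's carried value.
h :: Num a => W a -> W a -> W a
h w1 w2 = bind (\x -> fmap' (+ x) w2) w1

main :: IO ()
main = print (h (W 1) (W 2))  -- W 3
```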


I obtained solution 1 almost immediately by matching the logic of fmap to that of the problem description. It seemed to me to be rather trivial, so I was taken aback at the lengths and "arcane-ness" that sigfpe put into his solution.

On the other hand, sigfpe merely needed to augment his first solution with variable introduction to obtain his second solution. His second solution was trivially obtained and obvious, given his approach to the first, whereas I had to struggle quite some time -- introducing both λ-abstraction (as sigfpe does) and bind (which sigfpe already had in the alternate syntactic form of >>=) -- to obtain mine.

In my defense, my solutions use only the monadic tools introduced in sigfpe's blog entry. On the other hand, Haskell programmers accustomed to monads rarely use fmap, regarding it as esoteric or superfluous, and instead rely on >>=, or inline the binding entirely with the more "natural" do-notation (although there have been some grumblings against it). In light of that experience, sigfpe's solutions are natural and the first way most Haskell programmers would solve these exercises. This entry, then, is written from the perspective of someone unfamiliar with Haskell's approach to monads, deriving the solutions from the principles of the provided function bind and its derived sibling fmap, as discussed in sigfpe's blog entry.

Please see my next post for the solutions to sigfpe's exercises 3 and 4.

Wednesday, April 2, 2008

God Be With Ye, Joel

My friend, Joel, just died this week, suddenly, leaving my friend, Sandy, a widow. The Apostle Paul writes: "God is no respecter of persons." [Rom. 2:11], but I certainly am, and there are very few people in this world that have earned my respect -- Joel was one of them. He and I could not have been any more different: his introspection foiled my exuberance, his patience, my wrath, his acceptance, my judgements. I think it boiled down to, essentially, his Irishness to my Italianness. Yet, as Sandy knows best, these two seemingly opposite characters did not clash, but deepened our friendship, and my respect for him over the 18 years that I've known him.

We all know the story that the Irish never say a bad word about the dead: a lady asked to speak on behalf of a man, universally reviled, who just died, struggled for a moment, then burst into a smile as she pronounced: "But he had a brother who could sing." For Joel, I have the opposite problem: I cannot recall one moment where he ever lost his cool or exposed himself to censure. For Joel, everything was alright or would work itself out, and he was there, doing his part to make that so. He was always warm in his greeting, in his inquiry, and took your causes, making them his own.

For example, it was Joel who got me on my feet after I left the military with my first job working at his company. He tried me in one role, working for him in marketing, but seeing I was more fit for computers and software, quickly shifted gears and championed me to the IT managers. He did this over and over again for so many people he came across -- he truly liked to help people, he didn't particularly care that this or that person did or didn't fit Joel's assigned position, he just helped to find the best fit for the person he was helping, regardless of his own benefit. I have rarely seen this egolessness in service: when I saw Joel doing this I saw it come from a rock-solid foundation of faith. He knew he was doing his part to help things along, so he had not one concern about the benefits to himself.

He was introspective and studious by nature -- his most natural pose was one of repose, with a book in his hand. But, at the same time, he enjoyed company. At a business function several years ago, Joel opened his house to me and three other young men for a lunch, prepared by Sandy, that I will never forget. The food, of course, was a very welcome respite from the starched business setting, but what I took away with me was Joel's absolute peace and comfort. It seemed to say: "This is my house, this is my home: I'm comfortable in it, and I'm comfortable sharing it with you." Our dessert was an Irish coffee Joel lovingly hand-measured, the like of which I have not tasted before or since. Joel was a stately craftsman: proud of his work, and pleased to share it.

I shall end with this: his service and conduct were a credit to himself and his family, and we sorely miss him. God bless you, Joel, may you rest in the company of the saints.