Monads are a remarkably powerful tool for building specialized programming languages.

But there's a bunch of things I don't understand about monads. In each case, my confusion involves some aspect of the underlying math that "bubbles up" to affect the design of specialized languages.

A "commutative monad" is any monad where we can replace the expression:

```haskell
do a <- ma
   b <- mb
   f a b
```

...with:

```haskell
do b <- mb
   a <- ma
   f a b
```

...without changing the meaning. Examples of commutative monads include `Reader` and `Rand`. This is an important property, because it might allow us to parallelize the commonly-used `sequence` function across huge numbers of processors:

```haskell
sequence :: (Monad m) => [m a] -> m [a]
```
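To make this concrete, here's a small sketch, assuming the `mtl` and `parallel` packages (`parSequenceReader` is a hypothetical name, not a library function): for `Reader`, swapping the two binds is unobservable, and that same freedom lets us evaluate every action of a `sequence` in parallel.

```haskell
import Control.Monad.Reader
import Control.Parallel.Strategies (parList, rseq, using)

getA, getB :: Reader Int Int
getA = asks (+ 1)
getB = asks (* 2)

-- Binding in either order gives the same answers, because each action
-- only reads the shared environment.
ab, ba :: Reader Int (Int, Int)
ab = do a <- getA
        b <- getB
        return (a, b)
ba = do b <- getB
        a <- getA
        return (a, b)

-- For Reader specifically, that freedom means all the actions in a
-- `sequence` can be evaluated in parallel: each one just consults the
-- same environment.
parSequenceReader :: [Reader r a] -> Reader r [a]
parSequenceReader ms =
  reader (\r -> map (`runReader` r) ms `using` parList rseq)

main :: IO ()
main = do
  print (runReader ab 10 == runReader ba 10)            -- True
  print (runReader (parSequenceReader [getA, getB]) 10) -- [11,20]
```

This only works because `Reader` is commutative; the open problem is finding a theory that lets us do this generically rather than one monad at a time.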

Simon Peyton Jones lists this problem as Open Challenge #2, saying:

> Commutative monads are very common. (Environment, unique supply, random number generation.) For these, monads over-sequentialise.
>
> Wanted: theory and notation for some cool compromise.

Many complicated monads break down into a handful of monad transformers, often in surprising ways.
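For instance (a standard decomposition, not an example from the text above), a simple backtracking parser monad is just `StateT` layered over `Maybe`, assuming the `mtl` package:

```haskell
import Control.Monad.State

-- A minimal parser: thread the remaining input with StateT,
-- and signal failure via the underlying Maybe.
type Parser = StateT String Maybe

-- Consume one character, failing on empty input.
item :: Parser Char
item = do
  s <- get
  case s of
    []       -> mzero
    (c : cs) -> put cs >> return c

main :: IO ()
main = print (runStateT item "hi")  -- Just ('h',"i")
```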

But composing monad transformers is a mess, because they interact in poorly-understood ways. In general, the following two types have very different semantics:

```haskell
FooT (BarT m)
BarT (FooT m)
```

If `FooT` and `BarT` commuted with each other, however, the two types would be equivalent. This would be helpful when building large stacks of monad transformers.
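The classic concrete instance of this non-commutativity (assuming the `mtl` and `transformers` packages) is `StateT` versus `MaybeT`: the two orderings disagree about whether state survives a failure.

```haskell
import Control.Monad.State
import Control.Monad.Trans.Maybe (MaybeT, runMaybeT)

-- Increment the state, then fail.
step :: (MonadState Int m, MonadPlus m) => m ()
step = modify (+ 1) >> mzero

-- StateT Int Maybe: failure discards the state update entirely.
stateOuter :: Maybe ((), Int)
stateOuter = runStateT step 0            -- Nothing

-- MaybeT (State Int): the state update survives the failure.
maybeOuter :: (Maybe (), Int)
maybeOuter = runState (runMaybeT step) 0 -- (Nothing,1)

main :: IO ()
main = print (stateOuter, maybeOuter)
```

Same two transformers, opposite answers to "does a failed computation roll back its state?", which is exactly why the order of the stack matters.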

Chung-chieh Shan encountered a related problem when applying monad morphisms to build a theory of natural language semantics:

> It remains to be seen whether monads would provide the appropriate conceptual encapsulation for a semantic theory with broader coverage. In particular, for both natural and programming language semantics, combining monads---or perhaps monad-like objects---remains an open issue that promises additional insight.