What is a monad? And who needs Haskell anyway?

Ah, OK, thanks. I had never written a "where" clause until yesterday or had any idea why I might need one.

OMG. It really is like my comparison to structured programming. I thought I was, at least partially, joking. Having put all the restrictions in place in your new language, you end up having to find a way to make "goto"!

Ah, sorry- I wasn't really trying to define "monad" there, but describe what monad-using code looks like. It seems the comparison to Future combinators and Node.js didn't really land at first, either, but you figured that out.

That callback-based style is an important piece of how monads are used, but that's not the whole picture. Monads only come into this when you use that style as the sole mechanism for accessing the "contents" of a Result/Option/Future/Iterator, via a Monad trait that all these types implement, with a flat_map (or and_then, or bind, or >>=) method.

(This is where the M: Monad syntax comes in. Rust uses A: B syntax not only to specify a value A of type B, but also a type A that implements trait B. And traits are not types.)

(This is also where the structured programming analogy comes in. The fact that flat_map takes an argument representing "the rest of the function" is what lets you implement goto: just keep that value around until you want to execute a goto, and then call it!)

The point I was trying to make before was that monads aren't the trick that enables this. You can do useful pure functional programming without monads, and you can use monads for a lot more than "making pure functional programming useful."

Instead, the key is to (mentally) replace "mutate the environment" with "take the current state as an argument and return the new state." (Please hold all questions about "efficiency" and "but how do you make a copy of the universe" until the end, I promise I'm not ignoring this!)

For example, you can implement a state machine as a pure function like this:

fn state_machine(current_state: State, next_event: Event) -> State {
    match (current_state, next_event) {
        (state, Event::ButtonPressed) => State { button_pressed: true, ..state },
        // ... handle more events here ...
    }
}

No mutation, no I/O, but you've still fully described the state machine! Then you can imagine a tiny shell, baked into the runtime of your pure functional language, that does nothing but read events and pass them into the state machine function:

fn run(current_state: State) {
    let event = wait_for_event(); // impure
    let next_state = state_machine(current_state, event); // pure!
    run(next_state); // keep going with the new state
}
Okay, back to efficiency. There are a lot of options here, to avoid problems like "oops I need to clone my entire database just to insert a row."

  • One is to do a lot of structural sharing- when you produce a new state value, reuse all the pieces of the old value that didn't change. This is easier with Rc or a garbage collector, and more common in pure-functional-first ecosystems.
  • Another is non-copyable values- if State is !Copy, then the compiler can produce assembly for state_machine that just mutates current_state in-place, because nobody else can possibly be using it.
  • In fact, as a special case of the above, &mut references can be viewed as syntactic sugar for the state passing+returning pattern!

An example of efficient computation with a pure language is Sisal, which was developed at Livermore Labs for scientific computing. The semantics of updating a single element of a matrix is that the whole matrix gets copied. In spite of that, Sisal programs ran at 95% of the performance of equivalent Fortran programs. The secret was that the compiler could see that almost all the time the old copy of the matrix was never used again so the compiler could make the update in place.


Pretty much. You invoke a function and say "when you're done, invoke this closure so I can continue doing things". If you've used methods like then() and map() on Futures you've used CPS. Although normally this closure will only be invoked once and as the very last thing in a function.

Your comparison with goto isn't too far off the mark. Where an imperative language might have one block of code on the true branch in an if-statement and another block in the false branch, you might pass in two closures where one is invoked for true and another is invoked for false. The difference being this goto can contain state, outlive the function it came from, and be passed around as a value.

It's just a different pattern you can use when writing code which may be easier than the alternative for certain scenarios.

Once you understand something and have internalised it, it becomes hard to put into words because it's just common knowledge for you. Kinda like explaining the concept of a variable, interfaces, or control flow to a beginner.

@ZiCog I like to think about monads as some kind of "environment" for computations. In particular an environment that is "viral", i.e. once you are in that environment, you don't get out again easily.
For example:

  • It's easy to create a Future<T> from a T with future::ready, but to evaluate a Future<T> and get the resulting T you have to start an entire runtime. async functions are usually only called in other async functions.
  • If you call a function that returns io::Result<T>, your function usually also returns some kind of Result so that you can use ? to propagate errors.

The important point about a monad is that if you have some value of type T but "wrapped" in a monad (e.g. Future<T> or io::Result<T>), you can extract the inner value trivially if you are using it in a computation that eventually also returns a value wrapped in the same monad. That way the "state" of the monad can propagate from one value to the next.

Now, if a programming language knows the concept of a monad, it can provide a special syntax for computations inside a "monadic" environment. Much like the special syntax for async/await with futures or ? for Result or Option.
In other words, it can have a single unified syntax for async/await and ?, but also any other monad that you write yourself.


I lived for two years in a tent, wandering around the desert. Whenever I needed to clean the interior, I used a flap_mat method. I guess that made me a nomad.


Which reminds me. What on Earth is a flat_map? And why would I need one?

I can find some kind of definition of "flat_map" in C++/Boost and Ruby. The descriptions of which don't connect to anything much in my mind.

Rust has a "flat_map". Which:

Creates an iterator that works like map, but flattens nested structure.

Apparently it allows one to iterate over a list of stuff, where stuff is a list of stuff and that is a list of stuff... all as if it was a "flat" list of whatever the bottom layer of "stuff" is.

Which I guess saves writing nested iterators, or God forbid "for" loops.

That is a cool trick. Not sure I ever felt the need to do that.

And what has that to do with the mysterious "monad"?

I'm determined to catch one of these mythic beasts, in a page or so of code, so that someone can draw a red circle around it and say "There, there is a monad".


I guess if Rust were able to express the monad typeclass (trait), it would look something like this

// Hypothetical syntax -- Rust cannot actually express this trait.
// `Self<U>` stands for "the same type constructor applied to `U`",
// e.g. `Vec<U>` when implementing for `Vec<T>`.
trait Monad<T> {
    fn bind<U, F: Fn(T) -> Self<U>>(self, f: F) -> Self<U>;
    // ... other stuff
}

// Every type has to provide a specific implementation of `bind`
// that makes sense for the specific type. The implementation must obey
// the monadic laws.

// Here is how `Vec<T>` would implement `bind`.
// (Still hypothetical, since it depends on the trait above.)
impl<T> Monad<T> for Vec<T> {
    fn bind<U, F: Fn(T) -> Vec<U>>(self, f: F) -> Vec<U> {
        self.into_iter().flat_map(f).collect()
    }
}
Being a monad requires implementing a bind method (which satisfies some laws). What this does varies from monad to monad, according to what the behavior needs to be (implicit passing of state and sequential computation for IO, short-circuiting for Option/Result...). The implementation for Vec is just flat_map, because that is the monadic behavior of a vector (see Wikipedia for instance).

Maybe, what might help you understand is to realize that "monad" is not any specific data structure, it's a class of objects that respect some structural property. "Monad" lives in the same universe as semigroup, monoid, group, vector space, ring, algebra, module, field, etc... It's a class of objects with some operations that respect some properties.
The integers with the addition are a group, the vectors with a specific implementation of join/bind are a monad.


flat_map can be useful when for each input value you create a generator and you want to collect the output of those generators. I have never used it in anger, only in some toy tools that spawned a new worker for each input filename and each worker generated a list of keywords that I wanted to collect in a single set.

It sounds to me like you're running into some of the same conceptual issues I did when I tried to understand monads for the first time. For me, part of the issue was that I was trying to think about them in terms of a language which lacked the power necessary to create the abstraction of a monad.

That seems really abstract but (hopefully) this will help you or someone out there. Because the languages I knew couldn't express it, I had trouble understanding it. For just a minute, think about a language that lacks templates/generics (I will pick on C# 1 here). I can write a list of strings (implementation details omitted for brevity)

public class StringList 
    public void Add(string value);

    public string this[int i] { get; set; }

    public string RemoveAt(int i);

and I can also write a list of integers:

public class IntList
    public void Add(int value);

    public int this[int i] { get; set; }

    public int RemoveAt(int i);

but I don't have any way to abstract over these two lists in a completely type-safe way. The best I can do would be to use object everywhere and add runtime casts. So in terms of C# 1, there's no way to express just the concept of a List itself, you can only express specific instances of that List: StringList, IntList, etc. So while you or I could talk about List in the abstract, there's no way we can do so with actual code. But we both understand what a List looks like and we can use that knowledge to implement different concrete List instances correctly according to how Lists behave.

Of course with C# 2, we got generic types and so this became trivial:

public class List<T>
    public void Add(T value);

    public T this[int i] { get; set; }

    public T RemoveAt(int i);

And now the language has enough abstract power to talk about List correctly. We can use this to implement functions that care only about List but not exactly what is inside the list. For example:

public bool IsEmpty<T>(List<T> list);

public int BinarySearch<T>(List<T> list, T item) where T : IComparable<T>;

etc. The problem though is that even though we have added all of this expressiveness to the type system, we still lack the power to describe Monads in the language. We can of course create whatever concrete instances of them we want:

public class Option<T>
    public Option<U> Map<U>(Func<T, U> func);

    public static Option<T> Empty { get; }

public class List<T>
    public List<U> Map<U>(Func<T, U> func);

    public static List<T> Empty { get; }

and while we look at these types and see a similarity there, we don't have any way to express that similarity within the bounds of the C# type system. The thing we're missing is that we have no way to talk about or use a generic type T which is itself generic. We want to be able to say something like

public TMonad<U> Map<TMonad<_>, T, U>(TMonad<T> source, Func<T, U> func) where TMonad : Monad { ... }

When calling this function, you would use a generic type for TMonad like List (which is still generic!):

List<string> strings = Map<List, int, string>(new List<int> { 1, 2, 3 }, i => i.ToString());

If generic types let you abstract over a type used in your class or struct, then this goes one step further and lets you abstract over the generic version of that type. This is called higher-kinded types.

Neither C# nor Rust has that feature, and without it you can't write the definition of Monad in those languages; you can only write concrete instances of it (which themselves might still be generic), much like how without generics you can't write the definition of List, you can only write instances of it.

However, just because we can't write that definition today, doesn't mean there isn't value in talking about Monads because they provide a guide to implementing concrete instances of them in our language.

@FedericoStra provided a great answer about what Monad might look like in Rust hypothetically so I won't recreate that code here.

I will say though that a lot of the "monad functions" like map, flatMap, fold, etc tend to be used to deal with wrapping or unwrapping values from monads:

  • map takes a function that lets you use the value inside a Monad as if it were just a regular T value. If that doesn't sound useful, think about how ugly your code would be if you had to match on every Option everywhere or always use for loops instead of the Iterator methods.

  • flatMap takes a function that converts the T value inside your Monad into a U value inside your Monad.

    • For Option, that means fn flat_map<T, U>(opt: Option<T>, f: impl Fn(T) -> Option<U>) -> Option<U>. Aka, now you can run an operation that can return None inside your map but instead of getting an Option<Option<U>>, you just get an Option<U>.

    • For Iterator, that means fn flat_map<T, U>(iter: Iterator<Item = T>, f: impl Fn(T) -> Iterator<Item = U>) -> Iterator<Item = U>. Aka, now you can run an operation for each element of an iterator and return 0, 1 or more items and instead of getting an Iterator<Item = Iterator<Item = U>>, you just get an Iterator<Item = U>.


You can look at these different instances of those functions and, if you squint a bit, you can see how they behave similarly even though Option and Iterator are pretty different things. That similarity is essentially what Monad is.


After playing around a bit I realized that in Rust we can express a trait which is a relaxation of monad.
What is missing is that the trait Monad cannot enforce any relationship between the type implementing it, T and R. For example, in the implementation for Vec<T> there is no way to impose that R == Vec<U> and in the implementation for Option<T> I cannot impose that R == Option<U>. It's my responsibility to implement the trait with the correct return type, whereas in Haskell you cannot do otherwise.

// If `C<T>` implements `Monad<T, R>`,
// then we must have `R == C<U>` for some `U`
trait Monad<T, R> {
    fn bind<F: Fn(T) -> R>(self, f: F) -> R;
}

impl<T, U> Monad<T, Vec<U>> for Vec<T> {
    fn bind<F: Fn(T) -> Vec<U>>(self, f: F) -> Vec<U> {
        self.into_iter().flat_map(f).collect()
    }
}

impl<T, U> Monad<T, Option<U>> for Option<T> {
    fn bind<F: Fn(T) -> Option<U>>(self, f: F) -> Option<U> {
        self.and_then(f)
    }
}

fn main() {
    let f = |s: &str| -> Vec<char> { s.chars().collect() };
    let v = vec!["Federico", "Stra"];
    let w = v.bind(f);
    println!("{:?}", w);

    let f = |x: i32| Some(x + 1);
    let none: Option<i32> = None;
    println!("{:?}", none.bind(f));
    let some = Some(41);
    println!("{:?}", some.bind(f));
}

What makes Monad useful in Haskell isn't the definition itself, but the language and library support that comes with it.

So of course it's going to look pointless in Rust—if our Iterator trait had no helper methods, no implementations, and no for loop support, then it would seem pretty pointless as well :stuck_out_tongue:


I agree. The trait Monad above allows us to be polymorphic over implementations of bind, but you can only use it "single-shot". It's impossible to compose these monads because it's impossible to express in the type system the constraints on the return type.

The best example, the one that finally made me understand monads, was the watch analogy.
The result of a watch has type Time and changes every time you look at it, but the watch itself is a pure value of type Watch. So you can pass a Watch around as a pure Haskell value, yet you cannot read the time until you look at the watch at runtime. However, you can still map over the value at compile time.
I hope this helps you understand the idea better.


Thank you! Good analogy of IO and laziness

Monads don't have anything to do with (im)purity or state.
They are just a framework for extracting elements out of a container.
You can do that manually, too, but you would repeat yourself, accessing elements through getters every time.
Note that access through getters varies from collection to collection.

Instead, the idea is to implement element access through those collection-specific getters once, and then only call the monadic operator <- to access elements in a uniform way for every container conforming to the monad interface: no more access through vec[i], list.get(i), or if let Some(a) = var else ... .
Monads strongly overlap with ranges and transducers.
But they are harder to understand than ranges because they do more.

In detail, they have a bidirectional workflow: they translate to nested for loops creating lists of computed values (the first direction, down to the innermost for); when an inner for loop is done, the resulting list gets flat-mapped by the enclosing outer for loop (the second direction, back out to the outermost for).

Further, to extend the idea of abstracting over every branching+access, you need monads to compose, which in general they don't. Because of this you have to write monad transformers, which become damn hard to understand.

As nice as monads seem to be, they haven't reached mass adoption because of this inherent complexity. It seems that traditional control flow matches the way we think better than monads do.


Like @ZiCog, I'm of the older generation and find a lot of this newfangled stuff confusing. Recently, a friend explained monads in a way I can almost understand. The most important feature for me is that the monad lets me chain operations containing Option or Result without needing to worry about None or Err values. Here's a silly example.

fn main() {
    let v = vec![Some("1"), Some("2"), None, Some("x")];
    let new_vec: Vec<_> = v.iter()
        .map(|i| i.and_then(|i| i.parse::<i32>().ok().map(|i| i + 42)))
        .collect();
    println!("{:?}", new_vec);
}

It prints [Some(43), Some(44), None, None]. The and_then() and map() methods on Option automatically take care of the None elements.


Pretty much. It lets you turn a Vec<Vec<T>> into a Vec<T>.

As an example, imagine you are calling an API which uses pagination. Every call to the API will return a list of items, so if you wrote something like (0..num_pages).map(|page_number| get_page(page_number)) you could use flat_map() to flatten that all down into a single Vec.

Sure, you could use a for-loop and Vec::extend(), but sometimes the functional version ends up being more ergonomic or performant.

Wow, I am not the only one who isn't clear about what a monad is :smiley: Almost everyone puts it differently :smiley: I am relieved :smiley:


Here's yet another attempt to demystify the why of monads, but not actually explain them. Sorry, if the misunderstandings this is attempting to clear up are not the actual reason for folk's underlying struggle with monads.

First, I'll start by echoing others:

So to summarize:

  1. The concept of a monad doesn't give List or Option their powers to flat_map or to automatically handle Nones. Those are their inherent powers provided to them by their methods. These inherent abilities allow them to be classified as monads.
  2. Like everything in abstract algebra, monads are a pattern. The talk of monads only really becomes relevant when we ask whether the language's type system allows you to take advantage of this pattern to, e.g., DRY out the code. Rust's type system does not at the moment (more on that at the end).

So perhaps let's focus on very familiar patterns that rust does allow us to generalize over, and not talk about monads just so that everything feels extremely familiar.

A familiar pattern: In many programming languages, a very common pattern we see with some binary operators (integer addition, integer multiplication, string concatenation, list concatenation) is the following:

  1. When you apply the operation to two instances you get a new instance of the same type (e.g. 1+2 is also an int, concat("a", "b") is also a string).
  2. The operation is associative: e.g. a + (b + c) = (a + b) + c
  3. There is an identity element: e.g. 0 + a = a + 0 = a

Any pair of (type, operator) that satisfies these three properties is called a ...sigh... monoid[1], because mathematicians haven't historically taken the most difficult problem in computer science seriously enough. Examples:

  • (int, +) - identity is 0
  • (int, *) - identity is 1
  • (string, concatenation) - identity is an empty string
  • (list, concatenation) - identity is an empty list

In Rust, we can abstract over monoids with a trait[2]:

trait Monoid {
  fn id() -> Self;
  fn op(self, other: Self) -> Self;
}

You can easily define this for (i32, +):

impl Monoid for i32 {
  fn id() -> Self { 0 }
  fn op(self, other: Self) -> Self { self + other }
}

and for (String, concatenation)[3]:

impl Monoid for String {
  fn id() -> Self { String::new() }
  fn op(mut self, other: Self) -> Self { self += &other; self }
}

This trait now allows us to write extremely general methods that abstract over all monoids. E.g.

fn monoid_em_all<T: Monoid>(list: Vec<T>) -> T {
  let mut res = T::id();
  for val in list {
    res = res.op(val);
  }
  res
}
And now this single function can add all integers in a list or concatenate all strings in a list. Or whatever the relevant equivalent is for any monoid. And this helper is reusable in any generic context where we know that T: Monoid. DRYness galore.

All very familiar, I hope?

Monads: I don't think it will be helpful to define monads here. What is important is that Rust's type system does not allow us to abstract over properties like monads. Why? Because:

In other words, there's no way in Rust for us to say

impl Monad for Vec { ... } // Look, Ma, no `<T>`!

Vec<T> is a type, but Vec is a type-constructor—it takes a type T as an argument and produces another type Vec<T>[4]. Being able to abstract over monads (like we abstracted over monoids above) enables a very nice do {} notation in Haskell, which simultaneously supports:

just because the two corresponding type constructors implement the same abstraction—monad! Such is the power of higher-kinded abstraction. Now, if you actually want to concretely comprehend this, then:


So what now?

In the end, what matters in rust isn't whether or not we define a monad trait, but whether we can. Support for such higher-kinded abstractions is typically brought up in discussions about rust support for either:

  • abstracting over higher-kinded types, or
  • associated type constructors (which offers the requisite abilities, possibly at the expense of poorer type inference).

El Fin


  1. This really nice table shows its relation to other tragic epithets of abstract algebra. If it isn't abundantly unclear, I don't ...uh... lava the terminology.
  2. This is a lie, or, at least, not fully general. I'm brushing under the rug that you can only implement this for ints once, even though there are clearly at least two different monoids for ints. Supporting both is possible with the classic rust double-dispatch trick, but that's tangential to the narrative.
  3. This is clearly inefficient since we would want to take other as a &str instead of requiring the caller to pass ownership of an allocated value...but then this wouldn't be a mathematical monoid. A rustier variant of monoid can be defined by using other: T where T: ToOwned<Self> instead.
  4. Vec is thus a "type-level" function of kind * → *, where each * represents a concrete type. So monads also tend to appear in discussions espousing rust support for abstracting over "higher-kinded types" to support defining and implementing "higher-kinded traits" like monad.