Announcing Failure

I've got a new crate out called failure, which is intended to make it easier to manage your error types. Feel free to read this blog post & the documentation, and let me know what you think if you try it out!


Still reading but your blog looks amazing! :slight_smile:


  • It's not clear just from the post why I'd use failure instead of error-chain. A direct comparison would be really helpful.
  • Thanks for targeting stable and not nightly.

Judging by the example code this crate looks really useful!


Sorry, I couldn't resist this pun: Most people wouldn't brag about their failures so much :wink:
(Though in the spirit of blameless culture, we probably should do it more! Share the learning!)

On topic: thanks! It sounds like an interesting successor to/extension of error_chain. I'll be sure to look into it more!


I've got a fairly mature project that makes heavy use of error_chain. After hearing your talk on Failure at the meetup, I thought I'd give Failure a try. So far, I've failed. I get

let mut fail: &Fail = e;
    		      ^ the trait `std::error::Error` is not implemented for `failure::Error`

no matter what I try.

I've attached what I think are the relevant code snippets. Note that the program compiles if run() returns Result<(), BlueprintError> instead of Result<(), Error>.


git = ""
git = ""

but I also tried

failure = "0.1.0"
failure_derive = "0.1.0"

use failure::{Error, Fail};

fn main() {
   if let Err(ref e) = run() {
      use ::std::io::Write;
      let stderr = &mut ::std::io::stderr();
      let _ = writeln!(stderr, "Error: {}", e);
      let mut fail: &Fail = e; // ------------- Here's where I get the error
      while let Some(cause) = fail.cause() {
         let _ = writeln!(stderr, "Caused by: {}", cause);
         fail = cause;
      }
   }
}

fn run() -> Result<(), Error> {
   let (ncells, nports) = (10, 4);
   let blueprint = Blueprint::new(ncells, nports)?;
   Ok(())
}

use failure::Error;

#[derive(Debug, Fail)]
pub enum BlueprintError {
    #[fail(display = "Invalid {} {}", ncells, num_border)]
    CellCount { ncells: usize, num_border: usize },
}

pub struct Blueprint {
   interior_cells: Vec,
   border_cells: Vec,
   edges: Vec,
}

impl Blueprint {
   pub fn new(ncells: CellNo, ports_per_cell: PortNo) -> Result<Blueprint, BlueprintError> {
      if *ports_per_cell > *ncells {
         return Err(BlueprintError::CellCount { ncells: *ncells, num_border: *ports_per_cell });
      }
      Ok(Blueprint { interior_cells: vec![], border_cells: vec![], edges: vec![] })
   }
}

I’m not sure how you’re getting that error message but note that failure::Error does not implement the Fail trait, so that assignment won’t work. You can get the cause() first, which does return a &Fail and then start iterating from there.
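For anyone hitting the same wall, the iteration pattern looks roughly like this. Since failure itself isn't on hand here, this sketch uses std's `Error`/`source()` as stand-ins for failure's `Fail`/`cause()`, and the `Outer`/`Inner` types are made up for illustration:

```rust
use std::error::Error as StdError;
use std::fmt;

// Two toy error types so the chain has more than one link.
#[derive(Debug)]
struct Inner;

impl fmt::Display for Inner {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "inner failure")
    }
}
impl StdError for Inner {}

#[derive(Debug)]
struct Outer(Inner);

impl fmt::Display for Outer {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "outer failure")
    }
}
impl StdError for Outer {
    fn source(&self) -> Option<&(dyn StdError + 'static)> {
        Some(&self.0)
    }
}

fn main() {
    let e = Outer(Inner);
    // Print the top-level error, then follow the chain of causes.
    let mut link: &dyn StdError = &e;
    println!("Error: {}", link);
    while let Some(cause) = link.source() {
        println!("Caused by: {}", cause);
        link = cause;
    }
}
```

With failure the shape is the same, except the starting `&Fail` comes from calling `cause()` on the `Error` rather than from the value itself.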


That solved the problem. I got confused by your example at The Fail trait. You indeed say "Assume err is a type that implements Fail", which is why it works when run() returns BlueprintError instead of Error.

Many of my methods can return a variety of errors, so I plan to use custom fail types and return Error. If you expect that to be a common case, perhaps the example would be clearer if you started with Error instead of a type that implements Fail. I also noted that the Run button results in a compile error for several of your examples. It really helps me learn how to use a tool if I have complete examples that compile correctly. That way I can delta off of them until I get comfortable with the system.

I would be interested in a comparison of failure vs error-chain also.


Skip to 1:17:10


Hmm, I actually watched it on livestream, but think I will watch it again :slight_smile:

I successfully migrated from error_chain to Failure, in the sense that my program compiles, runs correctly, and prints what I expect when I purposely introduce an error. Once I got past my misunderstanding of the example, it took me about 6 hours to finish the conversion of 40 error_chain errors in 25 modules, but I haven't converted any of what error_chain calls foreign_links or links. The migration included changing Result<()> to Result<(), Error> and ErrorKind::Bar to FooError::Bar throughout. In the process, I generated a gazillion compiler errors that I knocked down one by one, which is why it took me so long. A more experienced Rust programmer would certainly have completed the conversion much more quickly. One thing that helped was that I could almost always put what I had in the error_chain display directly into #[fail(display = ...)] for my custom error types.

One disappointment is build time. Since error_chain makes heavy use of macro expansions, it requires a larger recursion limit than the default. I had thought that Failure would build faster, but it doesn't.

I'm now trying to figure out how to use err.cause() and err.context() so I can get the same kind of error report I used to get from chain_err.


I've seen the IntelliJ family of tools automate these kinds of mundane refactorings before. For example the other day I was using CLion and this bit of code was highlighted:

if (vec.size() == 0) {  // CLion highlights this and suggests using .empty() instead.

Pressing Alt-Enter in the IDE and applying the quick-fix automatically converts it to:

if (vec.empty()) {

I bring this up as an example of where the Rust tooling (even outside of the IntelliJ world) could be a great help. I didn't realize how much time features like this could save until I started using them more.

If porting to failure is a common thing that lots of Rust projects do in the future, perhaps we can make a tool to automate the mundane parts?


Released failure 0.1.1 with some additional features: Failure 0.1.1 released


I'm currently building a library that I migrated from stdlib errors to failure. Currently, I'm returning my own error type and not Error, since it's relatively small and it feels like the right thing to do...

What are your criteria for deciding between the general, boxed Error and specific error types? Is it purely a size thing, or more of an "applications use Error, libraries use specific errors" heuristic?

(Edit - I realized the problem I was having with Context)

I'm going through the process of converting our codebase from error-chain to failure. Overall it has been going very well, and there's a dramatic reduction in the amount of code relating to error handling.

We have been using error-chain pretty successfully in the code, and the macros help reduce a lot of boilerplate error handling, particularly adding type conversions. But it does mean that every crate has its own error type, even if it's only being used to encapsulate and transport errors from elsewhere.

The code is heavily async, and uses a lot of futures-based combinators. One of the most common frustrations with working with that pattern has been keeping track of what error types are where, and converting them to whatever the "ambient" error type for the context is - without the benefit of the ? operator.

The code also uses a fair amount of trait-based genericity, which requires a lot of the traits to have an Error associated type for their implementations to define.

Moving to failure has been a breath of fresh air - by converting everything to a single uniform Error type, we can eliminate the need for error type conversion within chunks of async code, and can simplify all the async traits by eliminating the need for an Error associated type.

Just from a straightforward "make it compile" conversion, I've reduced the number of lines by around 40% (1000 lines added, 1600 removed, over 133 files), before starting to remove all the unneeded (now no-op) conversions.

Not everything has been rosy however.

Chaining Errors
Our code makes a lot of use of error-chain's .chain_err() combinator, so by the time errors bubble up to the top of the stack to be reported, they have a good causal chain which describes not only why the error occurred, but what was going on at the time.

failure has a built-in notion of a cause which is exactly what we want, but I haven't found a good idiom for using it. It also has the .context() method on both Error and Fail, but that doesn't seem to be the same thing - but I'm not really sure.

I'm basically confused by context and cause, and not sure to what extent they're the same thing. The documentation for context talks about it being suitable for user consumption, but that seems out of place - our code is server code, so the "user" is whoever is digging through the log files, and we always want maximum precise detail there if we're trying to debug something.

(If it were actually a user-facing application though, nothing that's coded into the source would ever be directly presented because of localization, etc. - I don't think it's appropriate for an error-handling library to try to address UI issues.)

@alexcrichton filed an issue about this, and subsequently closed it, but I'm not sure the matter is actually settled.

Edit - I realized that when using .with_context() and then .downcast() to extract errors, I was downcasting to my error type, not Context<MyType>. Fixing that gives the behaviour I want.

failure 0.1.1 introduces the bail!() and ensure!() macros, which look similar to error-chain's. Unfortunately, failure's bail!() 1) compiles cleanly with existing uses, and 2) does something superficially similar but actually quite different. Specifically, it takes its argument, stringifies it, and returns it as an error message wrapped in an Error. In error-chain, by contrast, bail! takes its argument, converts it to a suitable error for the context, and returns it. In other words, if you do bail!(MyError), error-chain will return it, retaining the type info of MyError, whereas failure's bail!() will simply return err_msg(format!("{}", MyError)), losing the type info.

I think it's a mistake to introduce something like this. It would be better if it were completely incompatible and simply didn't compile with existing uses, so they can be iteratively fixed - either by using a completely different name, or by changing the implementation somehow.

I've been using Err(MyError)? as a replacement, and I think it's an overall improvement, so I'd be fine with failure simply not having the bail!() macro.
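For what it's worth, the Err(MyError)? idiom works because ? routes the error through From, so the concrete type is preserved. A minimal sketch, with Box<dyn Error> standing in for failure::Error and a made-up MyError type:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "my error")
    }
}
impl Error for MyError {}

// Unlike a stringifying bail!, `Err(MyError)?` converts via From,
// so the concrete type survives inside the boxed error.
fn run() -> Result<(), Box<dyn Error>> {
    Err(MyError)?;
    Ok(())
}

fn main() {
    let err = run().unwrap_err();
    // Type info was retained: the downcast succeeds.
    assert!(err.downcast_ref::<MyError>().is_some());
}
```

A stringified error message could never be downcast back to MyError, which is exactly the difference being described above.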


Error does not implement Fail (and so can't be used as a #[cause])
I understand why - Error has a blanket impl<F: Fail> From<F> for Error, which makes things very convenient. If Error also implemented Fail, then this would result in conflicting implementations of From.

But it also means there's no way to express a generic type for a cause. For example, if I have:

#[derive(Debug, Fail)]
enum MyError {
    #[fail(display = "Conversion of {} failed", _0)]
    Conversion(String, #[cause] ???), // what type?
}

then there's no one type which can handle a number of different conversion failures. The obvious choice would be Error, because it's used to wrap up all the other error types. However, #[derive(Fail)] requires a #[cause] field to implement Fail - but Error doesn't implement Fail.

If the custom-derive code had type information available to it, then it could implement #[cause] in two different ways - by directly returning a &Fail for Fail-implementing causes, or by calling Error::cause() to get the inner error for Error causes. But my understanding is that it doesn't have type information, so maybe it needs to have #[cause] and #[error_cause] to handle these cases (or something like that).

I've tried to work around this by using Box<Fail> for a cause, but Box<Fail> doesn't implement Fail. I have a hacky local BoxFail type, but it seems like a wart. And it doesn't help with Error unless I have a Fail-implementing wrapper for it. I can get one with .context(""), but that seems like a pretty awful hack.
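The newtype workaround being described looks something like this. Failish here is a toy stand-in for the Fail trait (so the sketch is self-contained), and Leaf/BoxFail are made-up names:

```rust
use std::fmt;

// Toy stand-in for the Fail trait, just to show the newtype trick.
trait Failish: fmt::Debug + fmt::Display {
    fn cause(&self) -> Option<&dyn Failish> {
        None
    }
}

#[derive(Debug)]
struct Leaf;

impl fmt::Display for Leaf {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "leaf failure")
    }
}
impl Failish for Leaf {}

// `Box<dyn Failish>` does not itself implement Failish, but a newtype
// wrapper can forward every method to the boxed contents.
#[derive(Debug)]
struct BoxFail(Box<dyn Failish>);

impl fmt::Display for BoxFail {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}
impl Failish for BoxFail {
    fn cause(&self) -> Option<&dyn Failish> {
        self.0.cause()
    }
}

fn main() {
    let wrapped = BoxFail(Box::new(Leaf));
    assert_eq!(wrapped.to_string(), "leaf failure");
    assert!(wrapped.cause().is_none());
}
```

It works, but as noted it feels like a wart: every trait method has to be forwarded by hand, and it still doesn't cover Error without a further wrapper.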


@withoutboats I'm looking at this, and I think I can see how this improves on existing trait-based error management in Rust.

However my approach to errors has always been enum-based, i.e. something like this:

pub type MyResult<T> = Result<T, MyErr>;

pub enum MyErr {
    // all error variants for the module
}

And then just have error-prone functions and methods return a MyResult, or anything with a From impl for MyErr.
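Fleshed out, the pattern looks like this. The variant names, the From impl target, and first_number are all hypothetical, just to make the sketch compile:

```rust
use std::fmt;
use std::num::ParseIntError;

pub type MyResult<T> = Result<T, MyErr>;

#[derive(Debug)]
pub enum MyErr {
    Parse(ParseIntError),
    Empty,
}

impl fmt::Display for MyErr {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            MyErr::Parse(e) => write!(f, "parse failed: {}", e),
            MyErr::Empty => write!(f, "empty input"),
        }
    }
}

// The From impl is what lets `?` convert foreign errors into MyErr.
impl From<ParseIntError> for MyErr {
    fn from(e: ParseIntError) -> Self {
        MyErr::Parse(e)
    }
}

fn first_number(s: &str) -> MyResult<i64> {
    let tok = s.split_whitespace().next().ok_or(MyErr::Empty)?;
    Ok(tok.parse()?) // ParseIntError converted via From
}

fn main() {
    assert_eq!(first_number("42 rest").unwrap(), 42);
    assert!(matches!(first_number(""), Err(MyErr::Empty)));
    assert!(matches!(first_number("abc"), Err(MyErr::Parse(_))));
}
```

As described below, the payoff of the enum approach is that consuming code can match on the variants directly.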

In cases where there is a specific need for a consuming crate/module to be able to define its own error variants, I can definitely see the advantages of trait based error management.
I can also see the advantage when the module author has a strong reason to hide the specific nature of the errors from the consuming code. That said, I prefer to expose the errors as enum variants as it allows consuming code to match on it.
But if neither of those things is necessary, would there still be a compelling reason e.g. for my projects to switch from enum-based error management?


Thank you for thinking of us #![no_std] and no-heap users. Unlike std::error::Error, and even the best plans for error-chain, failure is actually usable!


I would still recommend implementing Fail for your custom error so that other people using your library can integrate it with the whole ecosystem around failure. Also, imagining that some of your variants refer to other error types, it is probably valuable for your users for you to implement the cause method.


I've encountered a minor inconvenience with Failure. Say that I have some function

fn foo(&self) -> Result<(), Error> { ... }

that is called from some other function

fn bar(&self) -> Result<(), Error> { foo() }

If I add a context in the most obvious way,

fn bar(&self) -> Result<(), Error> { foo().context("bar") }

it won't compile because I'm returning failure::Context instead of failure::Error. The fix is obvious

fn bar(&self) -> Result<(), Error> { Ok(foo().context("bar")?) }

but slightly annoying. (Well, the most annoying part is that I can never remember to add the Ok(...) until the compiler tells me I have to, but I'd rather put the blame somewhere else.)
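The mechanics behind the Ok(...) dance can be sketched without failure itself. Here Context, the ResultExt extension trait, and the "reading config" message are all hypothetical, with Box<dyn Error> standing in for failure::Error:

```rust
use std::error::Error;
use std::fmt;
use std::io;

// Hypothetical stand-in for failure's Context: an error plus a message.
#[derive(Debug)]
struct Context {
    msg: &'static str,
    inner: io::Error,
}

impl fmt::Display for Context {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}: {}", self.msg, self.inner)
    }
}
impl Error for Context {}

// Hypothetical extension trait mimicking the .context() combinator.
trait ResultExt<T> {
    fn context(self, msg: &'static str) -> Result<T, Context>;
}
impl<T> ResultExt<T> for Result<T, io::Error> {
    fn context(self, msg: &'static str) -> Result<T, Context> {
        self.map_err(|inner| Context { msg, inner })
    }
}

fn foo() -> Result<(), io::Error> {
    Err(io::Error::new(io::ErrorKind::Other, "boom"))
}

// .context() yields Result<_, Context>, not Result<_, Box<dyn Error>>,
// so the tail expression needs Ok(..?) to trigger the From conversion.
fn bar() -> Result<(), Box<dyn Error>> {
    Ok(foo().context("reading config")?)
}

fn main() {
    let err = bar().unwrap_err();
    assert_eq!(err.to_string(), "reading config: boom");
}
```

The ? inside the Ok(...) is what performs the error-type conversion; returning the Result of .context() directly skips that step, hence the compile error.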


Same here.
Also, I think .context() and .with_context() are bad names: they apply only to the error case, but the names suggest they attach a generic context to the whole execution, not just to the error.

In error chain, it was .chain_err() which is much clearer, IMO.

Maybe .wrap_err() or something like this, to be in line with .map_err()?


.err_context(), perhaps?