Strange compiler error (bug?) - axum handler

I'm trying to migrate my actix-web project to axum and I'm getting a "trait bound not satisfied" error compiling an axum handler that is very confusing. It almost seems like a compiler problem.

I reproduced it with minimal additions to the axum "hello-world" example. The very strange thing is that (for this example) it goes away if I simply move an assert a couple of lines earlier. The problem is, I'm getting the same error in other cases that are not so easily fixed.

Here's my Cargo.toml

[package]
name = "example-hello-world"
version = "0.1.0"
edition = "2018"
publish = false

[dependencies]
axum = "0.4.5"
tokio = { version = "1.0", features = ["full"] }

Here's the code that will not compile. In the handler, if you comment out the failing assert and uncomment the one above the mutex lock, it compiles fine.

use axum::{routing::get, Router, extract::Extension,
           AddExtensionLayer};
use std::net::SocketAddr;
use std::sync::{Arc, Mutex};

#[derive(Debug, Clone)]
struct Config {
    msg: String,
}
struct AppState {
    cfg: Mutex<Config>,
}

#[tokio::main]
async fn main() {
    // Initialize Config data
    let cfg = Config {
        msg: "cruel".to_string(),
    };

    let cfg_data = Arc::new(AppState{
            cfg: Mutex::new(cfg.clone()),
    });

    // build our application with a route
    let app = Router::new().route("/", get(handler))
                           .layer(AddExtensionLayer::new(cfg_data));

    // run it
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    println!("listening on {}", addr);
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn handler(
        Extension(data): Extension<Arc<AppState>>,
    ) -> String {
    // This assert compiles.
    // assert_eq!(tokio::spawn(async { 1 }).await.unwrap(), 1);

    let cfg = data.cfg.lock().unwrap();

    // This assert does not!
    assert_eq!(tokio::spawn(async { 1 }).await.unwrap(), 1);

    format!("Hello {} World!", cfg.msg)
}

Here's the compiler error:

 Compiling example-hello-world v0.1.0 (/home/chris/projects/rust/axum/examples/hello-world)
error[E0277]: the trait bound `fn(Extension<Arc<AppState>>) -> impl Future<Output = String> {handler}: Handler<_, _>` is not satisfied
   --> examples/hello-world/src/main.rs:26:44
    |
26  |     let app = Router::new().route("/", get(handler))
    |                                        --- ^^^^^^^ the trait `Handler<_, _>` is not implemented for `fn(Extension<Arc<AppState>>) -> impl Future<Output = String> {handler}`
    |                                        |
    |                                        required by a bound introduced by this call
    |
note: required by a bound in `axum::routing::get`
   --> /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.4.5/src/routing/method_routing.rs:393:1
    |
393 | top_level_handler_fn!(get, GET);
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `axum::routing::get`
    = note: this error originates in the macro `top_level_handler_fn` (in Nightly builds, run with -Z macro-backtrace for more info)

For more information about this error, try `rustc --explain E0277`.
error: could not compile `example-hello-world` due to previous error

It seems like it's something about the shared state being passed in, since that appears to be the common element in the cases I'm hitting.

Any advice or insights on this would be greatly appreciated. I've tinkered around with the return values but it all seems quite mystifying to me.

Try out this one: the #[debug_handler] attribute macro from the axum-macros crate.
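
With axum 0.4 it comes from a separate crate, so add axum-macros to your dependencies (I believe the matching version is "0.1", but double-check on crates.io) and annotate the handler. Applied to your example it would look roughly like this:

use axum_macros::debug_handler;

#[debug_handler] // turns the opaque Handler<_, _> failure into a targeted diagnostic
async fn handler(
        Extension(data): Extension<Arc<AppState>>,
    ) -> String {
    let cfg = data.cfg.lock().unwrap();
    assert_eq!(tokio::spawn(async { 1 }).await.unwrap(), 1);
    format!("Hello {} World!", cfg.msg)
}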


Thank you so much!

Using that I was able to figure out the problem.

48 |         assert_eq!(tokio::spawn(async { 1 }).await.unwrap(), 1);
   |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ await occurs here, with `cfg` maybe used later

So it was because cfg (the MutexGuard) was still in scope across the await in the second assert. Once I limited the lifetime of cfg, I was able to put the assert in later.

For example this compiles:

async fn handler(
        Extension(data): Extension<Arc<AppState>>,
    ) -> String {

    let result = {
        let cfg = data.cfg.lock().unwrap();
        format!("Hello {} World!", cfg.msg)
    };

    assert_eq!(tokio::spawn(async { 1 }).await.unwrap(), 1);

    result
}

To think they put together a special macro for debugging axum. I wish I knew how to even look for this!

You are quite awesome!


I am trying to understand these errors better.

Here's an example of an async handler where I am getting additional explanations after adding debug_handler.

use axum_macros::debug_handler;
#[debug_handler]
pub async fn get_logs_for_job(
        Extension(data): Extension<Arc<AppState>>,
        Path(job_id): Path<i32>,
    ) -> String {
    let cfg = data.cfg.lock().unwrap();
    cfg.log(Info, "normal",
        format!(r#"get_logs_for_job:\n\
                 job_id={:?}"#, job_id));
    let client =
        match connect_db(&cfg).await {
            Ok(clnt) => clnt,
            Err(_) => return "no connect".to_string(),
        };

    match fetch_log_rows(&client, Some(job_id), None, None, &cfg).await {
        Ok(rows) => rows,
        Err(_) => "db fetch failed".to_string()
    }
}


And here's the error:

  Compiling rshot v0.4.0 (/home/chris/projects/shotdb/Shared/app/rshot)
error[E0277]: the trait bound `fn(Extension<Arc<AppState>>, axum::extract::Path<i32>) -> impl Future<Output = std::string::String> {handlers::get_logs_for_job}: Handler<_, _>` is not satisfied
   --> src/main.rs:157:42
    |
157 | .route("/jobs/:jobid/log-responses", get(get_logs_for_job))
    |                                      --- ^^^^^^^^^^^^^^^^ the trait `Handler<_, _>` is not implemented for `fn(Extension<Arc<AppState>>, axum::extract::Path<i32>) -> impl Future<Output = std::string::String> {handlers::get_logs_for_job}`
    |                                      |
    |                                      required by a bound introduced by this call
    |
note: required by a bound in `axum::routing::get`
   --> /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.4.5/src/routing/method_routing.rs:393:1
    |
393 | top_level_handler_fn!(get, GET);
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `axum::routing::get`
    = note: this error originates in the macro `top_level_handler_fn` (in Nightly builds, run with -Z macro-backtrace for more info)

error: future cannot be sent between threads safely
  --> src/rest/handlers.rs:62:1
   |
62 | pub async fn get_logs_for_job(
   | ^^^ future returned by `get_logs_for_job` is not `Send`
   |
   = help: within `impl Future<Output = std::string::String>`, the trait `Send` is not implemented for `std::sync::MutexGuard<'_, cfg::Config>`
note: future is not `Send` as this value is used across an await
  --> src/rest/handlers.rs:81:11
   |
71 |     let cfg = data.cfg.lock().unwrap();
   |         --- has type `std::sync::MutexGuard<'_, cfg::Config>` which is not `Send`
...
81 |     match fetch_log_rows(&client, Some(job_id), None, None, &cfg).await {
   |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ await occurs here, with `cfg` maybe used later
...
85 | }
   | - `cfg` is later dropped here
note: required by a bound in `__axum_macros_check_get_logs_for_job_future::check`
  --> src/rest/handlers.rs:62:1
   |
62 | pub async fn get_logs_for_job(
   | ^^^ required by this bound in `__axum_macros_check_get_logs_for_job_future::check`

For more information about this error, try `rustc --explain E0277`.
error: could not compile `rshot` due to 2 previous errors

My understanding is that the await on the call to fetch_log_rows might not complete immediately, and this will cause the tokio runtime to suspend the handler, poll other work, and resume it after the call completes. The error above complains that the cfg object will be dropped at the end of the handler. My confusion is: why does that matter? If the runtime picks back up at the point where the await was called, and that await happens before the end of the handler, won't cfg still be intact, since the resume point is prior to the function end?
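
To make sure I'm asking about the right thing, here is a stripped-down sketch (no axum involved, just a std Mutex and a dummy await point) of the pattern I believe the compiler is complaining about:

use std::sync::Mutex;

async fn some_await_point() {}

// The std MutexGuard is still alive across the .await, so it becomes part
// of the future's saved state, and MutexGuard is not Send.
async fn guard_across_await(m: &Mutex<i32>) {
    let guard = m.lock().unwrap();
    some_await_point().await; // guard still in scope here
    println!("{}", *guard);
}

// Here the guard is dropped before the .await, so the future stays Send.
async fn guard_dropped_first(m: &Mutex<i32>) {
    {
        let guard = m.lock().unwrap();
        println!("{}", *guard);
    } // guard dropped here
    some_await_point().await;
}

fn assert_send<T: Send>(_t: T) {}

fn main() {
    let m = Mutex::new(0);
    // assert_send(guard_across_await(&m)); // fails: the future is not Send
    assert_send(guard_dropped_first(&m)); // compiles
}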

Please edit the post to include the full error as reported by cargo build.

Done. Thank you!

So I was able to fix the compiler errors by taking a lot more care to minimize the lifetime of the cfg variable I was getting the complaints about.

I still do not understand why this was necessary for the reasons outlined in my previous post.

It was not only inside the handler that I needed to do this, but in the other async functions that I was calling from the handler as well. Leaving debug_handler on the handler itself was enough to flag problems of this type, and as long as I focused on those and ignored the trait-bound errors, that was all that was necessary.

Here's the new handler that works. It scopes the usage of cfg, and the additional helper functions were modified to accept the mutex from the AppState that is passed in to the handler.

For brevity I'm not including those helper functions. In short, I'm no longer passing around a locked cfg value, but the mutex itself, and inside each helper I take care that the MutexGuard is dropped before any await (a rough sketch of what one of those helpers looks like now follows after the handler).

use axum_macros::debug_handler;
#[debug_handler]
pub async fn get_logs_for_job(
    // orig actix_web args
    // data: web::Data<AppState>,
    // web::Path(job_id): web::Path<i32>
    //
        // axum args
        Extension(data): Extension<Arc<AppState>>,
        Path(job_id): Path<i32>,
    ) -> String {

    {
        let cfg = data.cfg.lock().unwrap();
        cfg.log(Info, "normal",
            format!(r#"get_logs_for_job:\n\
                     job_id={:?}"#, job_id));
    };

    let client =
        match connect_db_w_mutex(&data.cfg).await {
            Ok(clnt) => clnt,
            Err(_) => return "no connect".to_string(),
        };

    match fetch_log_rows(&client, Some(job_id), None, None, &data.cfg).await {
        Ok(rows) => rows,
        Err(_) => "db fetch failed".to_string()
    }
}
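
As promised, here's a rough, hypothetical sketch of the shape those helpers take now. Config's field, Client and connect_with are just stand-ins for illustration; the real connect_db_w_mutex does more than this:

use std::sync::Mutex;

// Stand-in types for illustration only.
struct Config { db_url: String }
struct Client;
async fn connect_with(_url: &str) -> Result<Client, ()> { Ok(Client) }

// The helper now takes the Mutex itself rather than a locked Config.
async fn connect_db_w_mutex(cfg: &Mutex<Config>) -> Result<Client, ()> {
    // Copy out what the connection needs while the lock is held...
    let url = {
        let guard = cfg.lock().unwrap();
        guard.db_url.clone()
    }; // ...and the guard is dropped here, before any .await.

    connect_with(&url).await
}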


I'm working on an article related to this. You can read my current draft here: Shared mutable state in Rust – Alice Ryhl


Very cool. This is definitely a topic that needs clarity "out there", thank you for sharing. --Chris Kaltwasser

P.S. I'm already digging your article on actors. Now that I've finally gotten past this last compiler error, my interest in switching to axum over actix has been renewed; the main reason for switching was that I was unable to spawn tokio tasks from actix-web. While facing this difficulty with axum (which you helped me get over), I had been curious about going back to actix-web and using their actors for this.

Basically I just want to spawn off independent tasks / processes / threads from an async web handler so I can provide a REST API that has the ability to control and monitor them.
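
Something like this is roughly what I'm picturing; start_recording and Jobs are names I just made up for illustration:

use std::sync::{Arc, Mutex};
use tokio::task::JoinHandle;

// Made-up registry of running background tasks so that other REST
// handlers could later monitor or abort them.
struct Jobs {
    running: Mutex<Vec<JoinHandle<()>>>,
}

async fn start_recording(jobs: Arc<Jobs>) -> String {
    // Spawn an independent task; the handler returns right away while
    // the task keeps running on the tokio runtime.
    let handle = tokio::spawn(async {
        // long-running work: camera socket I/O, database writes, ...
    });
    jobs.running.lock().unwrap().push(handle);
    "recording started".to_string()
}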

I enjoyed reading your article. I got a little bit lost and it would be really helpful if I could download an example with a fn main() that puts it all together into something that can compile, start the actor, talk to it and shut it down.

I'm pretty sure I have a good use for this pattern and I would very much like to build the actor myself. Perhaps that's my biggest problem with understanding your tutorial: it might assume that I know more about what an actor is and how it is used. A hello-world actor would be helpful to me.

I used actix-web, but I have not used actix on its own, and perhaps that's my biggest obstacle to getting the gist. I'm thinking of playing around with this crate (tiny tokio actor) to perhaps get a better handle on it all, no pun intended!

Now that I have ported my application to axum, I still want to launch a separate task that needs to perform basic socket I/O to one or more remote 3D camera servers. I need to be able to start, pause, resume and stop a recording. When a REST request comes into axum asking to start a recording, I would like to have a "dedicated" actor start up, send a message to the camera to begin recording, and, as data comes in from a socket, write it to a database. I would like to keep the actor running all this time (reading from the camera and writing to the database) while other axum REST methods are busily reading the data that has been coming in from live recordings, or perhaps controlling other camera-recording actors.

This sounds like a perfect application for the actor pattern. The actor should be alive from the point in time where the camera starts until it is stopped, and suspended when paused and resumed when resumed.

I mean, if you want an example of how to use the actor from my blog post, here you go:

#[tokio::main]
async fn main() {
    let handle = MyActorHandle::new();
    println!("{}", handle.get_unique_id().await);
    println!("{}", handle.get_unique_id().await);
    println!("{}", handle.get_unique_id().await);
    println!("{}", handle.get_unique_id().await);
}
Output:

1
2
3
4

playground

That sums it up! Thanks. D'oh.

Sometimes the most obvious things are not to me! The story of my life.

Your "Shared mutable state" article just helped me understand a problem I was having. In previously written ( non async ) code I had implemented a trait that shared a mutable TcpStream that I was using so that I could use a secondary thread to call TcpStream::shutdown. I was having a hard time getting it to compile as async code. For two reasons. One obvious one was is there does not appear to be a tokio::net::TcpStream::shutdown function.

I put that one aside and tried to compile the rest of the code and I kept bumping into the rule you pointed out about not locking a mutex across an await boundary.

So it seems quite obvious that you cannot use any of the async calls that operate using a locked tokio TcpStream.

The moral of the story, which you could add to your article, is that you should never attempt to implement a type that looks like this:

pub type SharedConnection = Arc<RwLock<tokio::net::TcpStream>>;

(Now I understand why there is no TcpStream shutdown method in tokio. Because you would never be able to call it from another thread or task!)

The reason is that you'll never be able to use any of the async functions while the lock is held!

Actually, Tokio does support shutdown. It is here.

Otherwise, you are correct. I usually recommend the actor pattern for shared connections.

Thanks for the reply.

(FWIW, I forgot to mention that I had noticed that, but it's not really the same thing since it only covers the write side. My case is mainly about the read side, so I couldn't make use of it.)

So I was expecting that forcing the task to die was my only option. Do you happen to know if doing that will cause a shutdown of the TcpStream, or should a hook be written for that? (I need to read up on how to write hooks for when tasks need to be killed and how to do a graceful shutdown. I see your article about actors gets into some of that.)

Dropping the TcpStream value will cause the stream to forcefully close in both directions, but it's important to keep in mind that the graceful way to close a TCP stream is to have each end of the connection close it in their write direction. The shutdown(Read) method in std has somewhat weird behavior, and is not consistent across platforms.
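
In code, the graceful version looks something like this (an untested sketch using tokio's split halves):

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

async fn close_gracefully(stream: TcpStream) -> std::io::Result<()> {
    let (mut read_half, mut write_half) = stream.into_split();

    // Close our write direction; the peer will see EOF on its reads.
    write_half.shutdown().await?;

    // Keep reading until the peer closes its write direction too.
    let mut buf = [0u8; 1024];
    while read_half.read(&mut buf).await? != 0 {}

    Ok(())
}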

That's a great thing to know and to keep in mind. (Since I often don't have control over both sides of a network app and their implementation, and I could easily see myself using that statement to shift the blame off my plate, is that the sort of thing documented well in some RFC somewhere, or has it just bubbled up over the years?)

Another thing worth noting, is that I'm not working with bi-directional sockets with this application. The server in this case has two sockets, one for read and one for write.

I don't know where such a thing would be documented.

Network I/O protocols present rough terrain. I was just trying to read up on the meaning of bidirectional versus unidirectional TCP/IP sockets, and came across some chatter stating that by default all TCP/IP sockets are bidirectional: to implement a read-only socket you perform a shutdown of the write side after opening, and conversely a write-only socket requires a shutdown of the read side.
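
If I've read that right, the std-level version of a "read-only" socket is just this (a toy illustration, not code from my project):

use std::net::{Shutdown, TcpStream};

// Shut down our write direction on an already-connected socket;
// reads keep working, further writes from our side will fail.
fn make_read_only(stream: &TcpStream) -> std::io::Result<()> {
    stream.shutdown(Shutdown::Write)
}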

I was trying to figure out how to create accessors and mutators for single fields of a shared struct, so, inspired by your great article, I experimented with the "with_*" pattern you so kindly included and came up with this example code. (The goal, demonstrated in fn main(), is to access and mutate the two fields name and age inside the ConfigFile stored in ConfigInner::cf.)

use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct Config {
    inner: Arc<Mutex<ConfigInner>>,
}

#[derive(Clone)]
struct ConfigFile {
    name: String,
    age: i32,
}

struct ConfigInner {
    cf: ConfigFile,
}

impl Config {
    fn new(name: &str, age: i32) -> Self {
        Self {
            inner: Arc::new(Mutex::new(ConfigInner {
                cf: ConfigFile {
                    name: String::from(name), age
                }
            }))
        }
    }

    fn get_cf(&self) -> ConfigFile {
        let lock = self.inner.lock().unwrap();
        lock.cf.clone()
    }

    pub fn with_cf<F, T>(&self, func: F) -> T
    where
       F: FnOnce(&mut ConfigFile) -> T
    {
        let mut lock = self.inner.lock().unwrap();
        func(&mut lock.cf)
    }
}

fn main() {
    let cfg = Config::new("Theo", 7);

    // use get_cf() to access cf.name  (less efficient due to cf.clone())
    println!("name = {}", cfg.get_cf().name);

    // use get_cf() to access cf.age (less efficient due to cf.clone())
    println!("age = {}", cfg.get_cf().age);

    // use with_cf() to mutate cf.age
    cfg.with_cf( | cf | {
                cf.age += 2;
            });

    // use get_cf() to access name and age (again)
    println!("name = {}", cfg.get_cf().name);
    println!("age = {}", cfg.get_cf().age);

    // use with_cf() to access name and age 
    let name = cfg.with_cf( |cf| { cf.name.to_owned() });
    let age = cfg.with_cf( |cf| { cf.age });

    println!("name = {}", name);
    println!("age = {}", age);
}

Here, fn with_cf is used both to mutate and to access cf.name and cf.age. Note how fn get_cf, used as an accessor, is less efficient because it requires a clone of the entire cf struct even when only a single field is needed, as in cfg.get_cf().name.

Is this an advisable usage of the with_* pattern, or is there a better way to do this?

Thank you!