Why is parking_lot::RwLock not `Send`?

As soon as I added an await in a request handler, I got a compilation error:

18  |         .route("/read/", get(handle_read))
    |                          --- ^^^^^^^^^^^ the trait `Handler<_, _>` is not implemented for fn item `fn(axum::extract::State<MyState>) -> impl Future<Output = Result<(HeaderMap, std::string::String), (StatusCode, std::string::String)>> {handle_read}`
...
26 | #[debug_handler]
   | ^^^^^^^^^^^^^^^^ future returned by `handle_read` is not `Send`

As it's mentioned here:

RwLock is Send only if the underlying value is Send (and same for Sync). Arc is Sync if the underlying value is Send.

I wonder why my struct isn't Send. I tried to use parking_lot to work around std::sync::RwLock not being Send!

use axum::{debug_handler, extract::State, http::{HeaderMap, StatusCode}, routing::get, Router};
use parking_lot::RwLock;
use std::{sync::Arc, time::Duration};
use tokio::time::sleep;

struct Item(u32);

#[derive(Clone)]
pub struct MyState { data: Arc<RwLock<Item>> }

#[tokio::main]
async fn main() {
    let app_state = MyState { data: Arc::new(RwLock::new(Item(123))) };

    let app = Router::new()
        .route("/read/", get(handle_read))
        .with_state(app_state);

    let addr = "0.0.0.0:8833";
    let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

#[debug_handler]
async fn handle_read(State(ast): State<MyState>) -> Result<(HeaderMap, String), (StatusCode, String)> {
    //let mut t = Timer::new();
    let data = ast.data.read();
    sleep(Duration::from_secs(3)).await;
    drop(data);
    Ok((HeaderMap::new(), serde_json::to_string("read request done").unwrap()))
}

Your type isn't the problem here. The problem is that the RwLockReadGuard you create with ast.data.read() doesn't implement Send, and you keep it alive across the sleep(...).await point.
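
You can see it with a quick bound check (a sketch; assert_send and checks are made-up names):

fn assert_send<T: Send>() {}

fn checks() {
    // Compiles: MyState itself is Send, because Item is Send,
    // so RwLock<Item> is Send, so Arc<RwLock<Item>> is Send.
    assert_send::<MyState>();

    // Does not compile with parking_lot's default features:
    // the read guard is the non-Send part.
    // assert_send::<parking_lot::RwLockReadGuard<'static, Item>>();
}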

If you need to keep guards across await points, you should use a lock that supports that, like tokio::sync::RwLock, whose guards implement Send.
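
For the snippet above, that change looks roughly like this (a sketch of just the state type and the handler; main stays the same apart from the new lock type):

use axum::{debug_handler, extract::State, http::{HeaderMap, StatusCode}};
use std::{sync::Arc, time::Duration};
use tokio::{sync::RwLock, time::sleep};

struct Item(u32);

#[derive(Clone)]
pub struct MyState { data: Arc<RwLock<Item>> }

#[debug_handler]
async fn handle_read(State(ast): State<MyState>) -> Result<(HeaderMap, String), (StatusCode, String)> {
    // With tokio's RwLock, acquiring the lock is itself an await point.
    let data = ast.data.read().await;
    // This guard is Send, so holding it across the await below compiles.
    sleep(Duration::from_secs(3)).await;
    drop(data);
    Ok((HeaderMap::new(), serde_json::to_string("read request done").unwrap()))
}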


The parking_lot guards can be made Send by enabling the send_guard feature. The arc_lock feature might also be of interest, as it provides 'static guards for locks that are stored inside Arcs (via read_arc() and friends).
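
Roughly what read_arc() looks like in use (a sketch; it assumes parking_lot is built with the arc_lock feature, and arc_guard_demo is a made-up name):

use parking_lot::RwLock;
use std::sync::Arc;

struct Item(u32);

fn arc_guard_demo(data: Arc<RwLock<Item>>) {
    // Requires the "arc_lock" Cargo feature: read_arc() clones the Arc
    // internally, so the guard doesn't borrow from `data` and is 'static.
    let guard = data.read_arc();
    println!("value: {}", guard.0);
} // guard dropped here, releasing the read lock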


Note that even if you manage to make the guard Send, you likely don't want to hold it across an .await point since that can deadlock the async runtime.
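
For the handler in question, the usual way around that is to not hold the guard over the await at all: copy what you need out of the lock in a small scope, then await. A sketch (read_then_wait is a made-up name, and it assumes the value is cheap to copy):

use parking_lot::RwLock;
use std::{sync::Arc, time::Duration};
use tokio::time::sleep;

struct Item(u32);

async fn read_then_wait(data: Arc<RwLock<Item>>) -> String {
    // Take the lock in a small scope so the guard is dropped
    // before the first await point; the future stays Send.
    let value = {
        let guard = data.read();
        guard.0
    };
    sleep(Duration::from_secs(3)).await;
    format!("read {value} done")
}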


Oh, indeed, it's not the RwLock there, it's the guard.

It looks like in Rust, just like in Python, async is another language -- basically, there's an async double of every stdlib item.

My intention was just an innocent test of slow responses, and to check for deadlocks with read/write locks. 🙂

Now I'm thinking: how should I architect my server?

There's a big data structure, and there are 1) read handlers, which serve requests (with some calculations), and 2) edit handlers, which modify this big structure. Read handlers need only read access; some of them are fast, and one or two are slow to compute. Write requests will be rather slow.

As far as I can see with async, if there's no yield_now() or sleep(..).await, the server simply works in a blocking way.

Otherwise, the test is working as intended.

I had to write a 150-LOC, 5 KB mockup binary to test it.

41:50.102: server thread: starting
41:50.103: test
41:50.104: running the server at 0.0.0.0:8833
41:51.102: read thread 1: making request
41:51.168: read view 1: got request, getting read lock
41:51.168: read view 1: got lock, reading data
41:52.102: update thread: making request
41:52.138: update view: got request. locking structs
41:53.102: read thread 2: making request
41:53.141: read view 2: got request, getting read lock
41:54.169: read view 1: ref table RefinedTable { raw: RwLock { data: <locked> }, refined: [RefinedItem { raw: RawItem(123), local_data: 123 }] }
41:54.169: read view 1: reading done, unlocking
41:54.169: read view 1: unlock done, writing response
41:54.169: read view 1: replying
41:54.169: update view: structs locked, updating...
41:54.171: read thread 1: got response: "\"read request done\""
41:57.170: update view: update done, unlocking
41:57.170: update view: unlocking done, creating response
41:57.170: update view: replying
41:57.170: read view 2: got lock, reading data
41:57.173: update thread: got response: "\"update done\""
42:00.171: read view 2: ref table RefinedTable { raw: RwLock { data: RawTable { data: [RawItem(123), RawItem(3)] } }, refined: [RefinedItem { raw: RawItem(123), local_data: 123 }, RefinedItem { raw: RawItem(3), local_data: 123 }] }
42:00.171: read view 2: reading done, unlocking
42:00.171: read view 2: unlock done, writing response
42:00.172: read view 2: replying
42:00.177: read thread 2: got response: "\"read request done\""

The key with async is using spawn, or something like select! to race multiple operations. There is some complexity with cancellation (futures started in the select!, but not completed, will be dropped).
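
For example, a minimal sketch of racing a slow operation against a timeout with select! (slow_read and the two durations are made up for illustration):

use std::time::Duration;
use tokio::time::sleep;

async fn slow_read() -> String {
    sleep(Duration::from_secs(3)).await;
    "read request done".to_string()
}

#[tokio::main]
async fn main() {
    tokio::select! {
        result = slow_read() => {
            println!("finished: {result}");
        }
        _ = sleep(Duration::from_secs(2)) => {
            // slow_read() loses the race here: its future is dropped,
            // i.e. cancelled without running to completion.
            println!("timed out");
        }
    }
}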
