Cleaning up in another thread

I've built a small web service in Rust, but there are still some rough edges. In particular, some endpoints can fail (not a panic!, just a normal failure from talking to other services, which is signaled in my Rocket app using a custom Responder). When this happens, the database is left in an inconsistent state. I'd like to roll back the transactions in all outstanding database connections, but I'm stuck on how to make that ergonomic. I can't have the connections do cleanup on Drop, because I need to know what the response is before choosing to commit or roll back. I also don't like the idea of the Responder type having explicit access to all connections; that seems to require the programmer to carefully keep track of their connections themselves.

The fact that I know which connections are outstanding suggests I have a "registry" of outstanding connections. In Rocket, the most obvious way to keep that kind of registry is local_cache, which requires Send + Sync on the cached value (i.e. the registry), which I guess suggests that the value may be sent to another thread. I understand that there are obvious safety issues when sharing values between threads (that's what the Send and Sync traits encode), but it seems to me that there's a clear "ownership" story here -- the endpoint owns the connection until it drops it, and then ownership passes to the "cleanup" thread. So I'm trying to put that rationale into practice. I have successfully defined an OutstandingConnections type (a wrapper around a Vec) that is stored in the request state using local_cache, but I'm struggling to produce connections that my endpoint can use and that are also accumulated in that Vec later.
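Concretely, the registry part looks something like this (a sketch only; Connection stands in for my real connection type, and I've put the Vec behind a Mutex so the wrapper can satisfy local_cache's Send + Sync bound):

use std::sync::Mutex;

// Stand-in for the real database connection type (Send but not Sync).
struct Connection { /* ... */ }

// Per-request registry of connections that are still open. The Mutex
// makes the wrapper Sync even though Connection itself is not.
#[derive(Default)]
struct OutstandingConnections(Mutex<Vec<Connection>>);

// In a request guard or fairing, created on first access:
// let registry = request.local_cache(OutstandingConnections::default);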

Things I've tried so far:

  • Store each connection in an Rc or Arc and, every time I'm about to hand out a database connection, clone the pointer and put the clone in the OutstandingConnections. This doesn't work because the connections aren't Sync: Rc is never Send, and Arc<T> is only Send + Sync when T is both.
  • Store each connection in a Mutex inside an Arc and, every time I'm about to hand out a database connection, clone the Arc and put it in the OutstandingConnections. Unfortunately this means the endpoint has to lock the Mutex itself on every use, so I didn't really even try this. Besides, it feels like I should be able to ensure that the Mutex is held exclusively by the endpoint until it drops the handle.
  • As above, but lock the Mutex and return the MutexGuard. I ran into a bunch of lifetime issues with this. Naïvely I wanted to wrap the Mutex in an Arc, add a clone to the OutstandingConnections, lock it, and return the output of lock(). As far as I can tell this fails because the MutexGuard borrows from the Arc it was locked through, which is a local variable that goes out of scope when the function returns.
  • As above, but try to keep the Mutex, or an Arc around the Mutex, alongside the MutexGuard. More lifetime troubles, of a different kind: the struct would effectively borrow from itself. I eventually found posts like "Keep ref to MutexGuard in struct" and "Integrate Mutex and MutexGuard into a struct", which seem to imply that this is a dead end. One post suggests that I could use RawMutex from parking_lot, but that didn't seem ergonomic to me either, and I was hesitant to introduce another dependency just yet.
  • Some of these posts suggest that a better approach would be to accept a closure that uses the locked value and does the cleanup afterwards, but that isn't really feasible with Rocket's endpoint invocation machinery -- you have to produce the value you want the endpoint to receive as an argument; you can't hand Rocket a function to call with it.
  • Another idea I had was to give my wrapper around the connection a custom Drop that, when it runs, transfers ownership to the OutstandingConnections. But moving a field out of self during drop is rejected by the borrow checker (drop only gets &mut self), and I don't have a "dummy" connection I could swap in. I could wrap the connection in an Option, but again, that means all uses have to handle the case where the connection isn't there (see the sketch after this list).
  • I found "Moving out of a type implementing Drop", which suggests ManuallyDrop, but that means unsafe, and I wanted to see if there were other ideas.
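Here's roughly what I mean by that Option variant (a sketch only; Connection again stands in for the real type, and the hand-off to the registry is elided):

struct Connection { /* ... */ }

struct ConnWrapper {
    // None only once drop has started.
    conn: Option<Connection>,
}

impl Drop for ConnWrapper {
    fn drop(&mut self) {
        // Option::take moves the connection out through &mut self,
        // leaving None behind, so no "dummy" connection is needed.
        if let Some(conn) = self.conn.take() {
            let _ = conn; // hand the connection to OutstandingConnections here
        }
    }
}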

I'd also love to know whether this is an unusual pattern, or if there are any architectural lessons I should be taking away from this. I'm still shaky on ownership and lifetimes generally. Thanks!

This feels like the most promising approach to me. Since the connection will exist until drop starts executing, it seems quite reasonable to unwrap the Option everywhere else you need it -- the only way it can fail is if code attempts to use the connection after it's been dropped, which indicates a more serious problem. In practice, I'd probably implement Deref (and DerefMut) for the wrapper along these lines:

use std::ops::Deref;

impl Deref for ConnWrapper {
    type Target = Connection;
    fn deref(&self) -> &Connection {
        // The Option is only None once drop has begun, so this can only
        // fail if code uses the connection after it was dropped.
        self.conn.as_ref().expect("Connection accessed after drop!")
    }
}
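
For the mutable half, I'd expect the mirror image:

use std::ops::DerefMut;

impl DerefMut for ConnWrapper {
    fn deref_mut(&mut self) -> &mut Connection {
        self.conn.as_mut().expect("Connection accessed after drop!")
    }
}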

This worked great, thanks! I hadn't thought about doing the .expect() in Deref -- it's funny, it almost feels like circumventing the type system. Thanks also for mentioning as_ref, which I might not have found on my own.

It is, in a way -- you have an invariant that the type system is unaware of, and you're using that invariant to reason about the correctness of the code. It's similar to the reasoning you need when writing unsafe code, but the penalty of a mistake here is a clean panic instead of unpredictable behavior. As those unenforced invariants get more complicated, the program will get harder to modify correctly.

I'd be tempted to extract this drop behavior as a generic wrapper type so that the rest of the connection-handling code doesn't need to worry about it, but there's no single, obvious design. One option would be to hold both an Option<T> and a channel Sender<T>, and transmit the contained value over the channel on drop. The receive side of the channel can then do whatever is necessary to clean up the value.
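For instance, something along these lines (just a sketch -- the name SendOnDrop and the choice of a std::sync::mpsc channel are arbitrary):

use std::sync::mpsc::Sender;

// Hypothetical generic wrapper: the contained value is sent over a
// channel when the wrapper is dropped. Deref and DerefMut impls would
// follow the same expect-on-Option pattern as ConnWrapper above.
struct SendOnDrop<T> {
    value: Option<T>, // None only once drop has begun
    tx: Sender<T>,
}

impl<T> SendOnDrop<T> {
    fn new(value: T, tx: Sender<T>) -> Self {
        SendOnDrop { value: Some(value), tx }
    }
}

impl<T> Drop for SendOnDrop<T> {
    fn drop(&mut self) {
        if let Some(value) = self.value.take() {
            // If the receiver is gone there's nothing left to clean up,
            // so the send error can be ignored.
            let _ = self.tx.send(value);
        }
    }
}

The receiving end could then live on a dedicated cleanup thread that iterates over the channel and rolls back each connection it receives.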
