Need a second opinion on "async Rust" <-> "C library with callbacks" interop

Hello, wise people of Rust!

What's the problem

I have a convoluted use case of async Rust co-operating with a C library with callbacks.

The regular flow of the library, as used from C:

  1. C code calls library (say setup_device()).
  2. The library does its magic, but to perform some actions it calls callbacks (say, a bunch of read_register()/write_register() events).
  3. Callbacks perform some network send()/recv() (yep, I know, reading a register over the network. Nobody claims the people at Microchip are sane).
  4. The far side performs the action and responds (or the packet is lost).
  5. Upon response or timeout, the callback returns the outcome to the library.
  6. The library performs many more such round trips and finally passes the final result to the initial caller.

What I have:

  • C library is operational (part of another project)
  • It properly links to Rust code (thanks to the cc crate).
  • I can call the necessary functions (a hefty unsafe extern "C" { ... } block).
  • Callbacks are handled by stubs in Rust (for now they just log their arguments and return failure), a few #[unsafe(no_mangle)] pub extern "C" fn XXX() { ... } (a sketch of one such stub follows this list).

  • I have a reactor loop and all the necessary network code.
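
For reference, here is a minimal sketch of one such stub. The callback name and signature are made up for illustration; the real ones come from the library's header:

    // Hypothetical stub: logs its arguments and reports failure to the library.
    // The name and signature are placeholders, not the real library callback.
    #[unsafe(no_mangle)]
    pub extern "C" fn read_register(addr: u32, value: *mut u32) -> i32 {
        eprintln!("read_register(addr = {addr:#x}, value = {value:?})");
        -1 // non-zero return code = failure, until the real plumbing exists
    }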

My idea

Now it is time to stitch it together.

  • Have a main reactor with loop { select! { ... } } (a rough sketch of the whole plumbing follows this list).
  • Have a separate blocking task, async fn lib_requestor_loop() { ... },
    • connected to the main reactor via an async mpsc queue lib_queue of enum Cmd { ... },

    • each Cmd carries a one-shot channel lib_reply for the reply from the library.

    • A dispatched Cmd turns into a blocking call into the library:

      • Library callbacks issue enum NetReq { ... } values to the main reactor via an async mpsc queue net_requests (try_send is sync, so that's no problem). Each NetReq has a field with a sync channel net_reply for enum NetResult { ... } values (I don't see a sync one-shot implementation anywhere; Tokio's oneshot try_recv() is sync but doesn't block). The callback blocks on recv() on net_reply.

        • The main reactor processes the request and reports success/failure back via net_reply.
      • The recv() on net_reply unblocks, and the callback passes the result back to the library.

      • The blocking call into the library returns.

    • The result is sent back to the main reactor via the lib_reply channel.

  • Profit!
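
To make the intended plumbing concrete, here is a rough, compilable sketch of the flow above, assuming Tokio. Cmd, NetReq, NetResult, lib_queue and net_requests are the names from the plan; the ffi module, the SetupDevice command and request_over_network are placeholders I made up, and how the extern "C" callbacks actually get hold of the net_requests sender (e.g. a OnceLock static) is left out:

    use tokio::sync::{mpsc, oneshot};

    // Placeholder for the real unsafe extern "C" bindings.
    mod ffi {
        pub unsafe fn setup_device() -> i32 { 0 }
    }

    // Commands from the main reactor to the library loop; each carries a
    // one-shot channel for the library's final result.
    enum Cmd {
        SetupDevice { lib_reply: oneshot::Sender<i32> },
    }

    // Outcome of one network round trip, reported back to a blocked callback.
    enum NetResult {
        Ok(Vec<u8>),
        Timeout,
    }

    // A network request raised from inside a callback. The reactor answers over
    // the std sync channel so the (non-async) callback can simply block on recv().
    struct NetReq {
        payload: Vec<u8>,
        net_reply: std::sync::mpsc::SyncSender<NetResult>,
    }

    // Blocking loop that owns the C library and serializes all calls into it.
    fn lib_requestor_loop(mut lib_queue: mpsc::Receiver<Cmd>) {
        while let Some(cmd) = lib_queue.blocking_recv() {
            match cmd {
                Cmd::SetupDevice { lib_reply } => {
                    // The C call re-enters Rust via the extern "C" callbacks,
                    // which would use something like request_over_network below.
                    let rc = unsafe { ffi::setup_device() };
                    let _ = lib_reply.send(rc);
                }
            }
        }
    }

    // What a callback body would do: queue a NetReq for the reactor, then
    // block this thread until the reactor answers.
    fn request_over_network(net_requests: &mpsc::Sender<NetReq>, payload: Vec<u8>) -> NetResult {
        let (reply_tx, reply_rx) = std::sync::mpsc::sync_channel(0);
        net_requests
            .try_send(NetReq { payload, net_reply: reply_tx })
            .expect("reactor queue closed or full");
        reply_rx.recv().unwrap_or(NetResult::Timeout)
    }

    #[tokio::main]
    async fn main() {
        let (lib_tx, lib_rx) = mpsc::channel::<Cmd>(1);
        // In real code this sender would be stored where the extern "C"
        // callbacks can reach it (e.g. in a OnceLock static).
        let (_net_requests, mut net_rx) = mpsc::channel::<NetReq>(16);

        // The library lives on its own blocking loop (task or dedicated thread).
        std::thread::spawn(move || lib_requestor_loop(lib_rx));

        // Kick off one library operation.
        let (reply_tx, reply_rx) = oneshot::channel();
        lib_tx.send(Cmd::SetupDevice { lib_reply: reply_tx }).await.unwrap();
        let mut lib_done = reply_rx;

        loop {
            tokio::select! {
                Some(req) = net_rx.recv() => {
                    // Real network I/O would happen here; echoing the payload
                    // back stands in for a successful exchange.
                    let _ = req.net_reply.send(NetResult::Ok(req.payload));
                }
                rc = &mut lib_done => {
                    println!("setup_device returned {:?}", rc);
                    break;
                }
            }
        }
    }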

Do you have any improvements or suggestions for my flow? My main concern is the communication between the callbacks and the main reactor, especially the slippery combination of an async reactor and synchronous callbacks.

Thanks in advance.

This is not a problem per se. The issues begin to arise when the synchronous code starts to take a disproportionately large amount of time compared to the rest of your asynchronous code. @alice posted a fairly comprehensive article about this some time ago. Given the IO-bound (?) nature of your callbacks into the C library, it looks like the blocking thread pool should suffice.

The Tokio runtime includes a separate thread pool specifically for running blocking functions, and you can spawn tasks on it using spawn_blocking. This thread pool has an upper limit of around 500 threads, so you can spawn quite a lot of blocking operations on this thread pool.
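
For example, a minimal sketch of wrapping one blocking library call this way (ffi::setup_device is a stand-in for the real binding):

    mod ffi {
        // Placeholder for the real extern "C" binding.
        pub unsafe fn setup_device() -> i32 { 0 }
    }

    // Runs the blocking library call on Tokio's blocking thread pool so that
    // the async worker threads are never stalled by it.
    async fn setup_device_async() -> i32 {
        tokio::task::spawn_blocking(|| unsafe { ffi::setup_device() })
            .await
            .expect("blocking task panicked")
    }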

If you are planning on having your program block for a fairly long time, however, spawn a new dedicated thread instead. Tokio itself recommends doing so when waiting on stdin, for instance.
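
For illustration, a sketch of that pattern with stdin as the blocking source (the function name is made up): the dedicated thread blocks on read_line() and forwards lines over a Tokio channel, and async code simply awaits rx.recv().

    use tokio::sync::mpsc;

    // Spawns an OS thread that blocks on stdin and forwards complete lines
    // to async code over a Tokio mpsc channel.
    fn spawn_stdin_reader() -> mpsc::Receiver<String> {
        let (tx, rx) = mpsc::channel(16);
        std::thread::spawn(move || {
            let stdin = std::io::stdin();
            let mut line = String::new();
            loop {
                line.clear();
                if stdin.read_line(&mut line).unwrap_or(0) == 0 {
                    break; // EOF or read error: stop the thread, dropping tx
                }
                // blocking_send is fine here because this is not an async context.
                if tx.blocking_send(line.clone()).is_err() {
                    break; // receiver gone, nobody is listening any more
                }
            }
        });
        rx
    }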

std::sync::mpsc::sync_channel(0) might do the trick for you there.

Note that a buffer size of 0 is valid, in which case this becomes a “rendezvous channel”, where each send will not return until a recv is paired with it.
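
A tiny self-contained demonstration of that rendezvous behaviour:

    use std::sync::mpsc::sync_channel;
    use std::thread;

    fn main() {
        // Bound of 0: send() blocks until the receiving side calls recv().
        let (tx, rx) = sync_channel::<&str>(0);
        let sender = thread::spawn(move || {
            tx.send("net result").unwrap(); // does not return until recv() below
            println!("sender: handed over");
        });
        println!("receiver got: {}", rx.recv().unwrap());
        sender.join().unwrap();
    }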


Thank you @00100011 for your insight.

[...] however, spawn a new dedicated thread [...]

Yes. Upon writing the actual code I came to the same conclusion. My lib_requestor_loop() is now a normal thread, not a task.

[...] I don't see a sync one-shot implementation [...]
std::sync::mpsc::sync_channel(0) might do the trick for you there.

As there's just a single instance of the library and all calls are serialized (no possibility of interleaving), sync_channel(0) is a viable solution. It can even be persistent, as it is effectively used in an SPSC context.
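
A sketch of what that persistent rendezvous channel could look like; the OnceLock-plus-Mutex wrapper and the names NET_REPLY, NetResult, init_reply_channel and wait_for_reply are my own assumptions for illustration, not the actual code:

    use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
    use std::sync::{Mutex, OnceLock};

    enum NetResult {
        Ok(Vec<u8>),
        Timeout,
    }

    // One rendezvous channel for all callback replies. This is only sound
    // because calls into the library are fully serialized (strict SPSC).
    static NET_REPLY: OnceLock<Mutex<Receiver<NetResult>>> = OnceLock::new();

    // Called once at startup; the reactor keeps the SyncSender and sends
    // exactly one NetResult per network request.
    fn init_reply_channel() -> SyncSender<NetResult> {
        let (tx, rx) = sync_channel(0);
        if NET_REPLY.set(Mutex::new(rx)).is_err() {
            panic!("reply channel initialized twice");
        }
        tx
    }

    // Called from inside the extern "C" callbacks after queueing a request:
    // blocks the library thread until the reactor hands over the result.
    fn wait_for_reply() -> NetResult {
        let rx = NET_REPLY.get().expect("reply channel not initialized");
        rx.lock().unwrap().recv().unwrap_or(NetResult::Timeout)
    }

    fn main() {
        let tx = init_reply_channel();
        // Simulate the reactor answering one request from another thread.
        std::thread::spawn(move || {
            tx.send(NetResult::Ok(b"reply".to_vec())).unwrap();
        });
        match wait_for_reply() {
            NetResult::Ok(bytes) => println!("got {} bytes", bytes.len()),
            NetResult::Timeout => println!("timed out"),
        }
    }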