Best practices for spawning tasks while staying executor-agnostic

I'm currently working on converting a sync project to async. One pattern I keep coming across is the library spawning threads so that an mpsc receiver can run concurrently with the rest of the work being done. I'm trying to figure out the best way to accomplish this while staying executor-agnostic. Here are the options I could think of; I'd love some input:

  • I could expose new futures that do the same thing, which the user could spawn on their own executor. But that would add a lot of new API surface and would be error-prone, since nothing would prevent the user from forgetting to spawn those tasks.
  • I could accept an object that implements futures::task::Spawn and use it internally to spawn the tasks instead of spawning a new thread.
  • I could use conditional compilation targeting the three major executors, and the user would just have to build with the correct feature enabled.

Am I missing any options? Is there a "best" way to do this? Thanks for all the help

Is the spawned task CPU-bound? If it isn't, you don't need to spawn it. You can instead .await it together with the rest of the work being done.

If it is CPU-bound, then personally I'd be picky about which thread pool it runs on (for example, I have servers with 96 cores, and I don't want every little library trying to run itself on all 96 of them). So I'd prefer to spawn it myself, or hand you a specific runtime/handle/callback for spawning it.

The only other option is FuturesUnordered, which lets you avoid spawning anything at all. Besides those four, I'm not aware of any others.

In the original non-async code, it listened on a socket in an infinite loop and communicated with other threads over an mpsc channel. I assumed I'd have to put this kind of thing in a separate task; is there a better way to model it with futures?

Yes. A listening socket can be a Stream of connections, and you can model processing them as a map over the stream's elements. The consumer of the stream decides when and where each connection is handled, and can use methods like .buffer_unordered() to process several in parallel.

There's even a neat library that can make it all look like a loop.

Oh, but TCP streams themselves are runtime-specific. If you spawn a thread + mpsc, you'll be reinventing an async runtime with sync code :frowning: