This is a design question I am myself always struggling with. First, I don't think that API and implementation need to match -- you can have a blocking, thread-pool based implementation that exposes an async API. Similarly, you can have an async io_uring/IOCP based implementation that exposes a blocking API.
From what I know about file systems, the blocking implementation would make more sense.
The question of API is interesting. If your main (UI) thread is already async, then it makes sense to provide an async API.
If it is not async, you need some kind of blocking, evented, selectable API. That is, you should be able to do the old PHP trick:
// Schedule two reads in parallel, do not block
let foo = vfs.read("/foo.rs");
let bar = vfs.read("/bar.rs");
// `.contents()` is a blocking call that returns `&String`
let foobar = foo.contents().clone() + bar.contents();
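To make the shape of that API concrete, here is a minimal sketch using only std threads and channels. Everything in it is made up for illustration (`Vfs`, `ReadHandle`, the fake file contents): the point is just that `read` schedules work immediately while `contents()` blocks only at the point of use.

```rust
use std::sync::{mpsc, OnceLock};
use std::thread;

// Hypothetical sketch: a read handle backed by a worker thread.
struct ReadHandle {
    rx: mpsc::Receiver<String>,
    cell: OnceLock<String>,
}

impl ReadHandle {
    // Blocking call: waits for the background read on first use,
    // then returns the cached result.
    fn contents(&self) -> &String {
        self.cell.get_or_init(|| self.rx.recv().unwrap())
    }
}

struct Vfs;

impl Vfs {
    // Schedules the read immediately and returns without blocking.
    fn read(&self, path: &str) -> ReadHandle {
        let (tx, rx) = mpsc::channel();
        let path = path.to_string();
        thread::spawn(move || {
            // A real implementation would do file I/O here.
            let _ = tx.send(format!("contents of {path}"));
        });
        ReadHandle { rx, cell: OnceLock::new() }
    }
}

fn main() {
    let vfs = Vfs;
    // Both reads are in flight before either blocking call is made.
    let foo = vfs.read("/foo.rs");
    let bar = vfs.read("/bar.rs");
    let foobar = foo.contents().clone() + bar.contents();
    println!("{foobar}");
}
```

Note that the caller never mentions threads or channels; the concurrency is an implementation detail behind a fully blocking interface.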
And you'd also want to select between two blocking calls (and cancel them as well)
let foo = vfs.read("/foo.rs");
let bar = vfs.read("/bar.rs");
// Some made-up syntax
select! {
foo.contents() => ...,
bar.contents() => ...,
}
The above APIs are blocking, but still allow for the concurrency tricks associated with async.
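Without committing to any particular `select!` design, the same first-completed behavior can be emulated with plain threads and a shared channel. This is a sketch under assumptions: `Done` and the inline stand-ins for `vfs.read` are invented for the example.

```rust
use std::sync::mpsc;
use std::thread;

// Tag identifying which "read" produced a result.
enum Done {
    Foo(String),
    Bar(String),
}

// Runs two reads concurrently and returns whichever finishes first.
fn first_completed() -> Done {
    let (tx, rx) = mpsc::channel();
    let tx2 = tx.clone();
    thread::spawn(move || {
        // Stand-in for vfs.read("/foo.rs").contents()
        let _ = tx.send(Done::Foo("fn main() {}".to_string()));
    });
    thread::spawn(move || {
        // Stand-in for vfs.read("/bar.rs").contents()
        let _ = tx2.send(Done::Bar("mod bar;".to_string()));
    });
    // The first message wins; dropping `rx` afterwards effectively
    // cancels interest in the other result.
    rx.recv().unwrap()
}

fn main() {
    match first_completed() {
        Done::Foo(text) => println!("foo finished first: {text}"),
        Done::Bar(text) => println!("bar finished first: {text}"),
    }
}
```

The tagged-enum-over-one-channel encoding works, but it is exactly the kind of boilerplate a first-class `select` would hide.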
The catch is, I don't know what the vocabulary type of choice is for expressing this concurrency. One choice is the `Future` trait, but then sync callers would have to call `block_on` everywhere, which gets verbose. Another choice is channels, but the existing channel APIs feel too general: oneshot computations are awkward, there's no map/filter/etc, and select is macro driven. For rust-analyzer, I went with channels, which worked ok, but only because the sink is an event loop anyway. I am not sure if that is an accident, or if all concurrency is just better expressed as a game loop.
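To show why channels feel too general here, this is roughly what a oneshot computation looks like encoded as `std::sync::mpsc` -- the `Receiver` plays the role of a future and `recv()` of `block_on`. The names `spawn_read` and `map_len` are made up for the sketch.

```rust
use std::sync::mpsc;
use std::thread;

// A oneshot "promise": exactly one value will ever be sent.
fn spawn_read(path: &str) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    let path = path.to_string();
    thread::spawn(move || {
        // A real implementation would do file I/O here.
        let _ = tx.send(format!("contents of {path}"));
    });
    rx
}

// The awkward part: there is no built-in `map`, so transforming a
// pending result means spawning yet another thread and channel.
fn map_len(rx: mpsc::Receiver<String>) -> mpsc::Receiver<usize> {
    let (tx, out) = mpsc::channel();
    thread::spawn(move || {
        if let Ok(text) = rx.recv() {
            let _ = tx.send(text.len());
        }
    });
    out
}

fn main() {
    let len = map_len(spawn_read("/foo.rs")).recv().unwrap();
    println!("length: {len}");
}
```

A combinator like `map_len` costs a whole extra thread per transformation, which is the kind of overhead a purpose-built oneshot or future type avoids.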