It’s quite the opposite, in my view. If you didn’t have any wakers, you’d be doing simple polling, not truly efficient asynchronous computation. The latter generally involves some sort of callback or waking mechanism to avoid the overhead of “lots and lots of polling all the time” (or the alternative issue of “it takes a while until polling happens and progress is noticed”).
The power of async is that logical threads (aka “tasks”) become simple data structures, and you can support a lot more tasks far more cheaply than with system threads. You can have a single thing (e.g. a fixed thread pool, or even facilities the OS handles for you) that handles all usages of your API in one place, in whatever manner best fits the specific problem being solved or functionality being offered. Nonetheless, something still has to run in parallel somewhere, and there needs to be some representation of the tasks.
Wakers are the interface to achieve this. You can give each Future some index/identity, and then make a store that keeps track of the latest Waker for each Future. That store is then effectively the data-structure representation of the logical threads.
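To make that concrete, here’s a minimal sketch of what such a store could look like (all names here are made up; a real version would also carry the result value and clean up finished entries):

```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

/// Hypothetical store: for each request id, the latest Waker plus a "done" flag.
/// This is the data-structure representation of the logical threads.
#[derive(Default)]
struct WakerStore {
    entries: Mutex<HashMap<u64, (Option<Waker>, bool)>>,
}

impl WakerStore {
    /// Called by whatever drives the C library once request `id` has finished.
    fn complete(&self, id: u64) {
        let mut entries = self.entries.lock().unwrap();
        let entry = entries.entry(id).or_insert((None, false));
        entry.1 = true;
        if let Some(waker) = entry.0.take() {
            waker.wake();
        }
    }
}

/// A Future identified by a request id; it keeps its latest Waker in the shared store.
struct RequestFuture {
    id: u64,
    store: Arc<WakerStore>,
}

impl Future for RequestFuture {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut entries = self.store.entries.lock().unwrap();
        let entry = entries.entry(self.id).or_insert((None, false));
        if entry.1 {
            Poll::Ready(())
        } else {
            // Remember only the most recent Waker; `complete` will wake it.
            entry.0 = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}
```

The important detail is that `poll` always stores the *most recent* Waker, because the executor is free to pass a different one on every poll.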
Now if that’s the whole interface, then it’s a simple polling interface, and it’s not ideal for being wrapped in a Rust Future, since it doesn’t fit that model. Given that you are planning to write the C interface yourself, you should probably reconsider the design. Edit: never mind, I misread (see below). Still, if that really is the whole API for the C library in question – i.e. a purely polling-based API where you need to poll separately for every single request (without any way to bundle them up or the like) – then a better C API would be worth pursuing. As far as I can tell, it would also improve the situation for all other users of that library API, be they users from C, or from other languages or frameworks – synchronous or asynchronous – as polling APIs are annoying to work with either way.
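For illustration, a callback-based C API (again, all names hypothetical) would slot almost directly into the waker store from the sketch above:

```rust
use std::ffi::c_void;
use std::sync::Arc;

// Hypothetical callback-based C API: instead of being polled, the library
// invokes `cb(user_data)` once the request has finished.
extern "C" {
    fn lib_start_request(cb: extern "C" fn(*mut c_void), user_data: *mut c_void);
}

// Context handed to the C library; reuses WakerStore/RequestFuture from the sketch above.
struct CallbackCtx {
    id: u64,
    store: Arc<WakerStore>,
}

extern "C" fn on_done(user_data: *mut c_void) {
    // Take back ownership of the context we handed to C, mark the request as
    // done, and wake the task awaiting it.
    let ctx = unsafe { Box::from_raw(user_data as *mut CallbackCtx) };
    ctx.store.complete(ctx.id);
}

fn start_request(id: u64, store: Arc<WakerStore>) -> RequestFuture {
    let ctx = Box::new(CallbackCtx { id, store: store.clone() });
    // Assumption: the library calls `on_done` exactly once with this pointer.
    unsafe { lib_start_request(on_done, Box::into_raw(ctx) as *mut c_void) };
    RequestFuture { id, store }
}
```

The callback does exactly the job a Waker is designed for: notify the one interested task when something happened, with no polling loop anywhere.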
Of course, there are cases where polling can be acceptable: e.g. in contexts where you know things aren’t expected to react quickly, so that low-frequency polling suffices (anything that doesn’t come remotely close to registering as a perceivable percentage of CPU load for the polling loop alone; for instance, doing one poll per frame in a rendering loop shouldn’t be problematic), and where the number of API callers that want to do this polling in parallel isn’t too high.
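A crude sketch of that kind of low-frequency polling, with a made-up `lib_poll_request` standing in for the actual call into the C library:

```rust
use std::time::{Duration, Instant};

// Made-up stand-in for the polling call into the C library: returns true once
// the request identified by `id` has finished.
fn lib_poll_request(_id: u64) -> bool {
    false // would call into the C library here
}

fn render_loop(pending_requests: &[u64]) {
    let frame_budget = Duration::from_millis(16); // roughly 60 FPS
    loop {
        let frame_start = Instant::now();

        // One cheap poll per frame and per outstanding request; this stays far
        // below any perceivable CPU load as long as the request count is small.
        for &id in pending_requests {
            if lib_poll_request(id) {
                // ... handle the finished request ...
            }
        }

        // ... render the frame ...

        // Sleep away the rest of the frame budget.
        if let Some(remaining) = frame_budget.checked_sub(frame_start.elapsed()) {
            std::thread::sleep(remaining);
        }
    }
}
```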
Perhaps this also means, however, that it isn’t a use case where async Rust is actually necessary (a limited number of callers can probably work fine with system threads)?
Feel free to share more details about the nature of the functionality you’re implementing and the motivation for using async Rust; there may well be legitimate reasons, and determining the cleanest approach or workaround probably depends heavily on the specifics of your situation.