With an Erlang 'process', there is a guarantee of the form: after every N reductions, the process gets swapped out. In a sense, there is a 'fairness / progress' guarantee: as long as your 'process' does not block on others, it is guaranteed some number of CPU cycles every second, roughly C / total_number_of_processes for some constant C.
With Rust async I/O tasks, there is no such guarantee.
====
In particular, consider a machine with 128 cores and 12,800 Erlang processes. As long as a process does not block, it is guaranteed roughly 1/100th of a core every scheduling interval. This guarantees that each process is making progress.
Now consider the same machine with 128 cores and 12,800 async tasks. We don't really have any guarantee of any form, right? I.e. 129 `loop {}`s (or less obvious versions of `loop {}`) can basically starve everything else?
Is this tradeoff unavoidable, in the sense that: if we want to be able to interrupt after N instructions, we need to either execute bytecode or insert a yield check every N instructions; whereas if we want full-speed native x86_64 execution, then async tasks can own CPUs and starve other tasks?
There are some things you can do to mitigate that, but they come with their own issues -- the .NET thread pool, for example, will start more OS threads if none of its existing ones come back to the scheduler, but that just introduces a different failure mode of "now there are 32,000 thread-pool threads taking up all the resources".
Indeed, we do not have these guarantees, and it is one of the reasons I wrote my article on blocking the thread. Fundamentally, the problem is that if you call a non-async function from async code, there is no way to yield while that call is running. Supporting that would require changing how all Rust code is compiled, and even that wouldn't work when people use C FFI to call out to other languages.