I think it is important to point out that Ron Pressler's scoped continuations are not the cutting-edge research that he claims they are. They are a new variation on the old concept of user-mode threads, aka "green threads", aka "stackful coroutines", which has been with us for a long while. Go's goroutines are one modern and popular implementation.
That Pressler is not keen on pointing out this lineage is to be expected: he is openly dismissive of programming language innovation, a bias which naturally leads him to ignore said innovation altogether. Here is just one example, among many others, of him ranting about it:
> My academic interests do not include programming languages, which I view as the sub-discipline of computer science that has consistently over-promised and under-delivered more than any other (with the possible exception of AI). I am more interested in algorithms than abstractions, and programming language research is mostly concerned with the latter.
Now that I have that out of the way, let's phrase the problem in a fashion which I think relates better to your underlying point: how do asynchronous programming models compare to one another? First of all, we need a reasonable list of popular asynchronous programming models. I think this blog post from an author of Twisted Python, which is in general a good read, does a very good job of enumerating them:
1. Callbacks
2. Futures
3. Explicit coroutines (where suspension points stand out)
4. Implicit coroutines (aka "green threads")
Rust currently has 2, and wants to move towards 3 for the benefit of a more natural syntax. Ron Pressler is essentially advocating for a more user-visible variant of 4 in the context of Java.
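To make that concrete, here is a minimal sketch of what model 2 looks like, written against the futures crate's 0.3-style combinator API (the exact method names have moved around between crate versions):

```rust
// Model 2 in action: asynchronous steps are chained through combinators,
// and the result is a composed state machine with no visible suspension
// points in the source code.
use futures::executor::block_on;
use futures::future::{ready, FutureExt};

fn main() {
    // `ready` wraps a value in an immediately-complete future; `map`
    // builds a new, bigger future that transforms the output.
    let pipeline = ready(21).map(|x| x * 2);
    assert_eq!(block_on(pipeline), 42);
}
```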
Now, how do these explicit and implicit coroutines differ? In my view, there are two key differences.
The first one is that explicit coroutines make it easier to reason about asynchronous code, because suspension points are clearly visible via "yield" or "await". Implicit coroutines, on the other hand, make it easier to integrate asynchronism into existing codebases, as suspension points do not need to be explicitly annotated, but the price to pay is that the resulting code is harder to comprehend. There is a strong analogy to be made with the difference between implicit exception propagation and explicit error propagation via checked exceptions or monadic errors; I will expand upon it a bit later on.
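To illustrate, here is a toy sketch of the same logic in both styles (the `http_get`/`parse` helpers are hypothetical stubs, and I am using async/await syntax for the explicit side):

```rust
// Stub types and helpers so the sketch is self-contained; only the shape
// of the two fetch functions below matters.
struct Response;
struct Data;
struct Error;

async fn http_get(_url: &str) -> Result<Response, Error> { Ok(Response) }
fn http_get_blocking(_url: &str) -> Result<Response, Error> { Ok(Response) }
fn parse(_r: Response) -> Result<Data, Error> { Ok(Data) }

// Explicit coroutine: the reader can see exactly where the function may
// suspend, because every suspension point is marked with `.await`.
async fn fetch_explicit(url: &str) -> Result<Data, Error> {
    let response = http_get(url).await?; // may suspend here, and only here
    parse(response)                      // pure computation, cannot suspend
}

// Implicit coroutine style (green threads): the same logic reads like
// plain blocking code, and the runtime silently suspends inside calls.
fn fetch_implicit(url: &str) -> Result<Data, Error> {
    let response = http_get_blocking(url)?; // may suspend, nothing marks it
    parse(response)
}

fn main() {
    let _ = fetch_implicit("https://example.com");
    let _ = fetch_explicit("https://example.com"); // just builds a future
}
```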
The second difference is that from an implementation point of view, explicit coroutines easily translate into state machines (e.g. futures + combinators), whereas implicit coroutines essentially force programming language runtimes to replicate the job of an operating system's thread management facilities in user mode.
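To see what "translating into a state machine" means in practice, here is a hand-rolled sketch of what a coroutine counting down from N could desugar to (heavily simplified; the real generated code must also deal with pinning and with borrows that live across suspension points):

```rust
// Hand-rolled sketch of the state machine a countdown coroutine could
// compile down to: the "stack" is reduced to an enum of live variables.
enum Countdown {
    Start(u32),
    Counting(u32),
    Done,
}

impl Countdown {
    // Each call resumes the coroutine from wherever it last suspended.
    fn resume(&mut self) -> Option<u32> {
        match *self {
            Countdown::Start(n) | Countdown::Counting(n) if n > 0 => {
                *self = Countdown::Counting(n - 1);
                Some(n)
            }
            _ => {
                *self = Countdown::Done;
                None
            }
        }
    }
}

fn main() {
    let mut countdown = Countdown::Start(3);
    while let Some(n) = countdown.resume() {
        println!("{n}"); // prints 3, 2, 1
    }
}
```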
It is easy to brush off the second point as something for compiler authors to worry about and for users to ignore, but it actually has far-reaching implications. First of all, as implicit coroutines are much closer to OS threads, the performance benefit of using them is much less clear-cut: you still need to reserve sizeable chunks of RAM for stacks, for example. Second, since a programming language runtime cannot do everything an OS task manager can (e.g. grow a stack efficiently without invalidating pointer addresses, or intercept all blocking syscalls), implicit coroutines are extremely difficult to implement correctly and efficiently in a language like Rust which exposes low-level system access. They are a much better fit for "managed" languages, where the runtime takes a lot of low-level system control away from you in exchange for more implementation tuning headroom.
This is why Rust, which had implicit coroutines a long time ago, eventually dropped them. They are just not a very good fit for a low-level systems programming language.
But the first point, about ergonomic differences, is interesting too. It is essentially the eternal implicit vs explicit debate in action. Compared to implicit exception propagation as implemented in C++, Rust's monadic errors are widely lauded for how much easier they make it to reason about error handling and to understand foreign APIs, and reviled for how much more complex they make error integration, propagation, and composition. It is one of those cases where you cannot have it both ways and must choose sides.
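As a toy illustration of the analogy: with monadic errors, propagation is visible at every call site via `?`, exactly like `await` marks every suspension point in explicit coroutines:

```rust
use std::num::ParseIntError;

// Explicit propagation: every fallible call is marked with `?`, so a
// reader can enumerate the failure points of this function at a glance.
fn parse_sum(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let a: i64 = a.parse()?; // the reader sees this call can fail
    let b: i64 = b.parse()?; // ...and this one too
    Ok(a + b)
}

fn main() {
    assert_eq!(parse_sum("2", "40"), Ok(42));
    assert!(parse_sum("2", "oops").is_err());
}
```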
Hopefully this has given you a bit more food for thought on the complex implications of this design choice, and on why scoped continuations may not be as good a fit for Rust as you think. If you are interested in more discussion on this topic, I strongly suggest that you follow @steveklabnik's advice and go have a look at the long design discussions that have already taken place around generators and async/await. The comments on the Generator RFC PR would be one good starting point.
Now, onto some of your specific points:
> Iterators are mentioned as a goal, yet instead of fn next() -> Option<Self::Item>, the signature is incompatible right out of the gate.
There is a reason why generators are an experimental feature which will require a new RFC before stabilization. They are known to be imperfect and have not yet reached their final design; the point was to get a prototype rolling in unstable form so that the Rust community could experiment with it and form more precise opinions about what the final feature should look like.
> Two different return types from one closure? Err… what? Compare with C#'s yield or async, both of which just wrap a single type, instead of switching types in a temporal sense.
I certainly agree that this specific part of the generator design makes little sense, and IIRC it has already been pointed out during the generators discussion as something we may want to remove from the final design.
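For reference, the unstable API under discussion boils down to something shaped like this (a simplified rendition of `std::ops::GeneratorState`):

```rust
// Simplified sketch of the unstable GeneratorState enum: resumption
// yields one type at `yield` points and another at the final `return`,
// which is exactly the "two different return types" being criticized.
enum GeneratorState<Y, R> {
    Yielded(Y),  // value produced by a `yield`
    Complete(R), // value produced by the final `return`
}

fn main() {
    let state: GeneratorState<i32, &str> = GeneratorState::Yielded(1);
    match state {
        GeneratorState::Yielded(v) => println!("yielded {v}"),
        GeneratorState::Complete(msg) => println!("done: {msg}"),
    }
}
```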
> Doesn’t appear to be extensible to other more complex control flows.
This is where compiler integration will play a role. For example, the experimental async/await macros show how "natural" control flow can be transparently translated into future combinators, producing natural-looking asynchronous code that compiles down to highly efficient state machines.
> Is there a proposal for a rich library of implementations that compose elegantly, like in the linked scoped continuations article? Can I compose the concept of cancellation and async? Can I compose ambiguity and cancellation?
I think I will need some examples of specific problems that you want to solve in order to understand your question better.
> Sure, it may be extensible enough to support forward-only iterators. Is it powerful enough a concept to allow an iterator to be bidirectional? E.g.: support not just next() but also next_back()?
AFAIK, "full" bidirectional iterators are generally problematic in Rust's ownership and borrowing model because they allow an iterator to produce two &mut to a single value in a row (via a next_back/next cycle), which violates the language's "no mutable aliasing" guarantee. Such iterators can thus only be of the streaming kind, where a client is not allowed to hold two values from an iterator at once. And this in turn hampers useful functionality like collect().
> The Iterator trait has dozens of default implementations, providing a rich zoo of functions “for free” on top of just next(). What are the equivalents for generators? Is it going to be as rich? Can it be as rich as Iterators?
If generators are, as currently planned, extended to be allowed to produce something which implements the Iterator trait, you will get all associated iterator functionality for free.
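As a toy demonstration of what "for free" means here: implement just `next()`, and every default method of `Iterator` comes along with it:

```rust
// A minimal Iterator implementation: only `next()` is written by hand.
struct Counter {
    n: u32,
}

impl Iterator for Counter {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.n < 3 {
            self.n += 1;
            Some(self.n)
        } else {
            None
        }
    }
}

fn main() {
    // map(), collect(), sum(), and the rest of the "rich zoo" are
    // default methods inherited from the trait, not written above.
    let doubled: Vec<u32> = Counter { n: 0 }.map(|x| x * 2).collect();
    assert_eq!(doubled, vec![2, 4, 6]);

    let total: u32 = Counter { n: 0 }.sum();
    assert_eq!(total, 6);
}
```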