Demystifying Async/Await - sort of ;)

Thanks for the discussion, and the book. I try to follow the Rust community and the language itself, especially everything concurrent. But alas, I have not coded a line of Rust! And to be honest, I have not read all the words above, either. (But I have reviewed papers on Rust.)

The purpose of this comment is to point out that there is also "another half of the field": the synchronous one. I will refer to matters out of scope for Rust per se, not meaning to harm or discredit any of the above, which seems right in its own context.

But I do have experience with languages that support concurrency: occam 2 in the nineties, go (some, when it arrived), and now XC. Their concurrency (and, for parts of occam 2 and XC, even their parallelism) has a theoretical foundation in the Communicating Sequential Processes (CSP) process algebra.

Those languages are basically synchronous. Should one need asynchronism, it is built on top. Keeping libraries out of the reasoning: in occam and go I would add asynchronism by adding extra PROCs or goroutines; in XC it would be with the help of a so-called interface, with several patterns possible, to get help from the compiler. But under the hood it is synchronous, or state machines, or locks, or the use of different types of tasks (standard, combinable and distributable). For the latter one can also set timing constraints by pragmas, so it is possible to build multi-core (or rather multi-logical-core) deterministic processing. From a log: "Pass with 14 unknowns, Num Paths: 8, Slack: 400.0 ns, Required: 1.0 us, Worst: 600.0 ns, Min Core Frequency: 300 MHz." While other tasks are also running.

Sometimes asynchrony is necessary (most often close to I/O) and sometimes it is nice, but often synchronous systems are just as fast and responsive. Some would argue: safer. And reasoning seems to be done most often when there are synchronous systems (like CSP) to reason about.

Blocking is ok, not pathological, as long as it does not make other jobs wait pathologically. So it often comes down to parallel granularity (not my phrase). With two threads in a large system and ten independent jobs, blocking does not seem acceptable. But with ten threads (tasks, processes) and ten jobs, the reasoning seems easier.

I have blogged quite a bit about this over the years. Disclaimer: no ads, money, gifts etc. So I take the opportunity to link a few relevant notes here.

The first is Rich Hickey's lecture on core.async: see my blog note on Clojure core.async.

I also have a note where I try to discuss what "blocking" might imply; see Not so blocking after all.

Then I compare a lecture by C++'s Herb Sutter with one by go's Rob Pike: search for "Pike & Sutter: Concurrency vs. Concurrency" in my TECH NOTES. (Also commented on by Herb Sutter.)