Lack of an exhaustive Rust book

I am reading the Rust book, but it lacks detailed content on several topics. I have found resources for some of them, but for the rest the only options are the API documentation, which doesn't guide you, or reading open-source code on GitHub.

macros (tutorials and books available)
build scripts (Cargo book)
async/futures (Async book)
tokio (tokio.rs)
serde (serde.rs)
threads (Rust book)
extern "C" / #[repr(C)] code and inline asm
Unsafe Rust (Rustonomicon)
Managing the Cargo build system using Cargo.toml (Cargo book)
List of available traits in Rust's std (cole-miller's reply)
FFI (Rustonomicon)
How do mpsc channels work? (synchronous vs asynchronous) (Rust book)
Rust CI/CD (Cargo book)

If you happen to find book-like resources for the items without one listed, let me know.

4 Likes

For serde at least, there's serde.rs.

2 Likes

Updated the post

tokio.rs has a very good tokio tutorial.

Not sure why jemalloc is mentioned here. It's a memory allocator, not a thread runtime. As for rayon, it works on top of the system scheduler, so it's simply a layer of convenient abstractions, nothing more.

1 Like

Updated the post. It should have been asked as a separate thread. My bad.

The Tokio tutorial has a page on mpsc channels. Most of what it says about them should transfer relatively easily to synchronous channels.
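For the synchronous side that the tutorial's concepts transfer to, a minimal sketch using only `std::sync::mpsc` (names here are mine, for illustration): several producer threads share one cloned sender, and the receiver drains the channel until every sender is gone.

```rust
use std::sync::mpsc;
use std::thread;

// Spawn three producer threads that each send their id over one channel.
fn collect_ids() -> Vec<i32> {
    // An unbounded ("asynchronous") channel: send() never blocks.
    let (tx, rx) = mpsc::channel();
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(id).expect("receiver alive");
        });
    }
    drop(tx); // drop the original sender so rx.iter() can terminate
    let mut ids: Vec<i32> = rx.iter().collect();
    ids.sort();
    ids
}

fn main() {
    println!("{:?}", collect_ids()); // prints [0, 1, 2]
}
```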

There's a cargo book, and the build script is a feature of the cargo.

2 Likes

It is not about how mpsc works, though. I want to improve runtime performance using async.

AFAIK, it's impossible to improve performance just by switching from sync to async mpsc channels (it may easily slow your code down). The improvement comes when the code as a whole performs better written in an async fashion, which in general requires changing the channel implementation, since you must not block in async code.
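In std terms, the sync/async distinction is about whether `send` can block. A std-only sketch of the contrast (tokio's `tokio::sync::mpsc` is the async-aware counterpart, where you `.await` instead of parking a thread):

```rust
use std::sync::mpsc;
use std::thread;

// Contrast: an unbounded channel's send() never blocks, while a
// zero-capacity sync_channel makes send() a rendezvous that parks the
// sending thread until a receiver is ready -- exactly the kind of
// blocking that is forbidden inside async tasks.
fn rendezvous() -> i32 {
    let (tx, _rx) = mpsc::channel();
    tx.send(1).unwrap(); // returns immediately; the value is buffered

    let (stx, srx) = mpsc::sync_channel(0);
    let producer = thread::spawn(move || {
        stx.send(42).unwrap(); // blocks this thread until recv() below runs
    });
    let value = srx.recv().unwrap();
    producer.join().unwrap();
    value
}

fn main() {
    println!("{}", rendezvous()); // prints 42
}
```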

My first-order approximation when thinking about this is that asynchronous operation will be a performance boost if one has thousands or millions of events to wait on, each of which takes little processing to satisfy. For example, hits to a web server, which can spend the bulk of its time shunting HTTP requests to database requests, with the bulk of any request's handling time spent just waiting for things to happen. All that context switching can be very expensive in CPU cycles and memory when done with synchronous threads.

Conversely, a purely async solution would not be able to use all the cores on a machine. Which means computation-heavy work would not be exploiting the full capabilities of the machine and performance would suffer.

Things seem to get murky when an async abstraction can actually make use of pools of threads, running on different cores to get the work done. As in Tokio. I have read hints here and there that async can increase the performance of compute intensive parallel applications, think HPC across multiple machines, where somehow using async allows work to be scheduled more efficiently on the cores/machines as they become free for use. All that is beyond my ken though.
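For the compute-heavy side of the trade-off above, plain scoped threads spread across all reported cores are the straightforward std-only tool. A minimal sketch (the workload and chunking here are made up for illustration):

```rust
use std::thread;

// Sum a slice by splitting it across as many threads as the machine reports.
fn parallel_sum(data: &[u64]) -> u64 {
    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    // Ceiling division so every element lands in some chunk.
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<u64>()))
            .collect();
        // scope() guarantees all spawned threads finish before it returns.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    println!("{}", parallel_sum(&data)); // prints 500500
}
```

Rayon's `par_iter` does essentially this behind the scenes, with work stealing on top.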

1 Like

^ This. I am currently using tokio or actix abstractions for async operations, or std threads when I need OS threads. But I am open to learning the fundamentals if it will improve my understanding.

There is an FFI section of the Rustonomicon.

There are CI examples available here in The Cargo Book. They seem to just be pulled from the docs of their respective platforms, though. The platform's docs will always be the definitive source as they are not part of Rust.

The Book has a walkthrough for creating a thread pool for a web server and the docs for std::thread seem appropriately thorough. Perhaps you're looking for something else? For what it's worth, the already-mentioned rayon has nice docs, a great design and is probably the easiest way to use threads. It offers concurrency through a hidden thread pool without having to "buy in" to the potential bloat of a big library. You don't even need to import and use a new type, just bring the traits into scope.
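The Book's thread-pool walkthrough builds roughly the following shape: workers pull boxed closures off a shared channel. A stripped-down sketch under that assumption (graceful error handling omitted; the names are mine, not the Book's):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct ThreadPool {
    tx: Option<mpsc::Sender<Job>>,
    workers: Vec<thread::JoinHandle<()>>,
}

impl ThreadPool {
    fn new(size: usize) -> Self {
        let (tx, rx) = mpsc::channel::<Job>();
        // The receiver is shared: each worker locks it to pull one job.
        let rx = Arc::new(Mutex::new(rx));
        let workers = (0..size)
            .map(|_| {
                let rx = Arc::clone(&rx);
                thread::spawn(move || loop {
                    // Hold the lock only long enough to take one job.
                    let job = rx.lock().unwrap().recv();
                    match job {
                        Ok(job) => job(),
                        Err(_) => break, // all senders gone: shut down
                    }
                })
            })
            .collect();
        ThreadPool { tx: Some(tx), workers }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.tx.as_ref().unwrap().send(Box::new(f)).unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        self.tx.take(); // close the channel so idle workers stop
        for w in self.workers.drain(..) {
            w.join().unwrap();
        }
    }
}

// Run eight increments on a four-worker pool and return the final count.
fn run_jobs() -> i32 {
    let counter = Arc::new(Mutex::new(0));
    {
        let pool = ThreadPool::new(4);
        for _ in 0..8 {
            let counter = Arc::clone(&counter);
            pool.execute(move || *counter.lock().unwrap() += 1);
        }
    } // pool dropped here: channel closes, workers drain queued jobs and join
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    println!("{}", run_jobs()); // prints 8
}
```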

Channels, message passing and shared-state concurrency are addressed in The Book. The docs for std::sync::mpsc have explanations and examples as well. The "synchronous vs asynchronous" topic is covered in the first section, but it is relevant to the module's API and not the general concept. The crossbeam library has docs for its API that include examples. At this point crossbeam is one of those libraries that pops up in tons of projects. It should at least be mentioned if you're learning about mpsc channels in Rust because of its better performance and extra features, but its channel API is based on the example set by the standard library.

You won't find in-depth discussions of async performance in a library's documentation. The potential use cases for non-blocking code are a topic that comes up in every language that joins the async foray. But if you want to assess the performance of your Rust code, there is a book with tips for benchmarking and general performance-related information.

1 Like

I have updated the post with the Rust book as the resource, since no better material seems to be available. Now only linking C code and assembly remains.