Received my O’Reilly book on Rust programming and have been pumped from the start of my reading. I’m trying to code as much as possible as I read and clear up my confusion early. I have a few questions about some comments the author makes in Chapter 2, which is really a slight intro to Rust. I have much yet to explore, but I wanted to clear up some confusion.
In the section where it introduces some advantages of Rust’s concurrency over C/C++, it gives the following bullet point.
- “Rust ensures that you can’t access the data except when you’re holding the lock, and releases the lock automatically when you’re done. In C/C++, the relationship between a mutex and the data it protects is left to the comments.”
I’m not understanding what it means by comments.
I’m a bit new to multi-threading (in C/C++) but I understand the concept of a mutex, so is my understanding correct that Rust will automatically release the mutex for you, while in C/C++ you have to do this manually yourself?
Lastly, there is another statement below that sounds like a very strong statement so I wanted to verify.
- “No matter how elaborate your program gets, if it compiles, it is free of data races. All Rust functions are thread-safe.”
What does it mean by “Rust functions”? Is a Rust function in this context a function in a crate written by someone else? Is it a function in its standard library? Is it a function that I write in my own code for wrapping up functionality? Or is a Rust function all of the above, meaning any function starting with `fn` written in Rust?
I’m assuming from this point on that Rust is hands down 100% excellent for writing thread-safe code, and if it compiles I’m free of data races and it’s hereby announced “thread safe”?
Thanks guys for your help
They mean that the language (C/C++) has no built-in annotation to express these properties, so the only way you can let people know is via a comment in the code or docs.
A function written in Rust.
Unless you’ve used `unsafe` code that has a bug, yes. Guaranteed data-race free if it compiles.
Rust has a rule that for every piece of data you can have either multiple read-only references, or a single read-write reference. This is enforced at compile time by the borrow checker.
This makes data thread safe, because either only one thread writes to it, or multiple threads only read it, and there’s never* a situation where two threads uncontrollably overwrite each other’s writes.
*) you can opt out of it with atomic types and such, but it’s done explicitly, and not by accident.
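The shared-XOR-mutable rule above can be sketched in a few lines (a minimal example of mine, not from the book):

```rust
// The aliasing rule: any number of shared (&) references,
// OR exactly one exclusive (&mut) reference -- never both at once.
fn demo() -> i32 {
    let mut n = 1;

    let a = &n; // shared borrow
    let b = &n; // a second shared borrow is fine: both are read-only
    println!("{a} {b}");

    // An exclusive borrow is allowed here only because `a` and `b`
    // are never used again below this point.
    let m = &mut n;
    *m += 1;

    // println!("{a}"); // uncommenting this would be a compile error:
    //                  // `n` can't be borrowed shared and exclusive at once
    n
}

fn main() {
    assert_eq!(demo(), 2);
}
```

The borrow checker enforces all of this at compile time; nothing here costs anything at runtime.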
As for locks, in Rust’s standard library there’s `Mutex<T>`, which owns the data (`T`) “inside” itself and won’t let you access it without locking. This means you can’t accidentally use `T` unlocked. And Rust won’t let you use data from multiple threads without wrapping it in `Mutex<T>` or a similar thread-safe wrapper.
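Here’s a minimal sketch of that pattern (the function name is mine): the counter lives *inside* the `Mutex`, and the only way to reach it is through `lock()`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` threads that each increment a shared counter.
fn parallel_count(threads: usize) -> u32 {
    // Arc shares ownership across threads; Mutex guards the u32 inside it.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // lock() hands back a guard; without it the u32 is unreachable
                let mut n = counter.lock().unwrap();
                *n += 1;
            }) // guard dropped at the end of the closure -> mutex unlocked
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4), 4);
}
```

Try removing the `Mutex` and sharing a plain `u32` instead: the program won’t compile, which is the point.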
This is in contrast with other languages, where you could have the mutex and `T` as two separate, independent fields or variables, and just had to remember yourself to lock one before using the other.
These features massively help in avoiding the worst concurrency bugs, but don’t take them as a guarantee of bug-free code. You still have to be careful not to create logic bugs that cause deadlocks or race conditions at a higher level than direct memory access.
It should be pointed out that *data race* has a well-defined meaning that might be a bit narrower than what some people might expect. In particular, it does not encompass *race conditions*; see https://doc.rust-lang.org/nomicon/races.html.
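To illustrate the distinction, here’s a classic check-then-act bug sketched in safe Rust (names are illustrative): it compiles and has no data race, yet another thread could change the balance between the check and the withdrawal, because the lock is released and re-taken.

```rust
use std::sync::Mutex;

// Race condition, not a data race: the check and the act take the lock
// separately, so the invariant can be violated between them.
fn withdraw(balance: &Mutex<i64>, amount: i64) -> bool {
    let enough = *balance.lock().unwrap() >= amount; // check (lock released here)
    if enough {
        *balance.lock().unwrap() -= amount;          // act (a second, separate lock!)
        true
    } else {
        false
    }
}

fn main() {
    let balance = Mutex::new(100);
    assert!(withdraw(&balance, 60));
    assert_eq!(*balance.lock().unwrap(), 40);
    assert!(!withdraw(&balance, 100)); // only 40 left
}
```

The fix is to hold one lock across both the check and the act; the compiler can’t catch this for you, which is exactly the Nomicon’s point.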
Forgive me for my confusion, but would you be so kind to elaborate on a few questions I have in the above quoted block of yours.
What do you mean by “inside” itself and what is “T”? I apologize for these upfront newb questions. Soon here I’ll be past chapter 2 so hopefully I can dig in to the more detailed descriptions.
This might be a stupid question, but how does Rust know what data is shared versus what is not (private to, or only read/written within, a function’s or thread’s scope)? (Hope I worded that right.)
If I’m reading data that is being shared, let’s say between two threads, and I don’t wrap it with a Mutex, then what will the compiler tell me in this case?
Lastly, from someone else’s reply on here they say Rust will automatically unlock the Mutex for me, so what do you mean when you say “wrapping it in a Mutex”?
It’s stored inside the same way elements are stored inside an array. `Mutex` is like a special 1-element array.
Rust knows when and where things are shared, because this is part of the type system and lifetimes. You have `&` and `&mut` references, and the `Send` and `Sync` traits implemented for data types, which describe to the compiler how they can be safely shared.
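For example, here’s a small sketch (the helper name is mine) of how `Send` shows up in practice: `thread::spawn` only accepts values whose types are `Send`, which is how the compiler knows the data may safely move to another thread.

```rust
use std::thread;

// Vec<i32> is Send, so it may be moved into the spawned thread.
// Something like Rc<i32> is *not* Send, and passing it here would
// be a compile error rather than a runtime bug.
fn sum_on_thread(values: Vec<i32>) -> i32 {
    thread::spawn(move || values.iter().sum()).join().unwrap()
}

fn main() {
    assert_eq!(sum_on_thread(vec![1, 2, 3]), 6);
}
```

You rarely write `Send`/`Sync` bounds yourself at first; the standard library’s APIs carry them, and the compiler checks them for you.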
Working with a mutex isn’t entirely automatic. You still have to call `lock()` on it, but then there’s a bit of magic (a RAII guard plus auto-`Deref` for access) to unlock it again when the guard goes out of scope.
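A minimal sketch of that lock/guard dance (function name is mine): the guard derefs to the data, and the unlock happens when the guard is dropped, with no manual unlock call anywhere.

```rust
use std::sync::Mutex;

fn append_world(m: &Mutex<String>) {
    {
        let mut guard = m.lock().unwrap(); // take the lock
        guard.push_str(", world");         // auto-Deref: guard acts like &mut String
    } // guard dropped here (RAII) -> lock released automatically

    // the mutex can immediately be locked again:
    assert!(!m.lock().unwrap().is_empty());
}

fn main() {
    let m = Mutex::new(String::from("hello"));
    append_world(&m);
    assert_eq!(m.lock().unwrap().as_str(), "hello, world");
}
```

The inner braces aren’t required; without them the guard would simply be dropped at the end of the function instead.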
Where can I find more detailed documentation in general (not just on mutexes) from the perspective of how the compiler works?
Have you finished the Rust book chapters on ownership, references and lifetimes? That should help you understand the basic rules.
If you want to understand the implementation from the compiler’s perspective, look for blog posts about the implementation of the Non-Lexical Lifetimes feature in Rust. But be warned: that’s a very deep analysis of the problem and requires an advanced understanding of lifetimes before you can understand how the compiler actually implements them.