I suppose this varies drastically from one application to the next depending on its purpose. However, I get the sense that there are some advanced Rustaceans out there who have adopted a methodical process for laying out the architectural design of their application before a single line of code gets written. This could be for any number of reasons, but a few that come to mind would be:
- Drafting out how multiple consumer threads should be designed and any shared data resources they require.
- How certain interfaces to various peripheral I/O (e.g. serial devices, PCIe, and other specialized interfaces) should be handled.
- How to avoid wasted effort fighting the compiler in the parts of your application that are likely to provoke it.
From my reading, I sense there's a different/preferred "Rust" approach that aligns with Rust's rules regarding ownership, lifetimes, mutable/shared references, etc. Then again, maybe I'm completely wrong here and the right approach can only be acquired through experience.
I'm getting to a point in my Rust learning endeavors where I want to think about how I would architect an application in Rust versus how I have approached it from C/C++.
Yes, you're right that there is something like the "Rust way" of designing programs that avoids fighting with the borrow checker (or maybe conversely, trying to write "the C way" in Rust doesn't get you far).
For the borrow checker, I don't know if there's any better way to learn these things than just by experience.
It's a matter of:

- really "getting" the difference between owned and borrowed values (rather than the confusingly similar by-pointer/by-value distinction from C, which is a distraction in Rust),
- learning the limitations of the borrow checker, so you avoid them instead of getting stuck: e.g. you can't have self-referential structs, and an iterator can't return a reference to its own internal state, and
- learning a few quirky "programming patterns", e.g. `option.as_ref()` is needed often, and the `let foo = { /* borrow stuff here */ };` pattern is useful for limiting the scope of borrows.
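A minimal sketch of those last two patterns (the variable names are just illustrative):

```rust
fn main() {
    // 1. `Option::as_ref` turns &Option<T> into Option<&T>, so you can
    //    inspect the contents without moving them out of the Option.
    let name: Option<String> = Some("widget".to_string());
    let len = name.as_ref().map(|s| s.len()); // `name` is still usable after this
    assert_eq!(len, Some(6));
    assert_eq!(name, Some("widget".to_string()));

    // 2. A block expression limits the scope of a borrow: `numbers` is
    //    only borrowed inside the braces, so it can be mutated afterwards.
    let mut numbers = vec![1, 2, 3];
    let first_doubled = {
        let first = &numbers[0]; // borrow starts
        first * 2
    }; // borrow ends here
    numbers.push(4); // fine: no outstanding borrow
    assert_eq!(first_doubled, 2);
    assert_eq!(numbers, vec![1, 2, 3, 4]);
}
```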
For parallelism it's actually easier. Structure your program to avoid shared mutable data as much as possible. Then sprinkle Rayon all over the place. If it doesn't compile, add `Arc` or `.clone()`. If it compiles, it's good!
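A dependency-free sketch of the same idea using std threads: because the shared data is immutable, an `Arc` is all the synchronisation needed (with rayon, the whole thing collapses to a one-line `par_iter()`):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Shared data is immutable, so wrapping it in an Arc is all the
    // synchronisation we need -- no Mutex required.
    let data = Arc::new((1..=1000).collect::<Vec<u64>>());

    let mid = data.len() / 2;
    let (lo, hi) = (Arc::clone(&data), Arc::clone(&data));

    // Each thread sums its own half of the shared, read-only Vec.
    let left = thread::spawn(move || lo[..mid].iter().sum::<u64>());
    let right = thread::spawn(move || hi[mid..].iter().sum::<u64>());

    let total = left.join().unwrap() + right.join().unwrap();
    assert_eq!(total, 500_500); // same as data.iter().sum()

    // With the rayon crate this whole dance becomes:
    //   let total: u64 = data.par_iter().sum();
}
```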
If I have a program that needs to do two or more things at once I'll usually isolate each responsibility in its own thread, spinning off the threads at the start of my program and then communicating via something like a channel or database. Usually you want to make them as self-contained as possible because that makes synchronisation easier and is generally better for a maintainable decoupled system.
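A minimal sketch of that shape with a std `mpsc` channel (the worker logic here is just a placeholder):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // The worker thread owns its own state and communicates only
    // through the channel -- no shared mutable data.
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for n in 1..=3 {
            tx.send(n * n).unwrap();
        }
        // `tx` is dropped here, which closes the channel.
    });

    // The receiving side iterates until the channel is closed.
    let received: Vec<i32> = rx.iter().collect();
    worker.join().unwrap();
    assert_eq!(received, vec![1, 4, 9]);
}
```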
> How certain interfaces to various peripheral i/o (e.g: serial devices, PCIe, and other specialized interfaces) should be handled.
Traits!! Being able to abstract these things behind a trait makes life sooo much easier. For example, I've got one program which reads "messages" off an SPI bus. You can only really use this on the right device, but because I put the logic behind a trait (with something like a next_message()) you can easily swap the source between SPI, stdin (for creating *nix pipelines), or even a buffer in memory for testing.
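A sketch of that design; the trait and type names here are hypothetical, not from the original program:

```rust
use std::io;

// Hypothetical trait abstracting over "where messages come from".
trait MessageSource {
    fn next_message(&mut self) -> Option<String>;
}

// An in-memory buffer source, handy for testing.
struct BufferSource {
    messages: Vec<String>,
}

impl MessageSource for BufferSource {
    fn next_message(&mut self) -> Option<String> {
        if self.messages.is_empty() {
            None
        } else {
            Some(self.messages.remove(0))
        }
    }
}

// An stdin-backed source for building *nix pipelines.
struct StdinSource {
    lines: io::Lines<io::StdinLock<'static>>,
}

impl MessageSource for StdinSource {
    fn next_message(&mut self) -> Option<String> {
        self.lines.next().and_then(|line| line.ok())
    }
}

// Consumers only care about the trait, not the transport, so an SPI
// implementation could be swapped in on the right device.
fn count_messages(src: &mut dyn MessageSource) -> usize {
    std::iter::from_fn(|| src.next_message()).count()
}

fn main() {
    let mut src = BufferSource {
        messages: vec!["ping".into(), "pong".into()],
    };
    assert_eq!(count_messages(&mut src), 2);
}
```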
Make sure you have a logical ownership hierarchy. Also, a lot of the time it's not necessary to be storing references to objects long-term, and there's no point trying to thread a reference to some base Config struct through your entire program when you could just clone it early on.
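A small sketch of the clone-early approach (the `Config` fields are made up): each subsystem owns its own copy instead of carrying a `&'cfg Config` and the lifetime parameter that comes with it.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Config {
    verbose: bool,
    retries: u32,
}

// Owns its config outright -- no lifetime parameter needed on the struct.
struct Downloader {
    config: Config,
}

fn main() {
    let config = Config { verbose: true, retries: 3 };

    // One cheap clone at startup; the original stays usable.
    let downloader = Downloader { config: config.clone() };

    assert_eq!(downloader.config, config);
}
```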
I hardly ever need to use lifetime annotations in my programs, despite the fact that I often do performance sensitive things like talking to hardware or heavy computations.
In general, try to play around with the various traits in the standard library. Once you learn how to properly use things like AsRef, Into, Deref, ?, and FromStr it often makes your code flow a lot better.
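A quick sketch of a couple of those traits in action (function names are illustrative):

```rust
use std::str::FromStr;

// `AsRef<str>` lets the caller pass &str, String, Box<str>, ... freely.
fn shout(text: impl AsRef<str>) -> String {
    text.as_ref().to_uppercase()
}

// Implementing against `FromStr` gives you `str::parse` for free, and
// `?` propagates the parse error to the caller.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    let port = u16::from_str(s)?; // same as s.parse::<u16>()?
    Ok(port)
}

fn main() {
    assert_eq!(shout("hi"), "HI");
    assert_eq!(shout(String::from("hi")), "HI");
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err());
}
```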
Also, rayon is awesome! If you have a trivially parallelizable problem, just throw rayon at it.
Another piece of tactical advice is to understand interior mutability, and to understand that the real difference between `&` and `&mut` is not mutability, but the guarantee about aliasing.

In particular, the choice between `&` and `&mut` should often be dictated by the calling side, and not by the requirements of the implementation. For example, if your application has a `DB` struct representing a database, there are two different design choices:
1. Any thread in the app can write to the DB at any time. This means that you have to use a `&self` signature on the `DB::write` method, and that you will quite probably need a `Mutex` to actually implement `DB`.
2. There's a dedicated thread for writing the data. In this situation, you can use `&mut self` for `DB::write`, and this will help you make sure there's indeed only one place that writes the data.
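The two designs side by side, as a minimal sketch (struct and method names are illustrative):

```rust
use std::sync::Mutex;

// Design 1: any thread may write, so `write` takes `&self` and the
// interior mutability lives in a Mutex.
struct SharedDb {
    rows: Mutex<Vec<String>>,
}

impl SharedDb {
    fn write(&self, row: String) {
        self.rows.lock().unwrap().push(row);
    }
}

// Design 2: one dedicated writer, so `write` takes `&mut self` and the
// compiler itself guarantees there is only one writing site at a time.
struct SingleWriterDb {
    rows: Vec<String>,
}

impl SingleWriterDb {
    fn write(&mut self, row: String) {
        self.rows.push(row);
    }
}

fn main() {
    let shared = SharedDb { rows: Mutex::new(Vec::new()) };
    shared.write("a".to_string()); // works through a shared reference

    let mut single = SingleWriterDb { rows: Vec::new() };
    single.write("b".to_string()); // requires exclusive access

    assert_eq!(shared.rows.lock().unwrap().len(), 1);
    assert_eq!(single.rows, vec!["b".to_string()]);
}
```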