Is there any way to make compilation faster?

Can't say I really mind the wait, and in certain aspects it actually makes me focus more on making sure my code is correct before I 'cargo run' it - but is there any way to make it faster?

Correct me if I'm wrong, but the way C / C++ compilers tackle the process is by compiling the files into machine code beforehand, and at link time they simply 'glue' them together. Is my understanding completely off the point? Or is there anything that prevents Rust from doing the same?

You can use cargo check, which doesn't produce an executable, but only runs checks. Coupled with cargo-watch you can get auto-updated compiler output too, or with an IDE plugin like RLS or rust-analyzer you can get the errors shown in the editor.


rustc does a lot of computation in its quest to prove that your program is memory-safe, thread-safe, etc. The Borrow Checker is a SAT-solver-like constraint solver that attempts to find lifetimes for all data in a way that is compatible with your code. The LLVM backend uses Rust's declarations of exclusive vs. shared access to improve its optimizations over what is possible for a typical C/C++ program.

Update: rustc uses a constraint solver, which is simpler than a SAT solver. See this recent post.


Thank you, but what I'm asking about is compilation, not pre-checks and errors. The only way to make sure the program runs correctly is to run it, and to run it, it needs to be compiled.

Is it absolutely necessary to go through all of the libraries each and every time to do this?

The reason this question popped into my mind is that, after a while, watching the compiler go through the same libraries over and over again, every time a small change is made or another library is added, naturally makes me question whether it's absolutely necessary.

If it is, I'm more than happy to deal with it. I just wanted to make sure there's no way around it.

Some libraries contain macros or traits or generics that the compiler must read, interpret and then use in its further processing. Obviously if you reference those libraries, or other libraries transitively reference those libraries, the compiler has to process them to reason about their contents. This is somewhat similar to C/C++ .h files, but the processing is a lot more complicated (e.g., proc macros used in generic trait definitions).

Other libraries, without any macros, traits or generics that you use, can be cached after initial compilation.

In general, rustc takes more time than compilers for languages with less-strict compile-time checking. Much of what rustc does is novel relative to mainstream compilers. Those parts of rustc have not had the 100s of person-years of optimization that are found in typical C/C++ compilers.

The recommendation to use cargo check and cargo-watch is to do a quick screen for errors so that you can get on with fixing them with little delay. When everything checks out, it's time to turn the big gun, rustc, on the code and see what other errors it finds. You will find that you spend more up-front time in Rust than in other languages. Most developers find that this is more than compensated for by the reduced time spent debugging, and even more by the lack of midnight calls when a problem develops in the field (e.g., at a customer site).


To me, it's still faster than testing with all those sanitizers on. Yet the sanitizers can only catch a bug if we actually hit it during testing, while the static analysis rustc performs can prevent all kinds of UB, provided all the unsafe code is correct.


Only modified libraries need to be recompiled, unless there's some misconfiguration or a bug in Cargo.

Additionally, rustc is also incremental within individual crates, and will not recompile and typecheck functions that have not been changed since the last time it ran.


This isn't true; there's no SAT solving involved anywhere in rustc (*). Lifetime inference amounts to collecting constraints into a graph and then assigning lifetimes that satisfy them by expanding every lifetime until all of its constraints are met. This is vastly more efficient than SAT solving, which is NP-complete.

(*): Modulo whatever happens in LLVM land, of course


… unless you clear the cache before each run.


It was my impression from reading Niko's blog posts and other material on NLL that, in essence, rustc was using a highly constrained SAT solver. If it's just a simple minimization or maximization process, then of course that will be faster in the limit for complex constraints.

Understatement of the year. I haven't used a debugger in the two years since switching to Rust. I'd rather pay with time and wait for rustc to do its magic than get some unwanted magic for free at run time.


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.