I see a lot of people complaining about it, but from my experience so far it's not a big deal at all.
I'm wondering if I'm somehow an exception to the average user's experience. So far, caching has worked fine; other than a fresh compilation it's almost unnoticeable, and I iterate just as fast (or slow) as I do in other languages.
Is there a specific context where this is a constant pain? Maybe some CI setup, or does it need to be a large project? Does it make it much harder to deploy?
I'm wondering why it seems to be such a big deal for so many people.
I see this same problem with C++ projects. Those of us who spend the time to carefully select components for our dev boxes get good build times. Those who just buy a laptop and ignore compile benchmarks during their selection struggle.
It's survivorship bias for the most part. The majority who are fine with compile times have better things to do with their time than complain about the language or the implementation on online forums. You'll only see the posts here that say "hurr durr Rust compile slow". Nobody will write posts like "I'd like to report that Rust's compile times were OK today and yesterday and the day before".
To be fair, there can be situations in which it's a legitimate concern. I mostly run into it when building Docker images: if caching doesn't work for some reason, recompiling every dependency of a big project on every single `docker build` is usually not feasible. But this is the exception rather than the rule. The tooling works just fine in an everyday development setting.
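One way to keep Docker layer caching working is to build dependencies in a layer separate from your own sources, so that layer is only invalidated when `Cargo.toml`/`Cargo.lock` change. A sketch using the cargo-chef tool (image tags and paths here are illustrative):

```dockerfile
# Stage 1: a base image with cargo-chef installed.
FROM rust:1 AS chef
RUN cargo install cargo-chef
WORKDIR /app

# Stage 2: compute a "recipe" describing only the dependency graph.
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Stage 3: build dependencies from the recipe. This layer stays cached
# across source edits, since it only depends on recipe.json.
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
# Now copy the real sources and build just the application code.
COPY . .
RUN cargo build --release
```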
I find it mostly gets bad when my dependency graph gets huge. The worst is when the graph includes bindings to large C++ projects. I was using RocksDB in my project for a while, but then replaced it with something Rust-native because I was annoyed by how much it killed compile times. That said, this was a project with enough computational needs that I set optimization high even in debug mode, and my CPU is 10 years old... But still, compared to something like Java, which can lean on its runtime more, compile times can get annoying when the dependency graph gets big.
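For reference, the "optimize heavily even in debug mode" setup can be softened: Cargo can optimize only the dependencies while leaving your own crate at `opt-level = 0`, so dependencies are built optimized once and cached while incremental rebuilds of your own code stay fast. A minimal sketch using standard Cargo.toml profile overrides:

```toml
# Keep our own code fast to compile in dev builds...
[profile.dev]
opt-level = 0

# ...but optimize all dependency crates, which are compiled once
# and then reused from the cache on subsequent builds.
[profile.dev.package."*"]
opt-level = 3
```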
The dependency tree is the primary contributor to slow compile times, in my experience. Most reports conclude that there are specific situations that cause poor performance, such as monomorphization. Given a large enough dependency tree, the project has a significantly greater chance to have code which falls into these specific categories that are slow to compile.
It is not always possible to reduce or eliminate large parts of the tree. But if you can, it may prove to be effective at addressing the problem. The fastest code to compile is the code you don't have to compile.
I had a little bit of an epiphany relating to build times a few years ago.
We have a custom package management system for building embedded platforms. The tool to actually create installation packages and extract them into filesystem images was written in plain C. At some point we decided to rewrite it in C++ (it made sense at the time).
One of the issues we had was that some of the packages we used could not cross-compile, or would not cross-compile correctly, so we have a native ARM-based development system that we use to build these problematic packages. In order to construct packages for these builds we built the package management tool on the ARM board.
Building the old C-based package management tool on the ARM board took just under one minute. The C++ version is a pretty straight port; it used so-called "Modern C++" (at the time) and didn't add or remove any functionality relative to the C version, but despite that it takes just over 15 minutes to build.
On a modern x64 system you can barely notice the difference in build times; both the old C and the new C++ code build in mere seconds, so the difference doesn't really matter in practice. It's only when you build them in a resource-limited environment that the difference becomes quite massive.
My epiphany was basically this: C++ and Rust are slow to compile (compared to C), it's just that we normally don't notice it because our regular computers are so fast.
Is the compiler so slow that it warrants pouring resources into making it faster? Well, the Rust compiler developers seem to think so; there's a team dedicated to the compiler's own performance.
But for 99% of my own daily work the "slow build times" aren't a problem in practice.
Yes, it is slow.
It can be made tolerable in many situations; e.g. if you split your project into smaller crates and have a fast linker, then a small incremental rebuild on a fast machine takes just a few seconds.
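To illustrate the crate-splitting point: in a workspace layout (the member names below are made up), cargo rebuilds only the crates whose sources changed, plus their dependents, and can compile independent members in parallel. A sketch of a top-level Cargo.toml:

```toml
# Hypothetical workspace: one big crate split into several smaller ones.
# Editing code in `api` no longer forces a rebuild of `core` or `storage`.
[workspace]
resolver = "2"
members = ["core", "storage", "api", "cli"]
```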
But the slowness shows, and does require workarounds. The Bevy engine has a special dynamic-linking mode to speed up development builds. In CI you'll probably need to set up sccache. For some projects CI builds are so slow that they have to use merge queues.
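On the fast-linker point, one common setup on Linux is pointing Cargo at mold (or lld) via `.cargo/config.toml`. A sketch, assuming `mold` and `clang` are installed:

```toml
# Use clang as the linker driver and tell it to delegate to mold,
# which links large binaries much faster than the default linker.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```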
I have a few projects that take over 30 minutes to build in release mode. They power web services, so the long rebuild times are a potential downtime risk. If a service needed an urgent fix, I couldn't deploy it in less than 30 minutes, and if I don't get it right on the first try, I'm looking at 1h or 1.5h of downtime…
On my last project, I spent over a month refactoring the codebase to essentially pull it "inside out" and break it into more crates to improve build times. It... helped... a bit. Yes, Rust is slow. I'd say that, for the most part, that slowness is a consequence of the work it's doing, and not a lack of optimisation or effort on the part of the compiler devs. But there's definitely still room for improvement (improving `rustc`'s output to LLVM, integrating a faster backend like Cranelift, making blood sacrifices until The Old Ones grant us a fast linker on Windows, etc.).
Also, I just want to gently caution against making arguments that stray into "stop being poor lol" territory. Not everyone can afford Threadrippers and large amounts of RAM. There's a degree of irreducible complexity in a language like Rust that no amount of optimisation will ever offset, but that shouldn't justify simply not caring.
Is it a big deal? Not for me so far. Because:
Typically I don't do a full recompile very often. I can be writing code all day and rely on rust-analyzer in VS Code to put red squiggles under things when I am going wrong (which is all the time), or use `cargo check` occasionally.
When I feel the need to actually run and test something new, only an incremental build is required, which is plenty quick enough.
One pain point, though, is pulling my creations down to their target system, a Raspberry Pi or Jetson Nano or the like; there the build is annoyingly slow. But again, that does not happen very often, and I can generally test most of what I'm doing on the MacBook Pro I use.
One embarrassment was demoing something to my business partner on a Jetson Nano. He is a Python head and former Free Pascal user, so the long compilation was a bit shocking to him. As we waited for the code to build on the Jetson, I could see on his face that he was wondering why on Earth I was doing this...
Next time you meet with him and see his Python code crash on some edge case that could have been prevented by a static type system, feel free to produce the same surprised/shocked facial expression.
How would that happen? People are trained to accept crashes.
"The whole program collapses if you type `https:`? Just use copy-paste. We may fix that in a year or ten." That's the norm, not the exception.
Luckily I have no worries there. We had that discussion a couple of years ago. My partner is an EE graduate with a masters, and he has done a lot of work on "serious" software since then. So despite being much younger than me, he is not a sloppy, agile, code-it-and-run web dev. He holds to traditional values of reliability, predictability, robustness, and quality in general. Which is just as well, as we are working on some quite critical control systems.
In fact we recently had a discussion about deeming the control system he has been working on in Python for the last year a "prototype" or "pseudo code" and how it would be great to replicate that in Rust for production.
So while I have sold him on the Rust idea I'm not sure I'll ever get him into working in Rust. That is fine for now, he is the one with the best domain knowledge around here and it's better he spends his time thinking about domain level problems.
Because the compiler builds component crates in parallel, it can sometimes cause system instability when working with large projects by oversubscribing system resources.
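If that happens, Cargo's parallelism can be capped. A sketch of a `.cargo/config.toml` entry (pick a job count that fits your machine):

```toml
# Limit the number of parallel compilation jobs
# (the default is the number of logical CPUs).
[build]
jobs = 4
```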
I am using a Raspberry Pi 4, and have had little but joy.
Including compile times, which have gotten faster.
I'd like to report that Rust's compile times were OK today and yesterday and the day before.
I'll keep you posted.
And this is how the TWiR Quote of the Week permanently became "compile times were mostly fine this week".
I think the compilation itself is fine, and maybe even better than the situation for C++. In particular, it has good built-in profiling support via `cargo build --timings`.
But it does take some experience to speed up builds for large projects.
sccache helps a lot (see "Stupidly effective ways to optimize Rust compile time"). There are still things like feature unification, build-dependencies, and test targets that require special attention when your project gets very large...
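For reference, wiring sccache in is a one-liner in `.cargo/config.toml` (assuming sccache is installed and on the PATH):

```toml
# Wrap every rustc invocation with sccache, so compiled artifacts are
# cached locally or in shared/cloud storage and reused across builds.
[build]
rustc-wrapper = "sccache"
```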
For our project, risingwavelabs/risingwave (a PostgreSQL-compatible streaming database), the slowest parts are now:
- C++ dependencies (haha) such as rdkafka-sys, protobuf-src, and zstd-sys, which cannot be cached by `sccache` and are also very slow to compile.
- For release builds, LTO takes most of the time, even when we use a faster linker.
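"Thin" LTO is a common middle ground here; it usually recovers most of full LTO's runtime benefit at a fraction of the link-time cost. A Cargo.toml sketch:

```toml
[profile.release]
# "thin" links much faster than `lto = true` (fat LTO),
# usually with only a small runtime penalty.
lto = "thin"
```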
I would say it depends on your perspective. If you are used to a big C++ project, then Rust isn't that bad. And if you can live with all the workarounds and hacks (caching, incremental builds), it is also not awful.
However, compared to a lot of other things it is incredibly slow. For example, there is no way I can use GitHub Actions on a private project, because the included minutes would be spent in a few days. A 2-minute build on my laptop easily takes 40 minutes on GitHub. The caching works poorly, as it requires a full rebuild every time I change `Cargo.lock`, which can be several times per day.
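For what it's worth, caching on GitHub Actions tends to work better with a cache action that understands Cargo's layout. A sketch of a workflow job (action versions may have moved on):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Caches the cargo registry and target/ directory with keys that
      # allow partial reuse even when Cargo.lock changes.
      - uses: Swatinem/rust-cache@v2
      - run: cargo test --all-features
```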
Also, even with all the caching and incremental compilation, every time I save it takes rust-analyzer several seconds to come back online, which is pretty bad.
Still, I say it is worth it as I don't know a better language. But I really hope some effort is spent improving the situation. A full rebuild should be 30 seconds, not 10 minutes.
It feels faster than waiting for Python programs to crash at runtime!
If you have time, I'd be interested in hearing more about this. I recently attempted something similar, where I tried to reduce the "depth" of my workspace:
- there is nothing I can do about external deps, and since I don't modify them, they are a no-op for incremental rebuilds;
- within my workspace, a high/deep crate tree means less parallelism, while a shallow/flattened tree means more parallelism.
So I spent some time with various techniques to "shatter" crates and flatten the workspace's dependency tree. I think I shaved compile time by 50%, but at the cost of losing lots of inter-module optimizations (which has not hurt me so far).