Why is compilation time such a big deal for you?

I feel like we might need a fine-grained survey to tease apart all the different requirements, since there seem to be strong correlations and overlap between:

  • most painful use case: cold release builds vs cold debug builds vs incremental debug builds vs incremental test running vs cargo check vs rust-analyzer/IDE responses
  • what machines rustc is being run on (e.g. would making rustc "more parallel" help, or are there no cores left to exploit?)
  • what does -Z self-profile show?
  • how many LoC is the project?
  • how many transitive crate dependencies does the project have?
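On the self-profiling point, here is a rough sketch of how that data can be gathered today (this assumes a nightly toolchain and the `summarize` tool from the rust-lang/measureme repository; the exact output file name varies per run):

```shell
# Compile the crate with self-profiling enabled (nightly only);
# this writes a <crate>-<pid>.mm_profdata file in the working directory
cargo +nightly rustc -- -Z self-profile

# Install and run the `summarize` tool from rust-lang/measureme
# to see which compiler phases dominated
cargo install --git https://github.com/rust-lang/measureme summarize
summarize summarize <crate>-<pid>.mm_profdata
```

That at least separates "time spent in type checking" from "time spent in LLVM", which matters a lot for the debug-vs-release question.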

It sounds like the "typical" Rust user is mostly bothered by either incremental test running or cargo check taking several seconds to a few minutes when they should be closer to a few seconds, while several significant minorities like gamedev and embedded consider cold/incremental debug builds to be the biggest pain point. It seems like nobody is that bothered about cold release builds or RA/IDE response times, which is making me wonder if focusing on Cranelift for debug builds would provide the best ecosystem-wide ROI. But that's still just eyeballing anecdata.


Have you heard the good news of our government robot and savior, bors? Instead of waiting for CI to finish so that you can merge a change yourself, you tell bors the merge request seems okay to you, and it merges automatically once the build passes.


I think, in a way, that's actually what I do in Rust -- but the "test" I'm running is cargo check :upside_down_face:

Which actually is a work-around on two levels — in a statically typed language, you can expect to see compilation errors in an IDE immediately.
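For what it's worth, that cargo check loop can be automated from the command line with the third-party cargo-watch helper (a sketch, assuming you're happy to install it):

```shell
# One-time install of the third-party cargo-watch tool
cargo install cargo-watch

# Re-run `cargo check` on every file save, for near-IDE feedback in a terminal
cargo watch -x check

# Or chain in the test suite once the check passes
cargo watch -x check -x test
```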

Unfortunately, I think bors is a non-starter for us because it's a GitHub tool, while my company uses GitLab for all our proprietary code. I've seen how it's used in other projects though and think it's awesome!

The "merge" part is not normally an issue because GitLab gives you a "merge this PR when CI finishes" button. I find the problem comes when you need to know a particular build passed, for example when merging in the last change before cutting a release, or because CI does different/more tests than you can do locally (e.g. it tests on a Windows machine while you develop on Linux).


Compilation times are not much of a concern since I bought my most recent fast machine, except for tests. In that case, it is not compile times, but linking times that I find annoying.

Suppose I am working on a 31k-line library, with 5k lines of tests, broken into 54 separate binaries. (This is not very hard to suppose at all.) If I make a one-line code change to said library, the incremental compilation is pretty much instant, and that's cool.

However, linking those objects into all the binaries is surprisingly slow to me, coming from a C background. Even if I add a dev profile override in Cargo.toml and set lto = false, it can only link 3-4 binaries per second (EDIT: on a 4-core machine). That's enough for me to start tapping my foot, or wondering if something is being rebuilt that shouldn't be.
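One way to confirm whether the time is going to recompilation or purely to linking is Cargo's built-in timing report (a sketch; the report lands under target/cargo-timings/):

```shell
# Produce an HTML report showing each compilation unit, when it started,
# and how long it took; a relink-only build shows almost no rustc time
cargo build --timings
```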

Also, as a side comment:

I have just the thing for you, good sir!

I couldn't find it in 15 minutes of searching, but I remember seeing a presentation a while back where a research team had written a new instruction selection pass for LLVM. It had a 100% guarantee of selecting the mathematically ideal instruction stream every single time!

Why didn't they upstream it? Because that perfection takes 5-20 times longer to compile, and they didn't want to inflict that on most users.

You might find it worth looking into using that out-of-tree LLVM version with Rust! :smile:


This reminds me of LLVM super-optimization, though you're probably referring to something else.

If you aren't already, you may be interested in trying out linking with lld.
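For reference, the usual way to wire that up on Linux is a .cargo/config entry like the following (a sketch, assuming clang and lld are installed system-wide and an x86_64 Linux target; adjust the triple for your machine):

```toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```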


Interesting, @CAD97. I saw that a while ago, but forgot Rust had shipped that!

I tried it: I made a symlink to the rust-lld that comes with Rust (since I am on a GCC-based Linux), and put this in my .cargo/config file:

rustflags = ["-C", "link-arg=-fuse-ld=/path/to/symlink/ld.lld"]
linker = "clang"

It built the binaries correctly, but did not seem to make much difference in linking times.

That is about the same time frame, @lxrec, but is indeed not what I am talking about.

However, your link was still helpful, because it helped me find something very close, if not the original: an LLVM compiler plugin for a combinatoric scheduler on GitHub. Unfortunately, it seems to only support LLVM 6, which Rust has recently moved past (if not dropped support for entirely).

Thanks, this describes my experience extremely well when I work with C++ instead of JavaScript or Python. The slow feedback cycle is really bad for productivity, and it takes a very long time to get used to not having feedback at your fingertips.

My Rust projects have so far been small, though I also find myself waiting for the compiler there more often than I would like.

I wish we could have an interpreted version of Rust that would run programs maybe 50 times slower than the compiled version — but with 100 times better latency.

I see Miri exists, but since it's an interpreter for MIR, I don't know if it gives the kind of speed up I'm after?
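For what it's worth, Miri is driven through Cargo on nightly (a sketch below), though it's built for catching undefined behavior rather than for speed; in practice it runs far slower than the 50x budget you mention, so it's probably not the interpreter you're after:

```shell
# Add the Miri component to a nightly toolchain
rustup +nightly component add miri

# Interpret the program (or tests) instead of compiling to machine code
cargo +nightly miri run
cargo +nightly miri test
```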


Just to throw my 2 cents out there. Fast compilation is nice to have, and slow compilation is an annoyance. The most annoying IMHO is after updating the compiler, which forces a clean build. With large projects this can take several minutes. I haven't experienced any build times of hours, but it's probably not unheard of.

Also, obligatory https://xkcd.com/303/ reference.

If it's possible to put all tests into one binary, for example by moving all #[test] modules into a single workspace crate, I'd recommend doing so, as it has saved me a lot of time and disk space.
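A sketch of one way to get that layout without a separate workspace crate (module names are illustrative): Cargo builds each top-level file in tests/ as its own binary, but a subdirectory containing a main.rs is compiled and linked just once.

```rust
// tests/suite/main.rs -- the single integration-test binary.
// Top-level files in tests/ each become their own executable, but a
// subdirectory with a main.rs is compiled and linked just once.
// In a real project these would be `mod parsing;` etc., pointing at
// sibling files like tests/suite/parsing.rs; they are inlined here
// only to keep the sketch self-contained.

mod parsing {
    #[test]
    fn parses_empty_input() {
        // An empty string is not a valid integer
        assert!("".parse::<i32>().is_err());
    }
}

mod linking {
    #[test]
    fn basic_arithmetic() {
        assert_eq!(2 + 2, 4);
    }
}
```

With 54 binaries collapsed into one, the link step runs once instead of 54 times.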

I didn't really feel the pain of compilation times when developing natively, but when porting a project to support WASM, I tend to have to build support into dependencies. That means large portions of the project need rebuilding, so not a lot of time is saved by incremental compilation.

When I'm debugging or experimenting, I often make many small changes, which have to be tested by actually running the program. Each small change requires a recompile, which is no fun.

Anything over 1 second I would consider long. Of course it's unreasonable to expect a compiled language like Rust to be that fast, so I put up with it (my compile times are usually 1-5 minutes long).

It obviously hasn't stopped me from using Rust (which is by far my favorite language), but it definitely is annoying.

At my job we have a large TypeScript project which takes 5-10 minutes to compile, so I don't think Rust is particularly slower than other compiled languages.


It depends on the language. My second favorite language after Rust is Haxe, which can compile even large programs in 12 seconds (on a recompile). Also, a big competitor to Rust is Go (not that Go can do everything Rust can, but there are plenty of programs you might write in Go that you could also write in Rust), which likewise compiles in seconds.


From personal experience, I've found that as a rule of thumb, compiling Rust takes about 1 second for every 1,000 lines of source code.*

However, I don't find myself compiling too often. RLS provides me with near-realtime feedback even for fairly large projects.

Using RLS, I don't mind 10 second debug build times, as my code will usually compile cleanly if RLS indicates no errors are present. I only really compile release builds myself after the debug build is working and I'm optimizing performance - I don't really mind checking in my code for a CI release build while I work on something else.

*Of course, this 'benchmark,' like all benchmarks, has flaws. On my machine, a 10,000-line project takes around 10 seconds to compile (after you factor out the time taken to compile the dependencies), and a small 1,000-line project will compile in about a second.

For me rust compile times are tolerable for small projects, although I would certainly like them to be faster.

Where compile times really hurt is for large projects though. The only really big project I've tried to contribute to is the Rust compiler/standard library, and I have to say that the biggest reason I haven't contributed more is the compile time. On my computer it takes over an hour just to finish the first stage. Granted, for changes to core or std, once I've gotten the first stage compiled I can iterate more quickly, but when I start working on a task (and most of my changes have been small), waiting that long is painful. And I usually just rely on CI to compile past the second stage and run most of the tests, because that would take hours on my machine anyway.
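For anyone in the same boat, the usual stage-1 iteration loop looks roughly like this (a sketch; exact paths and flags vary with the age of your checkout, so check the rustc dev guide for your revision):

```shell
# Type-check the whole compiler without a full build (much faster)
./x.py check

# Build only a stage-1 compiler, which is enough for most iteration
./x.py build --stage 1

# Run a single test suite rather than everything, e.g. the UI tests
./x.py test --stage 1 tests/ui
```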

I have more experience with really long compiles with Scala (at my day job), another language known for slow compilation. And let me tell you, it's a pain. Incremental compilation helps a lot when you are making changes in the leaves of the dependency tree, but if you make a change in something that everything depends on, it's basically the same as doing a cold build. And with a really big project with millions of LoC, builds can take hours. Longer compilation means longer until QA can test it, and longer until it can be deployed.

Related but not identical to compile time is compiler resource usage. If compilation takes more time, that means you either can't run as many builds on your existing hardware, or you need more hardware to run your builds in a given timeframe.

