Why is compilation time such a big deal for you?

I mean modularization beyond what good practice calls for, as in: "this module ideally belongs in this crate, but I'll move it to another instead so it will get recompiled less often".

I mean the time it takes to release a fix to production. That doesn't imply the production code is bad or untested. The world changes around code, so there may always be a need to react: a 3rd-party API may go down and I may need to switch to another ASAP, or my site may be getting spammed and I'll need to add tighter limits.

If I can build code quickly, I can rely on being able to be agile with the code itself, which to me is the easiest and most flexible solution. But if I can't build the code quickly, I have to put more effort into ahead-of-time prevention and develop other ways of making changes quickly. This means that build time has a ripple effect on designs and development costs.

7 Likes

Compile times are on a very non-linear scale for me. Below a certain threshold (~15 seconds) I barely notice them, but above it I have time to get distracted and my productivity drops. Luckily most of my Rust projects are small enough for that not to be a problem.

2 Likes

Compile times can be a big deal in game development, where there is no way to know how the game will play without re-compiling and restarting it. Games are one of those things where it isn't necessarily a bug that forces you to re-compile the program: there is simply no way to know exactly how the physics, rendering, feel, or "funness" factors are going to be affected by the completely valid logic of the game.

This means that when writing games, writing bindings to scripting languages such as Lua, or providing some method of scripting that lets you compile and run tiny Rust programs very quickly, becomes important and a large influence on the design of the game or engine.

Granted, with a game you are probably going to want to go as far as hot reloading and such, which makes you consider those design decisions around compile times and scripting anyway. But even the simplest Pong game was taking 30 seconds to 1.5 minutes to re-compile on my laptop, which made it very difficult to experiment and learn how the engine worked.
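One low-tech alternative to full scripting bindings, by the way, is to keep tunable values in a plain text file and re-read it while the game runs, so tweaking physics constants doesn't require a rebuild at all. A minimal sketch (the `params.txt` file name, the key names, and the key = value format are all made up for illustration):

```rust
use std::collections::HashMap;
use std::fs;

/// Parse "key = value" lines into a map of tunable f64 parameters.
/// Malformed lines are skipped so a typo in the file can't crash the game.
fn parse_params(text: &str) -> HashMap<String, f64> {
    let mut params = HashMap::new();
    for line in text.lines() {
        if let Some((key, value)) = line.split_once('=') {
            if let Ok(v) = value.trim().parse::<f64>() {
                params.insert(key.trim().to_string(), v);
            }
        }
    }
    params
}

fn main() {
    // Re-read the file each frame (or on a timer); for a Pong-sized game
    // the cost is negligible, and every save shows up on the next frame.
    let text = fs::read_to_string("params.txt").unwrap_or_default();
    let params = parse_params(&text);
    let gravity = params.get("gravity").copied().unwrap_or(9.81);
    println!("gravity = {gravity}");
}
```

It doesn't replace real scripting, but it covers the "how does it feel with a slightly different constant" loop without touching the compiler.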

12 Likes

Wow, I just came here to ask something about compilation time optimization, so I can confirm it's already tiresome for a project of a few thousand LOC. :slight_smile:

I realize some people say it's not a problem "because RLS is fast", so they seem to rely on the compiler as a checker.

It's OK, but if you want to check your program's behavior, you have to execute it, sometimes many times per day. In video game, UI, or embedded programming, your program may compile fine and yet there can be bugs in your game/UI/electronics. And that's even with incremental compilation.

Even for C++ projects, you sometimes have to wait a long time for CI. I haven't tried it with Rust yet, but CI for a big Rust code base looks scary.

I'm very new to Rust so I still have to learn good habits, but compile time seems to increase quite fast even on my tiny projects. Having compile-time profiling would be interesting.

Keep the good work! :heart:

1 Like

If you mean the Rust teams' own profiling of rustc's performance on a standard benchmark suite before and after each change to its source code, that would be https://perf.rust-lang.org/. It's already standard to block merging PRs on a "perf run" to ensure there are no unintended regressions.

If you mean you yourself profiling rustc's performance on an ad-hoc basis against whatever code you're trying to build with it, I believe the latest and greatest in this space is the new-ish -Z self-profile flag.
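For anyone who wants to try that, an invocation looks roughly like this (nightly only; the exact output file names have changed between versions, and the `summarize` tool lives in the separate rust-lang/measureme repository, so treat this as a sketch):

```
# Write raw self-profiling data into ./profile/ (nightly toolchain required)
RUSTFLAGS="-Zself-profile=profile" cargo +nightly build

# Inspect the data with the `summarize` tool from rust-lang/measureme
summarize summarize profile/<crate>-<pid>.mm_profdata
```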

You might also be asking about PGO: https://doc.rust-lang.org/rustc/profile-guided-optimization.html

Hopefully it was one of those :sweat_smile:

6 Likes

I like longer compilation times because it makes me feel my binary is more high quality once it's done. If I could increase compile times and leave it on overnight, I would.

10 Likes

I have a ~150K line Rust project. It takes > 10 minutes to build. This really hurts my productivity because there's a significant amount of time I'm just sitting around waiting for the build to finish so I can run tests. (I'm still glad to be using Rust, but a fast compiler would make it so much better).

4 Likes

Therefore I think a truly fast Rust compiler (which to me basically means one that can effectively parallelize the end-to-end build, up to 32 cores say, even in the presence of long chains of crate dependencies) would be transformative for many existing Rust users and for expanding the scope where Rust is appealing.

It's such an important problem but I guess it would take a significant commitment of resources to pull it off or prove it's genuinely impossible.

One of the most promising ideas I'd heard for the compiler (to me, who doesn't know much about it) was to use the WASM target and Cranelift to generate machine code quickly for debug builds, but I don't think it's really close to being a possibility at the moment (just from what little I've seen while browsing GitHub).

I'm guessing that part of the limit on Rust's compile speed comes from the fact that it uses LLVM as the compiler backend, which is something the Rust community doesn't maintain (and which isn't written in Rust). Using Cranelift would provide a 100% Rust solution, even if you still wanted to use LLVM for release builds.

I feel like we might need a fine-grained survey to tease apart all the different requirements, since there seem to be strong correlations and interactions between:

  • most painful use case: cold release builds vs cold debug builds vs incremental debug builds vs incremental test running vs cargo check vs rust-analyzer/IDE responses
  • what machines rustc is being run on (e.g. would making rustc "more parallel" help, or are there no cores left to exploit?)
  • what does -Z self-profile show?
  • how many LoC is the project?
  • how many transitive crate dependencies does the project have?

It sounds like the "typical" Rust user is mostly bothered by either incremental test running or cargo check taking several seconds to a few minutes when they should be closer to a few seconds, while several significant minorities like gamedev and embedded consider cold/incremental debug builds to be the biggest pain point. It seems like nobody is that bothered about cold release builds or RA/IDE response times, which is making me wonder if focusing on cranelift for debug builds would provide the best ecosystem-wide ROI. But that's still just eyeballing anecdata.

1 Like

Have you heard the good news of our government robot and savior, bors? Instead of waiting for CI to finish so that you can merge, you tell it that the merge request looks okay to you, and it merges automatically when the build passes.

7 Likes

I think, in a way, that's actually what I do in Rust -- but the "test" I'm running is cargo check :upside_down_face:

Which actually is a work-around on two levels: in a statically typed language, you can expect to see compilation errors in an IDE immediately.

Unfortunately, I think bors is a non-starter for us because it's a GitHub tool, while my company uses GitLab for all our proprietary code. I've seen how it's used in other projects though and think it's awesome!

The "merge" part is not normally an issue because GitLab gives you a "merge this PR when CI finishes" button. I find the problem comes when you need to know a particular build passed, for example when merging in the last change before cutting a release, or because CI does different/more tests than you can do locally (e.g. it tests on a Windows machine while you develop on Linux).

3 Likes

Compilation times have not been much of a concern since I bought my latest fast machine, except for tests. There it is not compile times but linking times that I find annoying.

Suppose I am working on a 31k-line library, with 5k lines of tests, broken into 54 separate binaries. (This is not very hard to suppose at all.) If I make a one-line code change to said library, the incremental compilation is pretty much instant, and that's cool.

However, linking those objects into all the binaries is surprisingly slow to me, coming from a C background. Even if I create a dev profile in Cargo.toml and set lto = false, it can only link 3-4 binaries per second (EDIT: on a 4-core machine). That's enough for me to start tapping my foot, or wondering if something is being rebuilt that shouldn't be.
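For reference, the relevant knobs live in the dev profile in Cargo.toml. Note that lto = false is already the default for dev builds, so setting it explicitly shouldn't change anything; a sketch of what I believe can actually move the needle (not a tuned config):

```
[profile.dev]
# false (the default) still does "thin local" LTO within each crate;
# "off" disables LTO entirely and can shave a bit more off link time
lto = "off"
# Less debug info (line tables only) means smaller objects for the linker
debug = 1
```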

Also, as a side comment:

I have just the thing for you, good sir!

I can't find it in 15 minutes, but I remember seeing a presentation a while back where a research team had written a new instruction selection pass for LLVM. It had a 100% guarantee of selecting the mathematically ideal instruction stream every single time!

Why didn't they upstream it? Because that perfection takes 5-20 times longer, and they didn't want to inflict that on most users.

You might find it worth looking into using that out-of-tree LLVM version with Rust! :smile:

3 Likes

This reminds me of LLVM super-optimization, though you're probably referring to something else.

If you aren't already, you may be interested in trying out linking with lld.

3 Likes

Interesting, @CAD97. I saw that a while ago, but forgot Rust had shipped that!

I tried it by putting a symlink to the rust-lld that comes with Rust (since I am on a GCC-based Linux), and put this in my .cargo/config file:

```toml
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=/path/to/symlink/ld.lld"]
linker = "clang"
```

It built the binaries correctly, but did not seem to make much difference in linking times.

That is about the same time frame, @lxrec, but is indeed not what I am talking about.

However, your link was still helpful, because it helped me find something very close, if not the original: an LLVM compiler plugin for a combinatoric scheduler on GitHub. Unfortunately, they seem to only support LLVM 6, which Rust took out of tree (if not completely dropped) recently.

Thanks, this describes extremely well my experience when I work with C++ instead of JavaScript or Python. The slow feedback cycle is really bad for productivity, and it takes a very long time to get used to not having feedback at your fingertips.

My Rust projects have so far been small, though I also find myself waiting for the compiler there more often than I would like.

I wish we could have an interpreted version of Rust that would run programs maybe 50 times slower than the compiled version — but with 100 times better latency.

I see Miri exists, but since it's an interpreter for MIR, I don't know if it gives the kind of speed up I'm after?
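From what I can tell, Miri is aimed at catching undefined behavior rather than speed, and interpreting MIR is in practice orders of magnitude slower than even a debug build, so it's probably not the speedup I'm after. It is at least easy to try (nightly toolchain):

```
rustup +nightly component add miri
cargo +nightly miri run
```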

1 Like