How to benchmark incremental build times for Rust projects when selecting a new laptop

I'm creating a Rust compilation leaderboard to evaluate laptops for use as development workstations.

What matters most is the incremental build time - the time it takes to recompile after small code changes during active development.

On almost every project (especially web servers) I use cargo watch -x run, and whenever I change even a single file the project is recompiled (in debug mode) and I wait many seconds to verify the changes.

Here's what I'd like advice on:

  1. What specific steps can I ask people to take to measure these times accurately? Is a debug build sufficient to gauge incremental build performance?

  2. Which open-source Rust project would be ideal for testing? It should include generics, traits, and multiple crates, while not being overly complex.

  3. Are there benchmarking tools (e.g., Geekbench) that measure what I'm looking for, or do I need a custom solution? (I don't know whether Geekbench is representative of the workload I care about.)

  4. Should I focus on single-core or multi-core speeds to compare CPUs? This would help me decide, for example, whether to buy a base Apple Silicon chip or spend more on a higher-tier one (Pro, Max, Ultra), which I believe mainly differs in core count.

  5. Is there a specific script that can automate these tests from the terminal?

Any advice on accurate measurement methods and useful benchmarking practices for Rust's incremental compilation would be appreciated.
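To make question 1 concrete, here is the kind of manual timing loop I have in mind — only a rough sketch: the build command, the touched file, and the run count are placeholders, and date +%s%N needs GNU date (on macOS, gdate from coreutils).

```shell
# Rough sketch of timing incremental debug rebuilds.
# Arguments are placeholders: build command, file to touch, number of runs.
bench_incremental() {
  cmd=$1     # e.g. "cargo build"
  file=$2    # e.g. src/main.rs
  runs=$3    # e.g. 5
  $cmd >/dev/null 2>&1              # warm build so later runs are incremental
  total=0
  i=0
  while [ "$i" -lt "$runs" ]; do
    touch "$file"                   # simulate a small edit
    start=$(date +%s%N)             # nanoseconds (GNU date)
    $cmd >/dev/null 2>&1
    end=$(date +%s%N)
    total=$(( total + (end - start) / 1000000 ))
    i=$(( i + 1 ))
  done
  echo "average incremental rebuild: $(( total / runs )) ms"
}

# Usage, from the project root:
#   bench_incremental "cargo build" src/main.rs 5
```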


I don't think asking people to do this manually will yield good results, because so many factors play into performance. Websites that publish user-submitted benchmark results usually have them generated by tools that also submit machine context, to make results comparable and to ensure benchmarks run reproducibly (to the extent a tool can ensure that). Factors include:

  • Operating system (even the specific version)
  • Rust version (rustc and cargo)
  • cargo lockfile of the benchmarked project
  • the exact build procedure
    • build settings in Cargo.toml and config.toml
    • warm or cold disk cache
    • exactly what kind of change gets made (just touching a file vs. applying a specific modification)
  • general hardware specs
  • filesystem being used
  • antivirus or similar monitoring/security software that might interfere with builds
  • whether background processes are running
  • free system RAM
  • libc version
  • what linker is used
  • power/thermal settings in the BIOS or the OS can matter
    • current CPU temperature (no really, this can alter benchmark results on laptops)
    • is it running on battery or AC power
  • exact git revision of the benchmarked project
  • ...

For some projects, some of these factors can make minutes of difference, as Slow compile times on Windows - #15 by afetisov demonstrates.
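To illustrate, a submission tool could at least record some of that context automatically. A minimal sketch (each toolchain command is guarded in case it's missing on the machine):

```shell
# Minimal sketch: collect machine/toolchain context to attach to a result.
report_context() {
  echo "os:     $(uname -sr)"
  echo "cores:  $(getconf _NPROCESSORS_ONLN 2>/dev/null || echo unknown)"
  echo "rustc:  $(rustc --version 2>/dev/null || echo 'not found')"
  echo "cargo:  $(cargo --version 2>/dev/null || echo 'not found')"
}
```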

Should I focus on single-core or multi-core speeds to compare CPUs?

It depends. Profile the project that's relevant to you and see whether your builds are single- or multi-threaded. It's quite possibly a mix.
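One concrete way to check is cargo build --timings (stable since Cargo 1.60), which writes an HTML report to target/cargo-timings/ showing which crate compilations overlap. For a cruder single-vs-multi-core comparison, you can time a clean build with one job versus the default; the helper below is only a sketch (the build command is whatever you pass in, and date +%s%N needs GNU date).

```shell
# Sketch: wall-clock an arbitrary command in milliseconds (GNU date for %N).
time_cmd() {
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Usage, from the project root:
#   cargo clean && t1=$(time_cmd cargo build -j 1)   # limit to one parallel job
#   cargo clean && tn=$(time_cmd cargo build)        # all cores (default)
#   echo "1 job: ${t1} ms, all jobs: ${tn} ms"
```

If the two times are close, extra cores buy little for that project's build; a large gap suggests the multi-core tiers are worth more.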
