I don't think asking people to do this manually will yield good results, because so many factors play into performance. Websites that publish user-provided benchmark results usually have them generated by tools that also submit the surrounding context, so that results are comparable and the benchmarks run in a reproducible manner (to the extent that a tool can even ensure that). Among those factors:
- Operating system (even the specific version)
- Rust version
- Cargo lockfile of the benchmarked project
- the exact build procedure
- build settings in Cargo.toml and config.toml (see the sketch after this list)
- warm or cold disk cache
- exactly what kind of change gets made (just `touch`ing a file vs. applying a specific modification)
- general hardware specs
- filesystem being used
- antivirus or similar monitoring/security software that might interfere with builds
- whether background processes are running
- free system RAM
- libc version
- what linker is used
- power/thermal settings in the BIOS or the OS can matter
- current CPU temperature (no really, this can alter benchmark results on laptops)
- whether it's running on battery or AC power
- exact git revision
- ...
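To give an idea of what the build-settings and linker points alone cover, here is a small illustrative sketch (made-up values, not recommendations; the target section assumes an x86_64 Linux host) of knobs in Cargo.toml and .cargo/config.toml that can swing compile times:

```toml
# Cargo.toml -- profile settings that directly affect build time
[profile.dev]
opt-level = 0        # higher opt levels trade compile time for runtime speed
debug = true         # emitting full debug info is slower than debug = 0
incremental = true   # incremental rebuilds behave very differently from clean builds

# .cargo/config.toml -- toolchain and parallelism settings
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]  # swapping the linker alone can change link times a lot

[build]
jobs = 8             # caps build parallelism; interacts with core count and RAM
```

Any two machines that differ in even one of these settings are effectively running different benchmarks.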
For some projects, some of these factors can make minutes of difference, as Slow compile times on Windows - #15 by afetisov demonstrates.
Should I focus on single-core or multi-core speeds to compare CPUs?
Depends. Profile the project that's relevant to you and see whether your builds are single- or multi-threaded; it's quite possibly a mix.
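Cargo's built-in timing report (available on recent stable toolchains) makes this easy to check; a minimal example:

```sh
# Run a clean build with the timing report enabled
cargo clean
cargo build --timings
# Then open target/cargo-timings/cargo-timing.html. The concurrency graph shows
# how many compilation units run in parallel over time: long stretches with a
# single unit active mean single-core speed dominates, while a consistently
# wide graph means more cores (and RAM) help.
```

Crates deep in the dependency graph and the final link step often serialize, so even heavily parallel projects usually have a single-threaded tail.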