For those that have access to both an M1 and a Xeon: is the M1 faster than a server-grade Xeon at Rust compile times? Not looking to start a flame war, just wondering if there is a way to shave 50% off of compile time.
If the only objective is a shorter compile time, then you are looking for the best raw-CPU-performance-per-cost ratio. In that sense, a desktop PC is the clear winner.
The reason is that notebook CPUs have lower power and thermal thresholds, which drastically limits their overall performance compared with CPUs that can use bigger and better cooling, such as active air cooling or liquid cooling.
For a more objective comparison, CPU benchmarks are used. Here's a website that ranks all CPUs by their raw performance:
You can see that Apple's 10-core M1 Pro scores 23,652 points in the benchmark (the M1 Max is just below it, and the 8-core part gets 15,121).
A modern and decent CPU is AMD's Ryzen 9 5950X, which currently costs around $700 USD and scores 46,180 points, almost twice the 10-core M1 Pro.
We could go even higher in the chart, but the 5950X is where the best bang for the buck is for desktop PCs. Workstation-grade CPUs (like AMD's Threadripper) and server-grade CPUs (like Intel's Xeon or AMD's EPYC) are in a whole different league, with AMD's EPYC 7443P scoring 58,047 points at a suggested retail price of 1337 USD.
I guess you get the point. If you only care about raw performance, notebook CPUs do a very poor job.
A minor observation: my laptop seems to compile (and run) Rust programs a great deal faster with the power cable plugged in than on battery (and this is not just when the battery is low). Sorry I cannot help with the actual question.
That's a good point to make, as it can easily be forgotten in the heat of designing/writing code on a laptop...
The OS power scheme and the related underclocking can be changed in the OS (called e.g. "power mode" in Windows 10) to disable this power saving for maximum performance, at the cost of decreased battery life when not connected to mains power, of course.
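On Linux the equivalent knob is the CPU frequency governor. A minimal sketch for checking it from the shell (the sysfs path is standard on Linux and simply absent elsewhere; switching governors needs root and the `cpupower` tool):

```shell
# Check the active CPU frequency governor (Linux sysfs; not present on macOS/Windows).
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
    cat "$gov_file"          # e.g. "powersave" or "performance"
else
    echo "governor info not available on this system"
fi
# To switch to the performance governor (requires root and cpupower):
#   sudo cpupower frequency-set -g performance
```

A laptop on battery will often sit in "powersave", which is one concrete reason builds slow down when unplugged.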
Sorry, are you assuming here that the PassMark workload is representative of a Rust compile load? I have neither evidence to support this nor evidence to refute it. Genuinely curious, as there are anecdotes of people switching to the M1 and getting faster compile times. (It is entirely possible they were not using a desktop/server at the same price point as the M1.)
I'm not assuming; that's what CPU benchmarks are meant for: measuring CPU performance. Afaik, compilation time benefits a lot from having multiple CPU cores, but only up to a point. For some tasks, single-thread performance is what matters.
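For context: cargo parallelizes across crates and, within a crate, rustc can parallelize across codegen units, so core count helps until the dependency graph serializes. A sketch of the relevant knobs (the values shown are illustrative defaults, not a recommendation; tune for your machine):

```toml
# .cargo/config.toml -- controls cross-crate parallelism
[build]
jobs = 16                # crates compiled in parallel; defaults to the logical CPU count

# Cargo.toml of your project -- controls intra-crate parallelism
[profile.release]
codegen-units = 16       # more units = more parallel codegen, slightly less optimization
```

This is also why the final leaf crate of a big project often compiles on one core no matter how many you have.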
As for the anecdotes you mentioned: unless you have some evidence, to me it sounds like magical thinking. A CPU won't perform miraculously differently than in the industry-standard benchmarks just because you close your eyes and wish strongly for it.
I would expect disk performance to be a factor as well, which wouldn't get captured by CPU benchmarks. I would think it's a bigger problem for spinning hard drives, where random access times are very poor compared to SSDs and NVMe drives, though there's also quite a bit of variance in disk speed among SSDs and NVMe drives. In that regard, I believe the M1 Macs have very good disk speeds, which might end up benefiting certain build scenarios. Another factor in the same vein is available RAM for disk caching, which can bypass some of the disk access altogether.

As with everything benchmark-related, I think the real answer is: it's complicated. But if I had to guess, assuming you're on a modern platform with NVMe drives, I would think the CPU benchmarks are a good proxy for compilation time.
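The RAM-for-disk-caching effect is easy to observe directly. A minimal sketch in any POSIX shell (file size and paths are arbitrary; on a warm cache the second read typically never touches the disk):

```shell
# Create a ~50 MB scratch file, then read it twice and time both reads.
f=$(mktemp)
head -c 50000000 /dev/zero > "$f"
time cat "$f" > /dev/null   # first read: may have to go to the disk
time cat "$f" > /dev/null   # second read: usually served from the OS page cache
rm -f "$f"
```

The same mechanism applies to a build: source files and crate metadata read during a previous compile are often still in RAM, which is part of why back-to-back builds feel faster than cold ones.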
As someone who spent a decade of my career designing machines to do well on such benchmarks, I'm skeptical of their value. We would very often add or optimize an instruction to get a better number on some industry-standard benchmark, whether or not that change appreciably affected performance on real applications. For example, FMIN/FMAX made it into the Itanium because they boosted SPEC performance by 6%. Performance of real applications depends more on cache behavior, memory bandwidth, and I/O speeds.
That's why, in the other thread, I used clang itself as the benchmark. If you're looking at a release build that spends substantial time in LLVM, that should be a reasonably representative workload -- it's also reading a bunch of files, writing a bunch of temporary ones, running a linker, etc.