I have almost two decades of experience in other programming languages but am pretty new to Rust. I love it so far and am almost done with the official book. I wanted to start tinkering with WebAssembly, so I figured I'd write a simple, silly CPU-bound brute-force test in both Rust and JavaScript (implemented nearly identically) to see the performance differences. But before I went down the browser path, I figured I would do a simple CLI version of both first.
The program below is just a simple iterative discovery of the first N prime numbers (num_primes below), using a naive algorithm that tests divisibility against the previously found primes. When I timed the executions, though, the Node version came out about 2x as fast as the release-built Rust version. I would have expected similar performance, if not Rust winning. Clearly I'm doing something wrong with the Rust version, and I hope you can spot it for me. I know all the caveats about benchmarking trivial programs like this, but I'm genuinely curious what's up in this case.
use std::env;

fn main() {
    let args: Vec<String> = env::args().collect();
    let num_primes = args.get(1).unwrap().parse::<usize>().unwrap();
    let mut primes: Vec<i64> = Vec::with_capacity(num_primes);
    let mut n: i64 = 2;
    while primes.len() < num_primes {
        // n is prime if no previously found prime divides it.
        if let None = primes.iter().find(|&&p| n % p == 0) {
            primes.push(n);
        }
        n += 1;
    }
    println!("Found {} primes, and the last is = {}", primes.len(), n - 1);
}
I'm guessing there are plenty of idiomatic Rust improvements, which I would love to hear (like maybe using Iterator::any instead of find), but I'd especially appreciate any that may be impacting performance!
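[Editor's note: for reference, a minimal sketch of the Iterator::any variant mentioned above; same naive algorithm and same output, just expressed more directly. The expect messages are illustrative additions, not from the original post.]

use std::env;

fn main() {
    let args: Vec<String> = env::args().collect();
    // Same argument handling as the original, with slightly clearer panics.
    let num_primes: usize = args.get(1).expect("missing count").parse().expect("invalid count");
    let mut primes: Vec<i64> = Vec::with_capacity(num_primes);
    let mut n: i64 = 2;
    while primes.len() < num_primes {
        // `any` short-circuits exactly like `find`, so this is the same
        // algorithm, only without the `if let None` dance.
        if !primes.iter().any(|&p| n % p == 0) {
            primes.push(n);
        }
        n += 1;
    }
    println!("Found {} primes, and the last is = {}", primes.len(), n - 1);
}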
I should also note that there are no significant timing deviations depending on which program runs first, or when they run a few times in a row. Lastly, this was tested on Debian 10, on a Lenovo ThinkPad T460s (Intel Core i5-6200U, 16 GB RAM, 128 GB SSD). Thanks!
The JavaScript mod (%) operator coerces its operands to 32-bit integers. If you change your Rust code to use i32 instead of i64, then it will match the JS behavior and performance.
Although it seems I was mistaken about JavaScript. Some docs I found on the web say that % truncates its operands to 32 bits, but this is wrong: the ECMAScript standard does not actually say this, and my testing shows that % works on all JavaScript numbers (i.e., 64-bit floats).
(Bitwise operators like & in JavaScript do truncate to 32 bits.)
$ node
Welcome to Node.js v13.6.0.
Type ".help" for more information.
> 3.14156 % 3
0.14156000000000013
>
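[Editor's note: for what it's worth, Rust's % on f64 is also a true floating-point remainder; a one-line check that mirrors the Node session above.]

fn main() {
    // Rust's % on floats computes the floating-point remainder,
    // matching the Node output above.
    println!("{}", 3.14156_f64 % 3.0); // prints 0.14156000000000013
}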
However, it is my understanding from reading about how JS engines work nowadays that they are smart enough to JIT the code using integer operations, until they find that a number becomes non-integer, at which point they discard that optimization.
I think in JavaScript all Numbers are 64-bit floats (IEEE 754). There are no integers. But, as ZiCog says, JS engines will optimize hot code as much as they can, which presumably means sometimes turning JavaScript floats into CPU integers behind your back.
For comparison, converting the Rust version to f64 makes it slightly faster than the i64 version on my machine, but still much slower than the JS and i32 versions:
$ time node primes.js 50000
Found 50000 primes, and last is 611953
node primes.js 50000 4.70s user 0.01s system 99% cpu 4.725 total
$ time ./primes_i64 50000
Found 50000 primes, and the last is = 611953
./primes_i64 50000 7.93s user 0.01s system 97% cpu 8.129 total
$ time ./primes_f64 50000
Found 50000 primes, and the last is = 611953
./primes_f64 50000 7.23s user 0.01s system 98% cpu 7.352 total
$ time ./primes_i32 50000
Found 50000 primes, and the last is = 611953
./primes_i32 50000 2.58s user 0.00s system 95% cpu 2.709 total
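[Editor's note: the source for the variants isn't shown in the thread; a minimal sketch of the i32 version, assuming it differs from the original only in the numeric type. The comment on division cost is an editorial reading of the numbers.]

use std::env;

fn main() {
    let args: Vec<String> = env::args().collect();
    let num_primes = args.get(1).unwrap().parse::<usize>().unwrap();
    // Only the element type changes relative to the original. 32-bit
    // integer division is substantially cheaper than 64-bit division
    // on x86-64, which plausibly accounts for the gap in the timings above.
    let mut primes: Vec<i32> = Vec::with_capacity(num_primes);
    let mut n: i32 = 2;
    while primes.len() < num_primes {
        if let None = primes.iter().find(|&&p| n % p == 0) {
            primes.push(n);
        }
        n += 1;
    }
    println!("Found {} primes, and the last is = {}", primes.len(), n - 1);
}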
Thank you for the follow-up clarification and tests, @mbrubeck, and for the additional ideas from the others. I think V8's under-the-covers (possibly automagic, asm.js-like) float-to-int optimization sounds most likely. I am both glad to know about (and slightly shocked by) the magnitude of the speedup from i64 to i32 in Rust. I remember the Rust docs stating that i32 was picked as the default integer type for performance reasons, even on modern 64-bit architectures, but I would not have guessed that much of a speedup was to be had. Thanks!
And happy to be a new member of this community. Thanks again!