The following code yields different results on Linux and macOS:
fn main() {
    let rotation = 0.698_131_700_797_731_8_f64;
    let rotation_cos = rotation.cos();
    dbg!(rotation_cos);
}
I get the same per-platform results with equivalent C++ (clang):
#include <iostream>
#include <iomanip>
#include <cmath>

using namespace std;

auto main() -> int {
    auto constexpr ROTATION = 0.6981317007977318;
    auto const rotation_cos = cos(ROTATION);
    cout << setprecision(16) << ROTATION << endl;
    cout << setprecision(16) << rotation_cos << endl;
    return 0;
}
On Linux (in the Rust Playground) this outputs:
[src/main.rs:33] rotation_cos = 0.766044443118978
On macOS this outputs:
[src/main.rs:33] rotation_cos = 0.7660444431189781
According to this post, this might be due to hardware instructions for cos that are available only on some platforms:
Your target hardware can have instructions computing "non-trivial" expressions beyond a*b or a+b, such as a+=b*c or sin(x). The precision of the intermediate result b*c in a+=b*c may be higher than the size of an FP register would allow, had that result been actually stored in a register. …, because it happens in some build modes but not others, and across platforms the availability of these instructions varies, as does their precision.
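For scale: re-parsing the two printed decimals suggests they are adjacent f64 values, i.e. exactly one unit in the last place (ULP) apart. This is an inference from the outputs above, not something stated in the quoted post; a quick check:

```rust
fn main() {
    // The values printed on Linux and macOS, re-parsed as f64 literals.
    let linux_result = 0.766044443118978_f64;
    let macos_result = 0.7660444431189781_f64;
    // For positive floats, IEEE-754 bit patterns order the same way as the
    // values they encode, so adjacent floats have bit patterns differing by 1.
    let ulp_distance = macos_result.to_bits() - linux_result.to_bits();
    println!("{ulp_distance}");
}
```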
Is there something I can do to get the same results on both platforms?
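Two approaches I have seen suggested, for context: if exact bitwise reproducibility is required, avoid the platform's libm entirely and use a pure-software implementation (the third-party libm crate, a Rust port of musl's math routines, is often mentioned for this); if a last-bit discrepancy is acceptable, make comparisons ULP-tolerant instead of exact. A minimal sketch of the latter, where ulps_apart is a hypothetical helper name (not a std function):

```rust
/// Hypothetical helper: number of representable f64 values between a and b.
fn ulps_apart(a: f64, b: f64) -> u64 {
    // Map an IEEE-754 bit pattern to an integer that orders the same way
    // as the float it encodes (negative floats map below positive ones).
    fn ordered(x: f64) -> i64 {
        let bits = x.to_bits() as i64;
        if bits < 0 { i64::MIN - bits } else { bits }
    }
    ordered(a).abs_diff(ordered(b))
}

fn main() {
    let linux_result = 0.766044443118978_f64;
    let macos_result = 0.7660444431189781_f64;
    // Accept results that differ by at most one ULP across platforms.
    assert!(ulps_apart(linux_result, macos_result) <= 1);
    println!("within tolerance");
}
```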