Hi all, I'm tracking down some nasty platform-specific non-determinism in a numerics-heavy application, and I've traced a root cause to a disagreement between Linux and macOS over f64::ln_1p:

Succeeds on Linux, and fails on macOS (macOS computes 2.8429316976499686). Looking at the source and Godbolt, I've found that f64::ln_1p is implemented in terms of log1p, which is provided by libm. Is this considered a rustc bug? Any pointers on what I should do next to get a consistent implementation?

No, generally speaking, we'll call into the system libm implementation. It may be considered a bug in one of them, but it's hard to say.

Any pointers on what I should do next to get a consistent implementation?

Use https://crates.io/crates/libm. This is a pure Rust implementation of libm, which we use on platforms without one. There's no (easy) way to force the stdlib to use this, but it's usable as an external crate.

Remember that for anything but the absolute basics (+-*/), you can't really expect the ±½ULP accuracy needed for portable results. (See the discussion in Constant trigonometry - #6 by scottmcm for more.) It's common for library implementations of things to only have ±1ULP accuracy, and thus for different implementations to end up with different values.

What's correct? Well, the closest f64 to 16.166017412408575 is exactly 16.166017412408574926985238562338054180145263671875. The natural logarithm of that-plus-1 is 2.8429316976499687743727279781598348781222396474891320546419869157…, the closest f64 to which is 2.842931697649968558749833391630090773105621337890625, with 2.8429316976499686 close enough to round-trip.

So it looks to me like macOS is actually correct here!

But that closest value needed to round down. It'd be within ±1ULP to round up instead, to exactly 2.84293169764996900283904324169270694255828857421875, for which 2.842931697649969 is enough to round-trip.

And what do you know, that's what you're getting on Linux. So it's probably not something its libm maintainers would consider a bug.

In my research group, there are several people doing their PhDs on getting bit-reproducible results for floating-point computations. (Sadly, they are all working in C++.) I would say just don't bother – choose a larger epsilon for your tests and move on.

There exist libraries for correctly-rounded floating-point arithmetic. The most prominent one is probably GNU MPFR, which can be accessed via rug::Float.

/*
[dependencies.rug]
version = "1"
default-features = false
features = ["float"]
*/

use rug::Float;

fn ln_1p_exact(x: f64) -> f64 {
    Float::with_val(53, x).ln_1p().to_f64()
}

fn main() {
    println!("{}", ln_1p_exact(16.166017412408575_f64));
}

This returns the correct value 2.8429316976499686. Of course, the performance is probably far below what an inaccurate-but-portable libm can achieve.

It does appear that the suggestion to use the crate version of libm is enough to get reproducibility across Linux and macOS, at least in my tests. For my use case I'm more concerned about reproducibility than ultimate precision. It seems like I may be missing something, though: are there architecture-specific concerns in play here, or some other reason this is a research-level endeavor?

I see, thanks. I'm fortunate to have a use case with a much smaller scope than the Gaffer link you posted; we only have a handful of platforms we need to support deterministically. Thanks all!