How do you write a code example for a function that returns f32?

Asserting an exact result seems like a poor approach to me. An example whose output later changes in an update would be fine, but an assert in documentation looks like it is demonstrating a property that users can rely on: a guarantee.

Using error margins that are very loose rather than too strict seems like the way to go for such asserts in doc-tests.
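As a sketch of what that looks like (the function name and tolerance here are invented for illustration), the doc-test asserts a loose margin instead of a bit-exact value:

```rust
// Hypothetical function under test; the name and the 1e-3 tolerance
// are assumptions for this example, not from any real crate.
fn fast_inv_sqrt(x: f32) -> f32 {
    1.0 / x.sqrt()
}

fn main() {
    let y = fast_inv_sqrt(4.0);
    // Assert "roughly 0.5" with a deliberately loose margin, rather than
    // pinning the exact bit pattern the current implementation returns.
    assert!((y - 0.5).abs() < 1e-3);
}
```

The same `assert!((y - expected).abs() < margin)` shape works unchanged inside a `///` doc comment.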


See chapter 11 ("Reproducible floating-point results") of IEEE 754-2008 for full details. The short answer is that for addition, subtraction, multiplication, division (and a few other operations) on f32 and f64, you'll get reproducible results on all hardware, subject to rounding mode and a few other details.
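For instance, arithmetic on values that are exactly representable in binary is exact, so there is no rounding to vary between implementations, and an exact assert is safe:

```rust
fn main() {
    // 0.25, 0.5 and 0.75 are all exactly representable in binary
    // floating point, so this addition rounds nowhere and gives the
    // same bits on any conforming IEEE 754 implementation.
    assert_eq!(0.25f32 + 0.5f32, 0.75f32);

    // Division by a power of two only shifts the exponent, so it is
    // likewise exact (for in-range values).
    assert_eq!(1.0f64 / 8.0, 0.125);
}
```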

Additionally, if your IEEE 754 implementation exposes the "inexact" status flag/exception to you in some form, then any result that doesn't raise "inexact" is reproducible on every implementation that also doesn't raise "inexact" for that computation. Note, however, that some reproducible results will still raise "inexact": 1.0 / 3.0 must raise it (there is no exact value in fixed-size binary floating point), yet the operation is defined to be reproducible. Conversely, it's permissible to implement functions like sinPi such that they always raise "inexact", even when computing values that are actually exact (sinPi(0.5) is exactly 1.0 by definition, but an implementation is allowed to raise "inexact" for that computation nonetheless).

Unless you've read and understood IEEE 754, though, these rules are relatively arcane and hard to get your head around. It is simpler (and often safer) to act as if IEEE 754 never guarantees reproducibility, not least because you then get things right when dealing with real-world inputs (where you're as likely to receive 0.499996185302734375 as 0.5 for your quarter-turn, thanks to analogue imprecision).
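A tolerant comparison handles that quarter-turn case directly (the tolerance of 1e-4 is an assumption chosen for the example, not a standard value):

```rust
fn main() {
    // A nominal quarter-turn of 0.5 arriving from an analogue source:
    let measured: f64 = 0.499996185302734375;

    // An exact comparison against 0.5 fails for this input...
    assert!(measured != 0.5);

    // ...so compare within a tolerance appropriate to the input's
    // real-world precision instead.
    assert!((measured - 0.5).abs() < 1e-4);
}
```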

Indeed, and in that case your documentation asserts should also reflect that. If the API is such that a function can return anything between 1000 and 1005, you should write:

assert!(x >= 1000 && x <= 1005);

rather than

assert!(x == 1003); // that's what it happens to currently return

I was talking about an algorithm changing (in the library or in a dependency), rather than the CPU changing.

If the code doesn't change then you'll get the same results. But the point is that any of the code may change without breaking semver compatibility, and when it does you will likely start getting slightly different results, in which case you don't want all your asserts to start failing. f64::sin may start giving slightly different results; matrix multiplication may too; even changing (a + b) + c to a + (b + c) will. You usually don't want to tie yourself to a specific 100% exact result in floating point.
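The re-association point is easy to demonstrate with the classic 0.1/0.2/0.3 example: the two groupings round differently, so an exact assert on one grouping breaks the moment the code is refactored to the other, while a range check survives.

```rust
fn main() {
    let (a, b, c) = (0.1f64, 0.2f64, 0.3f64);

    // Re-associating the sum changes where rounding happens,
    // and therefore changes the result.
    assert!((a + b) + c != a + (b + c));

    // A range check passes for either grouping; an exact
    // assert_eq! against one grouping would not.
    let x = (a + b) + c;
    assert!((x - 0.6).abs() < 1e-12);
}
```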