Algebraic floating-point functions were recently added to improve performance by allowing compiler optimizations such as re-association, which in turn enables better auto-vectorization of code. The documentation notes the following:
Because of the unpredictable nature of compiler optimizations, the same inputs may produce different results even within a single program run.
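For illustration, here is a minimal sketch (my own example, not taken from the documentation) of the kind of reduction these functions are meant to speed up; with algebraic_add the compiler is free to split the loop into independent partial sums, which is what enables auto-vectorization:

#![feature(float_algebraic)]
// Sum a slice using the algebraic variant of addition. The compiler may
// re-associate this reduction into several independent partial sums and
// vectorize it, at the cost of a result that can differ from strict
// left-to-right IEEE 754 evaluation.
pub fn sum(values: &[f64]) -> f64 {
    values.iter().fold(0.0, |acc, &x| f64::algebraic_add(acc, x))
}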
Miri in turn emulates the unpredictability described in the quoted documentation by applying a small random error to each such operation. This can be seen in the playground:
#![feature(float_algebraic)]
pub fn main() {
    dbg!(f64::algebraic_add(1.0, 2.0));
}
[src/main.rs:4:5] f64::algebraic_add(1.0, 2.0) = 2.9999999999999982
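To make the "even within a single program run" part concrete, two calls with identical inputs can be compared in the same run; under Miri each operation gets an independently sampled error, so the two results will typically differ (a sketch, assuming Miri's current behavior):

#![feature(float_algebraic)]
pub fn main() {
    // Under Miri these two identical calls will typically print
    // different values, since each operation receives its own
    // independently sampled random error.
    dbg!(f64::algebraic_add(1.0, 2.0));
    dbg!(f64::algebraic_add(1.0, 2.0));
}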
I'm wondering whether the goal here is really to be unpredictable even for simple operations like addition and subtraction, whose operands and results are exactly defined by the floating-point standard. In my opinion, these methods should allow optimizations such as reordering of operations, fusing multiplies with adds, or using reciprocals, but allowing random errors on exact inputs does not seem "algebraic" at all.
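To spell out the kinds of rewrites I have in mind, here is an illustrative (hypothetical) set of examples; each transformation changes the result deterministically for given inputs, rather than injecting a random error:

// Hypothetical examples of value-changing but deterministic rewrites;
// which of these a compiler actually performs is not guaranteed.
pub fn rewrites(a: f64, b: f64, c: f64) -> (f64, f64, f64) {
    let reassociated = a + (b + c); // instead of (a + b) + c
    let fused = a.mul_add(b, c);    // a * b + c fused into one rounding
    let reciprocal = a * (1.0 / b); // instead of a / b
    (reassociated, fused, reciprocal)
}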