Why does the result of an f32 floating-point operation not meet my expectation?

This is my code:

struct Point {
    x: f32,
    y: f32,
}
struct Rectangle {
    top_left: Point,
    bottom_right: Point,
}

fn rect_area(input: Rectangle) -> f32 {
    let length = if input.top_left.y - input.bottom_right.y >= 0.0 { input.top_left.y - input.bottom_right.y } else { input.bottom_right.y - input.top_left.y };
    let x_len = if input.top_left.x - input.bottom_right.x >= 0.0 { input.top_left.x - input.bottom_right.x } else { input.bottom_right.x - input.top_left.x };
    length * x_len
}

fn square(input: Point, le: f32) -> Rectangle {
    Rectangle {
        top_left: Point { x: input.x, y: input.y },
        bottom_right: Point { x: input.x + le, y: input.y + le },
    }
}

fn main() {
    let a = square(Point { x: 1.0, y: 1.0 }, 1.1);
    println!("{}", rect_area(a) == 1.21);
}

The rect_area function returns 1.2099998, but I think it should return 1.21. This seems like a small problem, but I'd like to know why it happens, because I thought Rust should be able to perform this operation correctly and represent 1.21.

For example:

    println!("{}",1.1f32*1.1f32);#1.21

The result of this code is correct.

Welcome to the world of floating-point accuracy problems. This has nothing to do with Rust; it comes from the fact that a 32-bit floating-point number isn't very precise. The most famous floating-point accuracy error is 0.2 + 0.1:

fn main() {
    println!("{}", 0.2 + 0.1);
}

gives you 0.30000000000000004. This comes from how floating-point numbers are represented in hardware: you simply cannot represent arbitrary real numbers exactly with 32 or 64 bits.

For a single operation the error is usually negligible, but if you do lots and lots of floating-point arithmetic, these errors add up. The order of operations can also affect the result.
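For example, here's a small sketch showing that merely regrouping the same three additions changes the result (the exact digits assume standard IEEE-754 doubles):

fn main() {
    // f64 addition is not associative: regrouping the same three values
    // gives two different results.
    let left = (0.1_f64 + 0.2) + 0.3;
    let right = 0.1_f64 + (0.2 + 0.3);
    println!("{} vs {}", left, right); // 0.6000000000000001 vs 0.6
    println!("{}", left == right);     // false
}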


Edit: also note that you shouldn't compare floating-point numbers with ==, for exactly this accuracy reason. You should use the machine epsilon instead:

fn main() {
    assert!((0.2_f32 + 0.1 - 0.3).abs() <= f32::EPSILON);
}
3 Likes

Thank you for your answer! I finally realized that it was the subtraction 2.1 - 1.0 that was causing the problem; I hadn't noticed that at first.
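For anyone curious, this is the check that made it click for me (the printed values are what I'd expect from a standard IEEE-754 f32):

fn main() {
    // 2.1 cannot be stored exactly in an f32, so the subtraction is already
    // slightly off before the multiplication ever happens.
    let side = 2.1_f32 - 1.0;
    println!("{}", side);        // 1.0999999 rather than 1.1
    println!("{}", side * side); // 1.2099998
}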

1 Like

That's not a correct use of epsilon. f32::EPSILON is the difference between 1.0 and the next representable f32, so it's about 4 times bigger than the rounding error here. If you were dealing with numbers larger than 1.0, it would be smaller than any possible rounding error. It only appears to work here because the numbers are "relatively close" to 1.0.

So, if you want to compare for errors of an ulp, you need to normalize first. But even so, the results may be surprising. In general the correct way to compare two floating-point numbers is application dependent. There is no general comparison method that gives sensible results for every application, so it's hard to say whether you're doing it right. But if you're comparing arbitrarily sized errors against machine epsilon, you're definitely doing it wrong.

See The Floating-Point Guide - Comparison (note that machine epsilon is not magic; it's just an ulp of 1.0, so when used in this way it is just an absolute error margin like 0.00001 or any other arbitrary cutoff). (But as the text says, even the complex nearlyEqual function on that page would be wrong for many applications. You have to apply your brain to know what kind of comparison is correct, not just copy something that sounds smart from a website.)
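To make that concrete, here is one possible sketch of a relative comparison in the spirit of that page. The 4-ulp scale factor is an arbitrary assumption for this example, not a recommendation; whether it (or any fixed cutoff) is appropriate depends entirely on the application:

// Scale the tolerance by the magnitude of the inputs instead of using
// f32::EPSILON as an absolute cutoff. The factor 4 is arbitrary.
fn nearly_equal(a: f32, b: f32) -> bool {
    let diff = (a - b).abs();
    let largest = a.abs().max(b.abs());
    diff <= largest * 4.0 * f32::EPSILON
}

fn main() {
    let area = 1.0999999_f32 * 1.0999999;
    println!("{}", area == 1.21);             // false
    println!("{}", nearly_equal(area, 1.21)); // true
}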

8 Likes

I stand corrected. Indeed, I took the suggested solution from the Clippy lint float_cmp as correct, without questioning it.

Oof, I didn't know Clippy suggests that. It seems like an oversight.

If you want exact mathematics for decimals, then consider using the fraction crate.

1 Like

It's good to poke at this expectation to help you understand what's going on.

f32 is a 32-bit type, so, fundamentally, it can represent at most 2³² values exactly -- there's an uncountable number of real numbers it cannot represent, and still a countably infinite number of rational numbers it can't represent either. So in general, unless a number is special, you should expect that f32 can't represent it exactly, and you should be prepared to get a number that's close instead. And indeed, the number you get is very close -- within about 10⁻⁷ at that magnitude, for an f32.

It's also a binary floating-point number. That means it works by storing the number as an integer times an integer power of two, so even if it were infinitely wide, it still couldn't store 1.21 exactly. 1.21 ≡ 121/100, or if we prime factorize it, 11² · 2⁻² · 5⁻². That negative power on a factor other than two means the number is fundamentally inexact in any binary floating-point type -- f64 will be closer, but still not exact, and even if Rust had an f1024 it still wouldn't be exact.
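You can see this by asking for more digits than the default display prints (the trailing digits below assume IEEE-754 and are only approximate):

fn main() {
    // The default Display prints the shortest string that round-trips,
    // which hides the error; forcing more digits shows the stored value.
    println!("{:.10}", 1.21_f32); // roughly 1.2100000381
    println!("{:.20}", 1.21_f64); // roughly 1.20999999999999996447
}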

So you basically have two options:

  1. Accept that low relative error is good enough.
  2. Switch to something different (perhaps a rational number type, perhaps a decimal floating-point type, perhaps …) that can represent 1.21 exactly (see the sketch below).
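For instance, here is a minimal sketch of option 2 without pulling in a crate, keeping lengths as integer hundredths so every step is exact (the scaling scheme is just an illustration):

fn main() {
    // Work in integer hundredths: 1.00 -> 100, 2.10 -> 210.
    let side_hundredths: i64 = 210 - 100;
    // Multiplying two hundredths gives ten-thousandths, so 12100 means 1.2100.
    let area_ten_thousandths = side_hundredths * side_hundredths;
    println!("{}", area_ten_thousandths == 12100); // true: exactly 1.21
}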
8 Likes

If you really want to know the gory details of why it cannot, this is a good read: "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

That is probably too much technical detail for most programmers most of the time, but it's good to skim through it and see what goes on with floats.

3 Likes
