`f32::round() as i64` guarantees

Coming from Kotlin, where we have roundToInt and roundToLong. Those functions are self-explanatory, and it's easy to understand what they do.

But in Rust I have many questions. All comments online just suggest using {float}.round() as {int}, but... what are the guarantees of the round function itself? Unlike in Kotlin, I still get a float value in return, which might have precision errors.

This is what docs have to say:

Returns the nearest integer to self. If a value is half-way between two integers, round away from 0.0.

But what does "nearest" actually mean?

Can there be a case where:

  • there is some float value (xxx).9
  • the two nearest representable values around the rounded result are (xxx).99999 and (xxx+1).00002 (in theory this could happen for big values, like >2^24 for f32 or >2^53 for f64)
  • round chooses (xxx).99999 because it is technically closer to (xxx+1)
  • casting to int then just truncates the value back to (xxx)

In theory what should be done is:

fn round(value: f32) -> i64 {
    if value > 0.0 {
        (value + 0.5) as i64
    } else {
        (value - 0.5) as i64
    }
}

Or am I just missing something here and this in fact is guaranteed to never happen?

IDK, maybe I'm missing something, but:

What does that mean, and how could that happen? For “big values” anything with decimals is unrepresentable in f32 or f64. After 16777216 you would have 16777218, with 16777217 skipped; after 33554432 you have 33554436, and so on. All possible “big numbers” are integers, so there is no ambiguity. The only question is what happens when a number is so big it's not representable as i32 or i64 at all, but Rust nicely sidesteps that issue by returning f32 or f64.
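To see the spacing concretely, here's a quick check (a minimal sketch; the values are standard IEEE 754 binary32 facts, which Rust's f32 follows):

fn main() {
    // Above 2^24, consecutive f32 values are 2 apart: 16777217 does not exist.
    let a = 16_777_216.0_f32;
    assert_eq!(a + 1.0, a); // rounds back down; 16777217 is not representable
    assert_eq!(a + 2.0, 16_777_218.0); // the next representable value
    // Above 2^25, the spacing doubles again to 4.
    let b = 33_554_432.0_f32;
    assert_eq!(b + 1.0, b);
    assert_eq!(b + 4.0, 33_554_436.0);
}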

What numbers are you talking about, and how are they relevant to the question?


It's never the case that a non-integer value is representable as a float while its floored or ceiled version is not, so the second point is never true.
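A small illustration of this at the boundary (a sketch; 2^23 - 0.5 is the largest f32 with a fractional part, since at and above 2^23 every representable f32 is an integer):

fn main() {
    let x = 8_388_607.5_f32; // 2^23 - 0.5, exactly representable
    assert_eq!(x.floor(), 8_388_607.0); // floor is exactly representable
    assert_eq!(x.ceil(), 8_388_608.0);  // ceil is exactly representable too
    assert_eq!(x.round() as i64, 8_388_608); // ties round away from zero
}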


Writing floats as decimals is obfuscating what's going on here, and working in binary will help explain why this isn't possible.

First, we are only interested in normalized floats; denormals always round to one of the zero values, so we don't care about them here.

While the standard representation of floats deals with them as ±1.mantissa * 2^exp, there is an equivalent representation of the form ±integer * 2^(exp - offset). This representation is a simple shift of the 1.mantissa value, with a corresponding change to the exponent to match, and is lossless as a result.

For a number in this representation to have non-zero digits after the point (such as being (xxx).9), (exp - offset) must be negative; if it's positive or zero, then we're multiplying two integers to get the value of the float, and thus cannot have a fractional part. But the only cases where we cannot represent the next greater and next lesser integer values precisely happen when (exp - offset) is positive; that is the case where we're multiplying the integer we have by 2 or more, and thus skip integer values. Put these two statements together: a float with a fractional part has a negative (exp - offset), and in that case both neighbouring integers are representable, so any fractional part can be precisely rounded.
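To make that decomposition tangible, here's a sketch that splits a normal f32 into the ±integer * 2^(exp - offset) form using the raw bit layout (decompose is a hypothetical helper written for this post, not a std API; denormals are excluded as above):

fn decompose(x: f32) -> (i32, u32, i32) {
    let bits = x.to_bits();
    let sign = if bits >> 31 == 0 { 1 } else { -1 };
    let biased_exp = ((bits >> 23) & 0xff) as i32;
    assert!(biased_exp != 0, "denormals are out of scope here");
    // Make the implicit leading 1 explicit: a 24-bit integer significand.
    let integer = (1_u32 << 23) | (bits & 0x7f_ffff);
    // The (exp - offset) from the text: unbias, then shift past the 23 mantissa bits.
    let exp = biased_exp - 127 - 23;
    (sign, integer, exp)
}

fn main() {
    for x in [0.9_f32, 6.5, 16_777_218.0] {
        let (s, m, e) = decompose(x);
        // e < 0 is the only way to get a fractional part; e > 0 is the only
        // way for neighbouring integers to be skipped. Never both at once.
        println!("{x} = {s} * {m} * 2^{e}");
    }
}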


Thanks. Your explanation is quite thorough.
BTW, who chooses how to represent floats: Rust or the hardware?


I don't know what you mean by representing floats, but floats are standardized, and most platforms have hardware support for floating point operations (exceptions can be found in the embedded sector). Other than that, Rust's primitive floating point types are built around the LLVM types for f32 and f64, AFAIK.


Floating point representation is dictated by hardware, which these days generally means IEEE standard floating point numbers.

Hardware dictates what we can use, but all hardware that matters is compliant with IEEE 754 representations for floating point numbers, and Rust assumes IEEE 754 operations + representations.

If this is an area that interests you, you'll want to look for suitable learning materials on the topic of "numerical analysis" with reference to IEEE 754. One of the goals of IEEE 754 is that all of its operations are amenable to numerical analysis, letting you determine the error bounds of every algorithm that uses them. What I did in my explanation was a very simple form of numerical analysis, just looking at round and determining its error bounds, but you can apply the field to anything that uses floats to determine the worst-case error.


I do think it would be nice to have a to_nearest_int or similar on floats that actually gives an integer directly, so people don't need to think about this stuff. Especially since round is kinda bad -- round_ties_even is the better-behaved method.

(Today there's as and https://doc.rust-lang.org/std/primitive.f32.html#method.to_int_unchecked, but those aren't what I want here.)
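For a concrete feel for the difference (a sketch; round_ties_even is stable since Rust 1.77, and as-casts have saturated on overflow since 1.45):

fn main() {
    assert_eq!(2.5_f32.round(), 3.0);           // ties round away from zero
    assert_eq!((-2.5_f32).round(), -3.0);
    assert_eq!(2.5_f32.round_ties_even(), 2.0); // ties round to the even neighbour
    // `as` saturates on overflow and maps NaN to zero instead of panicking.
    assert_eq!(f32::MAX as i64, i64::MAX);
    assert_eq!(f32::NAN as i64, 0);
}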
