Why is Rust's answer more precise than C's for a simple (int cast to float) division?

The C code:

#include <stdio.h>

#define FLOAT_DIV(a, b) ((float)(a) / (float)(b))

int main(void)
{
    printf("%f", FLOAT_DIV(10023, 100));

    return 0;
}


Rust code:

pub fn main() {
    println!("{:?}", (10023 as f64) / (100 as f64));
}
Rust's output is more precise, whereas C's is not. Why?

Try using double instead of float or f32 instead of f64 - then the results should be identical.


Thanks! Tried your suggestion. Rust is consistent between f32 and f64 and returns 100.23 in both cases. C with double returns 100.230000, which, while technically correct, has unwanted trailing zeros.

To summarize, there is a difference regardless of the 32- or 64-bit width used:

Language   32-bit FP    64-bit FP
Rust       100.23       100.23
C          100.230003   100.230000

I suspect it's just a difference in how they choose the default formatting precision, because 100.23 can't be perfectly represented in any binary floating point. Try formatting with something like {:0.10} to see more digits in Rust.


For {:0.20}, I get this output: (playground)

f32: 100.23000335693359375000
f64: 100.23000000000000397904

The closest f32 to 100.23 is 1 × 2⁶ × 13137347⁄8388608 = 100.23000335693359375. Rust's Debug formatter for floats tries to find a short representation that parses back to that exact float, which happens to be 100.23 (anything between 100.229999543... and 100.23000717... would be valid output). Your C library seems to be using the simpler algorithm: 9 significant decimal digits are always enough to recover the original float.

Check out https://evanw.github.io/float-toy/ to play around with what floats actually exist.

Technically, the C one is more precise while both are equally accurate.


Check out this for a very good explanation of what's involved in printing floats.


puts on C hat

Simpler even than that. The %f format specifier always prints numbers to the 6th place after the decimal point (unless you specify a different precision), even values that are smaller than 0.000001. (This is the difference between %f and %g -- the precision of %f specifies how many digits to print after the decimal point, but the precision of %g counts actually significant figures. AFAIK Rust's format macros don't offer a way to count significant figures.)

printf(" 0.0000001 = %f\n", 0.0000001);
printf("-0.0000001 = %f\n", -0.0000001);


 0.0000001 = 0.000000
-0.0000001 = -0.000000

Also, there's no way to actually pass a float to printf -- variadic functions can't accept float arguments (or integers smaller than int) because the extra arguments are subject to the so-called "default argument promotions". In @osrust's original code, 10023.0f / 100.0f is calculated as a float, but gets widened to double before being passed to printf, so even if C were using the same shortest-representation formatter as Rust does, it wouldn't be able to print 100.23, because it wouldn't know that the extra digits weren't meaningful. It would need to represent the closest double to 100.23f, which is 100.2300033569336.

removes C hat