Losing bits on floats

There have already been topics about the FPU on this forum, but they don't explain this.

Based on gdb's info float command, the float precision is 80 bits, because the x86-64 FPU uses the f80 format. But the widest float in Rust is f64; C, for instance, has long double.

pub fn main() {
    assert_eq!(3.141592653589793239, // 80-bit literal
               3.141592653589793);   // 64-bit literal
    println!("true");
}
rustc test.rs
./test
true

I think there is no pure-Rust solution; using rug::Float would mean binding to C.
Is it possible to use the x86-64 FPU's float?

I don't think there is much support for x87 instructions that don't truncate the extended floating point back to single or double precision beyond inline assembly at the moment. I could be wrong though. I found an open RFC PR for f80:

Can you tell me if the following statement is correct?

On x86_64, the FPU isn't used—SSE2/AVX are used instead, which are strictly 64-bit.

I was not aware of that, and have some doubts.

[EDIT]

Actually, after some searching on Google and Stack Overflow, I get the feeling that 80-bit floats are not that relevant any more on modern 64-bit x86 hardware. And most architectures other than x86 do not support 80-bit floats at all.

I'm not so doubtful, but I don't know enough about rustc and LLVM to give you a definitive answer. SSE2 is over twenty years old, though; I don't think there are many processors still in use (probably none) that support x87 but not SSE2.

Actually, I wonder what they need 80-bit support for, when it is so restricted on modern hardware. Perhaps to be compatible with legacy C software compiled for 32-bit x86 systems, which actually uses the 80-bit FPU instructions internally? I once wanted 80-bit support myself, for the in-circle test needed by a Delaunay triangulation algorithm, which has high accuracy requirements. But in the end 80 bits were not really needed.

Yes, I also assume the RFC was created to allow compatibility with C's long double on x86 targets.

Rust has no tier 1 targets left with a legacy x87 FPU. SSE2 is the minimum now, and it does not support any scalar type wider than 64 bits. x86-64 CPUs certainly still support the whole x87 shebang for backwards compatibility, including fp80, but I'd bet it's a slow microcode-level emulation.

My personal use case is implementing FPU emulation in mwemu, with no inline asm or unsafe blocks, but the results will probably lose precision compared with gdb's.

The emulated code consists of shellcodes, packers, etc., which actually use the FPU; they aren't built with normal compilers and even implement parts in pure asm.

You need it to compute pow without loss of precision. The canonical way to do that is via e^(y·ln(x)), but if one does want full 64 bits of precision in the output, a representation with a 64-bit mantissa is needed. And that representation is fp80.

It wasn't picked up to be precisely fp80 just on a whim…

Nope. They don't get as much love as the SSE/AVX pipelines, but even Zen 4 doesn't dare to make FMUL slower than two instructions per CPU clock. And Ice Lake still runs it at one instruction per CPU clock.

Compare that to PEXT/PDEP, which may take literally hundreds of cycles, because they are microcoded on some CPUs.

Hello,

I implemented f80 emulation in Rust, with some conversion methods.

It passes these tests:

Regards.