'-mfloat-abi=hard' for target 'thumbv7em-none-eabihf'?

Hello :crab:,

I ported the MiBench::basicmath benchmark to Rust (link to my Rust port), and it takes 2.7x the execution time of the original C program compiled with -mfloat-abi=hard.

  • board
    Both versions were tested on a NucleoF429ZI board.
  • build target
    thumbv7em-none-eabihf
  • The Rust port uses the libm crate to call math functions.

I suspect the performance gap is happening because my Rust program is not using the FPU properly.

Is there a config option equivalent to -mfloat-abi=hard in Rust? Or is this something that needs to be implemented in the compiler?

  • STM32CubeIDE provides three config options for floating-point calculations:
    • -mfloat-abi=soft
    • -mfloat-abi=softfp
    • -mfloat-abi=hard

According to this comment on a GitHub issue, it seems that the FPU is at least enabled by default.

Thank you for checking out this post :sun_with_face:

The -eabihf targets already use the hard-float ABI.
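In other words, the float ABI is baked into the target triple rather than set by a separate flag. A sketch of how that is typically selected in a Cargo project (assuming the standard built-in target names; softfp has no direct equivalent among the default targets):

```toml
# .cargo/config.toml
[build]
# thumbv7em-none-eabi   -> soft-float ABI (FPU not used for argument passing)
# thumbv7em-none-eabihf -> hard-float ABI (float arguments passed in FPU registers)
target = "thumbv7em-none-eabihf"
```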

The Cortex-M4F FPU is single-precision only, so use of f64 will probably end up in software floating point. The same would happen for double in C (I don't think that's allowed to be a 32-bit float?), so you're likely comparing the performance of software floating-point libraries here.
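A minimal illustration of the point (runnable on a host; on a Cortex-M4F only the f32 kernel would compile to VFP instructions, while the f64 one is lowered to software helper calls such as the AEABI `__aeabi_dmul` routines — the exact helpers are an assumption about the toolchain, not something this snippet checks):

```rust
// Sum of squares in f32: on a Cortex-M4F this maps to
// single-precision FPU instructions (vmul.f32 / vadd.f32).
fn sum_sq_f32(xs: &[f32]) -> f32 {
    xs.iter().map(|&x| x * x).sum()
}

// Same kernel in f64: the M4F FPU has no double-precision support,
// so each multiply/add becomes a software floating-point call.
fn sum_sq_f64(xs: &[f64]) -> f64 {
    xs.iter().map(|&x| x * x).sum()
}

fn main() {
    let a32: Vec<f32> = (1..=10).map(|i| i as f32).collect();
    let a64: Vec<f64> = (1..=10).map(|i| i as f64).collect();
    // 1^2 + 2^2 + ... + 10^2 = 385, exactly representable in both types.
    println!("{} {}", sum_sq_f32(&a32), sum_sq_f64(&a64)); // prints "385 385"
}
```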


Thank you so much for your feedback! :sun_with_face:
If I'm already comparing the performance of software floating point libraries,
I guess I should look into the libm crate implementation in more detail.

You were right! I re-compiled the C benchmark with -mfloat-abi=soft, and still got the exact same binary size. The execution times also stayed at the same level.

(I don't think that's allowed to be a 32-bit float?)

On the Arduino platform (at least on AVR-based boards), double is 32 bits.

Thank you for mentioning that!
I did a quick check with my NucleoF429ZI board with the code below

printf("%zu\n\n", sizeof(double)); // Send ITM packet

and at least for my board the output is 8 (bytes).
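The equivalent check in Rust gives the same answer on every target, since Rust defines f64 as IEEE 754 binary64 unconditionally (this is just a host-side illustration):

```rust
fn main() {
    // f64 is IEEE 754 binary64 on all Rust targets,
    // so this prints 8 everywhere, including thumbv7em-none-eabihf.
    println!("{}", core::mem::size_of::<f64>()); // prints "8"
}
```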

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.