Possible to make powf const?

it would be quite bad if different machines at compile-time calculated them differently,

The thing is: as we have seen, this is already reality, because the compiler (or at least rustc) already treats powf as if it were const.

And even if it weren't: having architecture-dependent results at run time is IMHO much worse than having them depend on compile time. Think about network communication…
Still, fast math is much more important. So we accepted that.

It sounds counterintuitive to me because my mental model of const fn is that executing a function that could be const should behave the same as the compile-time execution of the same function marked const, just evaluated at compile time.

Ooh. I see your problem now. There's nothing wrong with what you have said, ironically enough. So I wouldn't say that your mental model is wrong; it's more that it's incomplete.

You are missing one extremely subtle fact that changes it radically. Consider an example that we have already discussed:

    println!("const     {:#x}", const { c(0.0) });
    println!("non-const {:#x}", { c(0.0) });

On x86 these produce different results; on ARM they produce the exact same results. But that's only if we disable optimizations!

If we enable them… oops, the difference disappears. Now we have the same thing both with “forced const” and without “forced const”… your mental model works, ironically enough.
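For readers landing mid-thread: `c` was defined earlier in the discussion and isn't quoted here, so the following is a hypothetical, self-contained stand-in (the name `c` and the exact computation are assumptions, not the original) that makes the comparison runnable on a recent toolchain (const float arithmetic and const `to_bits` need Rust 1.83+):

```rust
// Hypothetical stand-in for the `c` discussed earlier in the thread.
// It returns the raw IEEE-754 bit pattern of a float computation so
// that any compile-time/runtime difference shows up exactly.
const fn c(x: f64) -> u64 {
    ((x + 0.1) * 0.3).to_bits()
}

fn main() {
    // With optimizations disabled, these can differ on targets with
    // excess-precision arithmetic (e.g. 32-bit x86 using the x87 FPU);
    // on x86-64 and AArch64 they normally print identical bits.
    println!("const     {:#x}", const { c(0.0) });
    println!("non-const {:#x}", { c(0.0) });
}
```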

It falls apart in a place that you have not even voiced and thus, apparently, haven't even thought about: would something evaluated by the compiler at compile time and something evaluated by the program at runtime give the same answer?

That's the part of your “mental model” that you haven't voiced, because it seems so obviously true that you never even mentioned it.

But… oops. That's precisely that assumption that's wrong!

IOW: it's not the introduction of const fn that may change the result; it's the introduction of anything that can be calculated at compile time.

There are, in effect, different rules for compile-time calculation results and runtime calculation results. The results of compile-time calculations are fixed, and the results of runtime calculations are… also fixed, actually… but different on different hardware.

And that difference is the key. It's external to Rust; that's part of the problem: we couldn't simply go and demand that half of the world (whether the ARM half or the x86 half) throw away their hardware to work with Rust. And now we have a problem with no happy resolution:

  1. We may declare that const evaluation is fully defined and identical for all platforms — but then we would have differences between const and non-const evaluation results on some platforms.
  2. We may say that const evaluation should match what the target hardware does — but this would not only make it much harder to write a compiler[1]; more importantly, the meaning of your program would depend on the target… because the results of const evaluations can be used with generics, a program that is textually identical and has zero cfg items may call one function on ARM and a radically different function on x86.
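To make point 2 concrete, here is a hedged sketch (all names here are invented for illustration) of how a const result can steer generic dispatch. Under Rust's actual rule #1 `SELECT` is identical everywhere; under hypothetical rule #2 it could in principle come out differently per target, silently choosing a different impl with zero cfg attributes:

```rust
// SELECT is computed at compile time from float arithmetic. Under
// rule #1 (Rust's actual choice) it is identical on every target;
// under hypothetical rule #2 it could vary with the hardware.
const SELECT: usize = if 0.1 + 0.2 == 0.3 { 0 } else { 1 };

struct Chooser<const N: usize>;

trait Act {
    fn act() -> &'static str;
}

impl Act for Chooser<0> {
    fn act() -> &'static str { "path A" }
}

impl Act for Chooser<1> {
    fn act() -> &'static str { "path B" }
}

fn main() {
    // 0.1 + 0.2 != 0.3 in f64, so this picks Chooser::<1>.
    println!("{}", Chooser::<SELECT>::act());
}
```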

Rust picked choice #1 there, but that created a dilemma: how can it guarantee that, now? If the compiler simply called a library-provided version of some function directly… that would violate the “const evaluation doesn't depend on the platform” rule. It needs special versions of all const functions that are stable and independent of the platform.
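As an illustration only (not how the standard library actually implements its const versions), one way to get a platform-independent const result is to do the work as integer arithmetic on the IEEE-754 bit pattern, sidestepping the host FPU entirely. This toy `const_abs` is a sketch under that assumption and needs Rust 1.83+ for const `to_bits`/`from_bits`:

```rust
// Toy sketch: absolute value computed purely by bit manipulation,
// so every host compiling for every target produces identical bits.
const fn const_abs(x: f64) -> f64 {
    // Clear the sign bit (bit 63) of the IEEE-754 representation.
    f64::from_bits(x.to_bits() & !(1u64 << 63))
}

fn main() {
    // Evaluated at compile time, identically on every host/target pair.
    const A: f64 = const_abs(-2.5);
    println!("{A}");
}
```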


  1. One would have to know “what the hardware is doing”, and that's not always specified… there are even examples where AMD hardware does one thing and Intel/VIA hardware does something else. ↩︎

0x1.0p53 is easier to read and more clearly expresses intent than 9007199254740992.

I do know that the compiler usually tries to calculate as much at compile time as possible, with const (fn) just being a kind of guarantee for that. My mental model was wrong in that I did not realize that the compiler, to a certain degree, “ignores” that the compile-time value can differ from the same value calculated at runtime, because compiler optimizations should not make a difference in terms of program output/behavior (broadly speaking). Thanks for your long answer, now that's a bit clearer.