Possible to make powf const?

Is there a proper way to do this?

//const POWER: f64 = 1.321929_f64;
const POWER250: f64 = 3.35775061812362868504_f64; //  2.5^1.321929

I don't like magic values in my code. But I also don't like calling a function every time to get a fixed value.

Have you checked the assembly produced by the compiler, after enabling optimizations? Inlining and constant folding are among the most basic optimizations compilers can do. Most likely the compiler will already turn the call to powf into a constant.

EDIT: If you take a look at this godbolt example, you will see that even the first optimization level completely gets rid of the calls to f64::powf and just moves the precomputed constant into a register.
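For illustration, a minimal sketch of the kind of code that gets folded (the function name is made up; the value and exponent are from the original question):

```rust
// At -O1 and above, the optimizer typically folds this call into a
// single precomputed constant; no call to pow remains in the output.
fn power250() -> f64 {
    2.5f64.powf(1.321929)
}

fn main() {
    // Prints the (folded) value at run time.
    println!("{}", power250());
}
```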

So as usual: do the simple thing, trust the compiler, measure, and only then try to optimize.

1 Like

So you want me to reimplement powf?!

I think one of the reasons for this is that powf is architecture-dependent

1 Like

With the recent const functions for floats, Rust seems to consider it fine if the result at compile time is a little different than at run time.

Also f64::sqrt has guaranteed behavior but it isn't const.

There are certain places where you need const expressions, e.g. the length of an array or const generics, or static initializers. It seems kind of niche to need a const f64 in these settings.

You could pre-calculate the magic value with build.rs using the regular powf
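A sketch of that approach (file names and the constant's name are illustrative): a build.rs computes the value with the regular powf and writes a Rust source file into OUT_DIR, which the crate then pulls in with include!.

```rust
// build.rs (sketch): precompute the constant at build time and emit it
// as Rust source.
use std::{env, fs, path::PathBuf};

/// Render the generated source. `{:?}` on f64 prints the shortest
/// decimal representation that round-trips to the same bits.
fn generated_source() -> String {
    let value = 2.5f64.powf(1.321929); // regular powf, run at build time
    format!("pub const POWER250: f64 = {value:?};\n")
}

fn main() {
    // Cargo sets OUT_DIR for build scripts; fall back to the temp dir
    // so this sketch also runs standalone.
    let out_dir = env::var("OUT_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|_| env::temp_dir());
    fs::write(out_dir.join("constants.rs"), generated_source()).unwrap();
}

// The crate then includes the generated file, e.g.:
// include!(concat!(env!("OUT_DIR"), "/constants.rs"));
```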

That sounds counterintuitive to me; can you link a document/issue/etc.?

A few links related to that subject:

With more time I may be able to find a more precise quote if you need it.

1 Like

First and foremost: It is not so much about runtime. Given the amount of string handling I do in my code, I doubt that a few additional double operations will make a difference in runtime. It is about readability: a constant should be a constant, not a function.

Second: Even though the compiler seems to know that powf(const, const) is constant for optimization purposes, it still won't let you use it in a const/static item because it isn't explicitly const. I'm new to Rust and have similar problems with mutability and structs/collections. In the end more or less everything is mutable (like in most other languages), defeating the safety feature of having immutability by default.

Third: OK, I'm not an assembly expert at all, but I played around with the compiler and made a few "disturbing" discoveries:

  • inline(never) still results in inlining even at -O 1. It's documented, so it is not a bug…
  • Even a Rust version of /bin/true is operating-system sized, even with -C strip=debuginfo.
  • While the named optimization is always there, the output of --emit asm and --crate-type bin seems to differ slightly...
1 Like

There is an earlier post here:

lazy_static seems like a solution.
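For completeness, a sketch of the lazy-initialization approach using std's LazyLock (the standard-library successor to the lazy_static crate, stable since Rust 1.80); the value and exponent are from the original question:

```rust
use std::sync::LazyLock;

// Computed once, on first access, at run time -- so the regular
// (non-const) powf is fine, and the result is usable from a static.
static POWER250: LazyLock<f64> = LazyLock::new(|| 2.5f64.powf(1.321929));

fn main() {
    println!("2.5^1.321929 = {}", *POWER250);
}
```

The trade-off versus a true const is that this cannot be used where a const expression is required (array lengths, const generics), only where a runtime value suffices.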

1 Like

Why is it counterintuitive? Evaluation of const happens in one environment, evaluation of non-const happens in a different one… these are not guaranteed to match.

Usually trouble is caused by NaN, but other differences are possible, too.

Floats are finicky.

1 Like

Thanks for doing the search I wasn't able to do.

The core reason here is that it goes back to the https://en.wikipedia.org/wiki/Rounding#Table-maker's_dilemma. Proving that an implementation of pow is within ±½ULP -- aka perfectly accurate -- for a binary function on f64 is hard. (For a unary function on f32 you can just test every input, but 2¹²⁸ is obviously too big a state space to test exhaustively.)

(log2 and exp2 will probably be const sooner, since being unary and monotonic they are much easier to get correct. So once that happens you could calculate your power using those instead, but of course at the cost of potentially increasing the rounding error.)
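As a sketch of that workaround (neither function is const yet either; this just shows the identity and the extra roundings it introduces):

```rust
/// x^y computed as exp2(y * log2(x)) -- the identity mentioned above.
/// Each intermediate step rounds, so the result can differ from powf
/// by a few ULP.
fn pow_via_exp2(x: f64, y: f64) -> f64 {
    (y * x.log2()).exp2()
}

fn main() {
    let direct = 2.5f64.powf(1.321929);
    let indirect = pow_via_exp2(2.5, 1.321929);
    println!("direct   = {direct}");
    println!("via exp2 = {indirect}");
}
```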

And it would be quite bad if different machines at compile-time calculated them differently, especially for anything feeding into the type system. Which absolutely can happen for certain things, see Math result varies in debug vs. release builds

So until Rust ships its own known-accurate math library -- instead of relying on the platform -- or makes some other concession, it's not practical to make it const. There are lots of ±1ULP math libraries, but that's not enough to guarantee the same answer.


BTW, I'd really just suggest that you paste in the correct answer even if it could be const, because that way you avoid double rounding. Note that std::f64::consts::PI.sin() = 1.2246467991473532e-16 -- not zero! If something's really a constant, you might as well calculate it in something like Wolfram Alpha, which can give more-than-f64 precision, and then round that only once to f64.
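That claim is easy to check (the exact last digits may vary with the platform's libm, so the comment only pins down the magnitude):

```rust
fn main() {
    // sin of the f64 *approximation* of π is not zero: it is roughly
    // the gap between real π and f64 π, on the order of 1.2e-16.
    let s = std::f64::consts::PI.sin();
    println!("sin(PI) = {s:e}");
}
```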

8 Likes

It's a bit worse than that, actually. If we have an arbitrary function, then no finite precision guarantees a correctly rounded answer.

To provide that we need to use properties of the function in question. exp is probably doable, while I'm not sure about something like 1/x or tan… although they are monotonic and we know how they grow, so it may still be possible.

Floats are hard.

If you paste the result in decimal, you're still subject to double rounding in the decimal-to-binary conversion. This calls for hex float literals.

1 Like

While I agree this is hypothetically possible, I've never actually seen it come up so long as you paste in the way-too-many-digits version. The odds that

3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446

happens to have been rounded to a different float bucket than real π are incredibly tiny -- especially for irrational numbers.

Have you ever hit a case where you got the result from something like WA and it was actually right on the edge of a float boundary?

It's unlikely, but a provably correctly rounded number would be preferable to probably correctly rounded.

I think you can get provably correct rounding if you paste something like 60 decimal digits (if the number is not very large or very small), but that's a lot of digits, and it's not obvious how many you need. It would be nicer to paste exactly the 14 hex digits required (for f64), which also works for very large or very small numbers with an exponent.

Hex literals are just more readable for some constants as well.
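Rust has no hex float literal syntax, but f64::from_bits gives the same exactness and is usable in const context on recent Rust. A sketch using the well-known bit pattern of f64 π (0x400921FB54442D18), since the exact bits of the questioner's constant would have to be computed first:

```rust
// The exact bit pattern of the f64 nearest to π. No decimal parsing
// happens, so no double rounding can occur.
const PI_FROM_BITS: f64 = f64::from_bits(0x400921FB54442D18);

fn main() {
    // Bit-for-bit identical to the standard library constant.
    assert_eq!(PI_FROM_BITS, std::f64::consts::PI);
    println!("{PI_FROM_BITS}");
}
```

f64::to_bits on a value printed with {:x} gives you the pattern to paste.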

1 Like

I can do hex to float in my head.

Oh yeah, I can parse hex to float in an instant in my head.

I hope they use both. Hex for the literal and decimal in a comment so it is "readable".