Will Rust's trigonometry be stabilized as `const fn` in the future, or is it going to be stuck in a non-`const fn` state because it might be platform dependent? Are there any crates for doing trigonometry in constants or statics? Is it worth using them instead of a `lazy_static`?
The trig-related functions are not `const` because they deal with floating point numbers (`f32`, `f64`), which can't be used in `const fn`.
So you'll have to wait for `const_fn_floating_point_arithmetic` at the very least.
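For context, float arithmetic already works on stable in `const` *items*; it's `const fn` bodies that are gated behind that feature, and the transcendental functions like `sin` aren't `const` at all. A small sketch of the distinction:

```rust
// Float arithmetic in a `const` item evaluates fine on stable:
const HALF_PI: f64 = std::f64::consts::PI / 2.0;

// But `sin` is not a `const fn`, so this would not compile:
// const ONE: f64 = HALF_PI.sin();

fn main() {
    // `sin` has to run at runtime instead.
    assert!((HALF_PI.sin() - 1.0).abs() < 1e-12);
}
```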
Oh, interesting! Thanks for sharing! Guess I'll move to using `lazy_static` after all!
There should probably be a `lazy_static` bot on this forum, but you should check out `once_cell`.
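Since `once_cell` comes up: its pattern has since been adopted into the standard library as `std::sync::OnceLock` (and later `LazyLock`), so a lazily computed trig table needs no external crate at all. A sketch, with the table size and contents chosen arbitrarily for illustration:

```rust
use std::sync::OnceLock;

// Lazily initialized sine table, computed on first access.
// (`once_cell::sync::Lazy` offers the same pattern; `OnceLock`
// is the std adoption of it.)
static SIN_TABLE: OnceLock<[f64; 360]> = OnceLock::new();

fn sin_table() -> &'static [f64; 360] {
    SIN_TABLE.get_or_init(|| {
        let mut table = [0.0f64; 360];
        for (deg, entry) in table.iter_mut().enumerate() {
            *entry = (deg as f64).to_radians().sin();
        }
        table
    })
}

fn main() {
    assert_eq!(sin_table()[0], 0.0);
    assert!((sin_table()[90] - 1.0).abs() < 1e-12);
}
```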
The awkward thing about trigonometry is that (as far as I've heard, at least) it's not yet known whether it's feasible to implement it with tolerable performance at the ½ ULP accuracy that's needed in order to get consistent results. It's doable for things like `+ - * /`, which is why those are portable and might plausibly be `const`able one day (if we can figure out how to deal with NaN), and maybe even could happen for relatively simple, well-behaved monotonic functions like `pow2(x)`.
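To illustrate why `+ - * /` are portable: IEEE 754 requires them to be correctly rounded, so every conforming platform produces bit-identical results, quirks and all. A tiny demonstration:

```rust
fn main() {
    // Correct rounding makes these results bit-identical everywhere,
    // including the famous decimal artifacts:
    assert_eq!(0.1 + 0.2, 0.30000000000000004);

    // It also pins down the rounding of each individual operation,
    // so even the non-associativity of addition is reproducible:
    assert!((0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3));
}
```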
But something like sin(10³⁰⁰) is just fundamentally hard to get right. Just think how many decimal places of accuracy you need on π for the range reduction...
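To see why range reduction is the hard part: reducing a huge argument modulo 2π in plain `f64` amplifies the rounding error of the stored 2π by the size of the quotient, so the reduced angle is pure noise. A sketch (the magnitudes in the comments are approximate):

```rust
use std::f64::consts::PI;

fn main() {
    let x = 1e300_f64;
    // Naive reduction: `%` itself is computed exactly, but TWO_PI is
    // only the nearest f64 to 2π, off by roughly 1e-16 relative. The
    // quotient x / 2π is about 1.6e299, so that tiny error in the
    // modulus is amplified to astronomically more than a full turn:
    // `reduced` shares no correct digits with the true reduced angle.
    let two_pi = 2.0 * PI;
    let reduced = x % two_pi;

    // The result is a valid angle in [0, 2π) -- just the wrong one.
    assert!(reduced >= 0.0 && reduced < two_pi);
}
```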
Only about 1000 bits of 1/π. You don't even need bignums for this, because you only need the product x * (1/π) modulo 2, so the higher-order bits of 1/π can be ignored when multiplying.
Still, that won't give you ½ ULP for sine and cosine (that's hard), but you can get something a bit larger, like 1 ULP, with some extra care for special cases that are close to a multiple of π.
It could in theory be consistent in const context if you prescribe the exact 1 ULP algorithm to be used there, even though it's not ½ ULP.
Absolutely, but I think we also want it to be consistent with what happens at runtime, and (understandably, IMHO) Rust seems to not want to have to make and ship its own math library for all platforms.
Getting ½ ULP in general is really hard. For square root and division, you need 2n+1 bits to round correctly to n bits. That's not true for the transcendental functions; I believe there's no limit to how many bits you need. (I haven't read Kahan's papers on the subject in a looong time.)
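That 2n+1-bit property for square root is what makes the classic "compute in double, round to single" trick legitimate: `f64` has 53 significand bits, comfortably more than 2×24+1, and IEEE 754 requires `sqrt` itself to be correctly rounded, so the second rounding can't go wrong. A sketch:

```rust
// Correctly rounded f32 square root obtained via f64: since sqrt
// needs only ~2n+1 bits to round correctly to n bits, and
// 53 > 2 * 24 + 1, rounding the f64 result down to f32 never
// double-rounds incorrectly. No analogous bound is known for the
// transcendental functions.
fn sqrt_f32_via_f64(x: f32) -> f32 {
    (x as f64).sqrt() as f32
}

fn main() {
    for x in [2.0f32, 0.1, 12345.678, 3.4e38] {
        // Hardware f32 sqrt is correctly rounded too, so both
        // routes must agree bit for bit.
        assert_eq!(sqrt_f32_via_f64(x).to_bits(), x.sqrt().to_bits());
    }
}
```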
Is it practical to stay consistent with runtime when we already have different results on different platforms? If we get a more accurate result with const computation than during runtime and the result is the same on every platform, personally I'd be happy about it.
There is certainly a limit, because the number of possible inputs is finite. Moreover, the cases that are right on the threshold get exponentially rarer with increased precision, so the worst-case precision required is not going to be gigantic. Not sure how hard it is to calculate what the worst case is, but there is a bound.
So, if you're willing to go with slow bignum computations in special cases that require extra precision, exact rounding (½ ULP) is possible to implement: just increase precision if you're still not sure which way to round. But it requires a special, somewhat complicated math library for that.
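That "increase precision until the rounding is unambiguous" approach is known as Ziv's strategy. A minimal sketch of the rounding test, using `f64` as the "extended precision" for an `f32` target; the error bound here is an illustrative assumption, and a real implementation would escalate to bignums when the test fails:

```rust
/// Try to produce a correctly rounded f32 sine by evaluating in f64.
/// Returns None when the error interval straddles an f32 rounding
/// boundary, i.e. when more precision would be needed to decide.
fn sin_f32_ziv_step(x: f32) -> Option<f32> {
    let y = (x as f64).sin();
    // Assumed bound on the f64 evaluation error (illustrative only).
    let err = y.abs() * 1e-15 + 1e-300;
    let lo = (y - err) as f32;
    let hi = (y + err) as f32;
    // If both ends of the interval round to the same f32, that
    // rounding is provably correct; otherwise, escalate precision.
    if lo == hi { Some(lo) } else { None }
}

fn main() {
    let r = sin_f32_ziv_step(1.0).expect("interval was ambiguous");
    assert!((r as f64 - 0.8414709848).abs() < 1e-7);
}
```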
Actually I'm starting to work on a project that will do exactly that: calculate real number functions to arbitrary precision exactly.
If I recall correctly, you can always get 1 ULP precision on the transcendental functions with a reasonable amount of extra computation.
I think the problem is that you'd get hard-to-track-down bugs that are optimization dependent. I'd hate for turning off the optimizer to make my bugs vanish, as it would make debugging way more of a pain.
That said, I long for more const functions and floating point functions in particular.