How do I implement the JS 'Math.hypot()' function in Rust?

I saw this:

Function js_sys::Math::hypot
pub fn hypot(x: f64, y: f64) -> f64

But how do I use this? I can't write use js_sys::Math::hypot to bring it in, and copy-pasting the signature doesn't work either.

Alternatively, is there a Rust function like this already available?

https://docs.rs/js-sys/0.3.44/js_sys/Math/fn.hypot.html
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/hypot

"The Math.hypot() function returns the square root of the sum of squares of its arguments."

so you need something like

(x * x + y * y).sqrt()

js_sys is a bindings crate; it doesn't appear to supply implementations.

Is there a reason the hypot method on the floating point primitives won't work for you?
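For reference, that's the inherent method on the float primitives, so no extra crate is needed:

```rust
fn main() {
    let x: f64 = 3.0;
    let y: f64 = 4.0;
    // f64::hypot computes sqrt(x*x + y*y) while avoiding
    // intermediate overflow/underflow:
    println!("{}", x.hypot(y)); // prints "5"
}
```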


It could be implemented like

pub trait Hypot<T> {
    fn hypot(self, b: T) -> f64;
}

impl<T> Hypot<T> for T
where T: core::ops::Mul<T, Output = T> + core::ops::Add<T, Output = T> + core::convert::Into<f64> + Copy
{
    fn hypot(self, b: T) -> f64 {
        ((self * self + b * b).into()).sqrt()
    }
}

or easier

pub fn hypot<T>(a: T, b: T) -> f64
where T: core::ops::Mul<T, Output = T> + core::ops::Add<T, Output = T> + core::convert::Into<f64> + Copy
{
    ((a * a + b * b).into()).sqrt()
}

https://doc.rust-lang.org/std/primitive.f64.html#method.hypot is, as mentioned, what you want.

But for extra context, it's not quite as easy as (x * x + y * y).sqrt(), because of potential overflow cases.

[src/main.rs:4] x.hypot(y) = 3.6055512754639894e200
[src/main.rs:5] (x * x + y * y).sqrt() = inf
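A minimal way to reproduce output like the above (the inputs here, x = 2e200 and y = 3e200, are my reconstruction):

```rust
fn main() {
    let x: f64 = 2e200;
    let y: f64 = 3e200;
    // f64::hypot rescales internally, so the result stays finite:
    dbg!(x.hypot(y));
    // The naive formula squares first: x * x overflows to infinity,
    // and sqrt(inf) is still inf:
    dbg!((x * x + y * y).sqrt());
}
```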

For more, see


The geometric mean is something else entirely:

fn geometric_mean(x: f32, y: f32) -> f32 {
    (x * y).sqrt()
}

It's useful when you want to "average" things that have different units, so addition is undefined.

Classic example is if you want to average the runtimes of different benchmarks to get a measure of performance (e.g. of CPUs), scaled relative to some reference to give a dimensionless number. If you use an ordinary mean, the choice of reference can change which is fastest, while the geometric mean gives the same relative ordering regardless of which reference is chosen.
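As a sketch of that invariance (the runtimes are made up; the point is that switching the reference only inverts the ratios, so the geometric means are exact reciprocals and the relative ordering can't flip):

```rust
// Geometric mean of a slice of positive ratios, computed in log space.
fn geometric_mean(xs: &[f64]) -> f64 {
    let log_sum: f64 = xs.iter().map(|x| x.ln()).sum();
    (log_sum / xs.len() as f64).exp()
}

fn main() {
    // made-up benchmark runtimes on two machines
    let a = [2.0, 8.0, 5.0];
    let b = [4.0, 2.0, 6.0];
    // per-benchmark speedup of B, with A as the reference:
    let b_over_a: Vec<f64> = a.iter().zip(&b).map(|(x, y)| x / y).collect();
    // with B as the reference the ratios simply invert:
    let a_over_b: Vec<f64> = a.iter().zip(&b).map(|(x, y)| y / x).collect();
    // so the two geometric means multiply to 1.0:
    println!("{}", geometric_mean(&b_over_a) * geometric_mean(&a_over_b));
}
```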


Oh yeah, thank you. I forgot some formulas :sweat_smile:

Annoyingly, I wanted to dig up what hypot actually does, e.g. whether it's a CPU operation implementing IEEE 754 directly, like some methods are, but it just calls into libc, which for me pretty much immediately hits the wall of vcruntime. Gross.

But there is a nice implementation in boost math that boils down to:

fn hypot(x: f64, y: f64) -> f64 {
    // ... some cleanup (ensure |x| >= |y|, handle zeros/infinities), then:
    let rat = y / x;
    x * (1.0 + rat * rat).sqrt()
}

Looks to me like game uses, for example, should still do the dumb thing; it's likely to be a bit faster and you're not going to hit the overflow cases.


The article @scottmcm linked to describes the exact same implementation.


Another way to do this would be to subtract k from both exponents to bring the larger one close to 0, then compute sqrt(a*a + b*b), and finally add k to the exponent of the result.

The most expensive part is computing sqrt, which the naive algorithm doesn't avoid, so this doesn't seem like a worthwhile optimization given that it sacrifices half of the range of the exponent.
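A sketch of that exponent-rescaling idea (my own illustration; scaling by an exact power of two, so the scaling steps themselves introduce no rounding):

```rust
// Rescale so the larger magnitude sits near 1.0, square-and-sqrt safely,
// then undo the scaling. Powers of two multiply exactly in binary floats.
// (Subnormal inputs would need extra care; this is only a sketch.)
fn hypot_scaled(a: f64, b: f64) -> f64 {
    let m = a.abs().max(b.abs());
    if m == 0.0 || !m.is_finite() {
        return m;
    }
    let k = m.log2().floor() as i32; // exponent of the larger input
    let scale = 2.0_f64.powi(-k);
    let (a, b) = (a * scale, b * scale);
    (a * a + b * b).sqrt() * 2.0_f64.powi(k)
}
```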


Well, I mean more that this is a function for increased correctness; it's not a "go fast" button you should use in your vector math because "it's written by smart people".

I would assume the expensive part would actually be branch misprediction in the cleanup (assuming the compiler can't do some magic), and the sqrt can both be skipped in many cases (by comparing the square of the hypotenuse) and has CPU-native implementations that seem quite performant, depending on your precision needs.

It's not clear to me that the halved exponent matters in most cases? Graphics generally cares more about precision than range, and logic doesn't really care about either. You have to be doing something pretty odd to need more than 10¹⁹ units of distance with precision of 1/10⁶ units. But at least something to keep in mind if you're doing an open world.
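For the "skip the sqrt" case mentioned above, a typical range check looks like this (the function name is mine):

```rust
// Compare squared lengths instead of lengths: squaring is monotonic for
// non-negative values, so the comparison result is identical and no sqrt
// is needed.
fn within_range(dx: f64, dy: f64, range: f64) -> bool {
    dx * dx + dy * dy <= range * range
}
```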


Coincidentally this is for a game. So you're saying the written out equation would be better?

"better" only makes sense in the context of a purpose: I doubt you'd notice a difference in the vast majority of cases for performance. If you see this pop up in a profile, give the dumb version a try, sure, but really you should be looking at dedicated vector math libraries that have already micro-optimized SIMD code.

Otherwise it's mostly stylistic preference and awareness of what the technical differences are, in my opinion.


How would a library optimize code like this, then? I can't imagine how that would work when it's such a short formula. (I don't know what's possible.) Or do you mean in general, for other functions you could use as well?

I need to continuously calculate the hypotenuse, and possibly other stuff, for a lot of objects every few ms. So I might need libraries like that. Or at least, it'd be nice to look at what they have to offer.

Maybe I should look into SIMD code, since I don't know what that is.

SIMD stands for "Single-Instruction Multiple Dispatch," and is the generic term for CPU instructions that let you do the same operation on several different memory locations at the same time. This is more efficient in terms of silicon/microcode because the instruction decoding only has to happen once.

As math-heavy code tends to perform the same calculations on lots of data points, this can provide a significant speedup in common cases.


Know any libraries? I'm trying to avoid any as much as I can. But helps to look at code as well.

Rust has support for SIMD in the standard library: the std::arch intrinsics are stable, and the portable std::simd API is available on nightly.


That works x)

You don't necessarily need to write explicit SIMD code for simple cases. The optimizer is capable of recognizing that repeated arithmetic (such as adding the four components of two vectors to each other) can be processed using SIMD instructions.

Consult the generated assembly for straightforward code before complicating it.
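As an illustration (my own example), plain component-wise code like this is the shape the auto-vectorizer picks up; with optimizations on, the four adds typically compile to a single SIMD add:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Vec4 {
    x: f32,
    y: f32,
    z: f32,
    w: f32,
}

// Straightforward component-wise addition; no explicit SIMD needed.
fn add(a: Vec4, b: Vec4) -> Vec4 {
    Vec4 {
        x: a.x + b.x,
        y: a.y + b.y,
        z: a.z + b.z,
        w: a.w + b.w,
    }
}

fn main() {
    let a = Vec4 { x: 1.0, y: 2.0, z: 3.0, w: 4.0 };
    let b = Vec4 { x: 5.0, y: 6.0, z: 7.0, w: 8.0 };
    println!("{:?}", add(a, b));
}
```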


This is usually expanded as Multiple Data, which I think is more intuitive about what it's doing.
