Then I make a vector of type Vec<Point>. What I now want to do is add functionality so I can find minimum values in this vector: the minimum of the first coordinate, the second coordinate, and the third coordinate. Usually one would be able to do something like vec.iter().min(), but this won't work in my case since it cannot deduce how to compare the points - and I can't figure out how to let it know how to.

Googled quite a bit and found someone online mentioning this possibility:

let minx = pvec.iter().fold(0.0f32, |min, &val| if val.0 < min{ val.0 } else{ min });
let maxx = pvec.iter().fold(0.0f32, |max, &val| if val.0 > max{ val.0 } else{ max });
let minz = pvec.iter().fold(0.0f32, |min, &val| if val.2 < min{ val.2 } else{ min });
let maxz = pvec.iter().fold(0.0f32, |max, &val| if val.2 > max{ val.2 } else{ max });

Which works fine for my purposes - just wanted to share if someone else was struggling.

Primitive types have a min function (as well as a max); see the f32 docs.

So you could rewrite those closures as:

let minx = pvec.iter().fold(0.0f32, |min_val, &val| val.0.min(min_val));

Edit: Two points -- for the fold of your min value, you want to initialize it with a value LARGER than your largest potential value. Unless you expect all the values to be negative, as written the result will always be 0.0.
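
As a sketch of that fix (assuming the same three-field Point tuple struct as in the thread), f32::INFINITY and f32::NEG_INFINITY are the natural initial values, since any real coordinate will replace them:

```rust
// Hypothetical Point matching the thread's use of .0/.2.
#[derive(Debug, Copy, Clone)]
struct Point(f32, f32, f32);

fn min_x(pvec: &[Point]) -> f32 {
    // f32::INFINITY is the identity for min: every real value is
    // smaller, so this is correct even when all coordinates are positive.
    pvec.iter().fold(f32::INFINITY, |min, &val| if val.0 < min { val.0 } else { min })
}

fn max_x(pvec: &[Point]) -> f32 {
    // Symmetrically, f32::NEG_INFINITY is the identity for max.
    pvec.iter().fold(f32::NEG_INFINITY, |max, &val| if val.0 > max { val.0 } else { max })
}
```

On an empty slice these return the infinities themselves, which also makes the "no data" case explicit instead of silently reporting 0.0.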

Also -- f32::min takes (f32, f32). If it took (f32, &f32) (and you were folding over plain floats rather than Points), you could just do:

let minx = pvec.iter().fold(99.0f32, f32::min);

I'll have to ponder this... I wonder whether there are any standard adapters that can do that conversion for you.

let minx = pvec.iter().map(|p| p.0).min().unwrap_or(0.0);

Edit: oops, that doesn't work with floats, since they're not Ord, but a custom min_by could work.
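
A sketch of that min_by variant, assuming the same Point tuple struct and NaN-free data (partial_cmp returns None for NaN, so the unwrap would panic on it):

```rust
#[derive(Debug, Copy, Clone)]
struct Point(f32, f32, f32);

fn min_x(pvec: &[Point]) -> Option<f32> {
    pvec.iter()
        .map(|p| p.0)
        // min_by accepts a comparator, so floats work even without Ord;
        // the unwrap is only safe if the data is known to be NaN-free.
        .min_by(|a, b| a.partial_cmp(b).unwrap())
}
```

Returning Option also handles the empty-vector case without picking an arbitrary sentinel value.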

If your vector is larger than the CPU cache, it may be a performance benefit to compute all 4 (min|max)(x|z) in a single pass, by folding into a larger aggregate of them all.
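
A sketch of that single-pass fold, using a hypothetical Bounds aggregate (and assuming the Point tuple struct from earlier in the thread):

```rust
#[derive(Debug, Copy, Clone)]
struct Point(f32, f32, f32);

// Hypothetical aggregate holding all four extremes at once.
#[derive(Debug, Copy, Clone)]
struct Bounds {
    min_x: f32,
    max_x: f32,
    min_z: f32,
    max_z: f32,
}

fn bounds(pvec: &[Point]) -> Bounds {
    // One pass over the data updates all four extremes together,
    // so the vector is only pulled through the cache once.
    pvec.iter().fold(
        Bounds {
            min_x: f32::INFINITY,
            max_x: f32::NEG_INFINITY,
            min_z: f32::INFINITY,
            max_z: f32::NEG_INFINITY,
        },
        |b, p| Bounds {
            min_x: b.min_x.min(p.0),
            max_x: b.max_x.max(p.0),
            min_z: b.min_z.min(p.2),
            max_z: b.max_z.max(p.2),
        },
    )
}
```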

Hi, if you want to find a minimum Point in a Vec<Point> you should compare Points by their norm or some other metric, and for that you should impl std::cmp::PartialEq and std::cmp::PartialOrd. Something like this:

use std::cmp::{PartialEq, Ordering};
use std::f32;

#[derive(Debug, Copy, Clone)]
pub struct Point(f32, f32, f32);

impl Point {
    fn norm2(&self) -> f32 {
        f32::sqrt(self.0 * self.0 + self.1 * self.1 + self.2 * self.2)
    }
}

pub fn compare_floats(num1: f32, num2: f32) -> bool {
    f32::abs(num1 - num2) < f32::EPSILON
}

impl PartialEq for Point {
    fn eq(&self, other: &Self) -> bool {
        compare_floats(self.0, other.0)
            && compare_floats(self.1, other.1)
            && compare_floats(self.2, other.2)
    }
}

impl PartialOrd for Point {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        if self == other {
            Some(Ordering::Equal)
        } else if self.norm2() > other.norm2() {
            Some(Ordering::Greater)
        } else if self.norm2() < other.norm2() {
            Some(Ordering::Less)
        } else {
            // Unequal points with equal norms are incomparable.
            None
        }
    }
}

#[derive(Copy, Clone, Debug)]
struct MaxMin<T> {
    max: T,
    min: T,
}

fn find_max_min<T: std::cmp::PartialOrd + Copy>(slice: &[T]) -> MaxMin<T> {
    let mut max = &slice[0];
    let mut min = &slice[0];
    for index in 0..slice.len() {
        if slice[index] < *min {
            min = &slice[index];
        }
        if slice[index] > *max {
            max = &slice[index];
        }
    }
    MaxMin {
        max: *max,
        min: *min,
    }
}

fn main() {
    let p1 = Point(3.0, 37.1, 0.0);
    let p2 = Point(3.0, 37.0, 0.0);
    let p3 = Point(3.0, 12.0, 123.0);
    let mut v: Vec<Point> = Vec::new();
    v.push(p1);
    v.push(p2);
    v.push(p3);
    let e = find_max_min(&v);
    println!("max/min: {:?}", e);
}

Have you looked at the PartialOrd and Ord traits? This sounds almost exactly like what you want, plus when #[derive(PartialOrd)] generates a PartialOrd impl for you it'll compare in the order that fields are defined (i.e. compare the first f32, if they are equal then compare the second, and so on).
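
For example, deriving the traits on the thread's tuple struct gives that lexicographic comparison for free (a sketch, assuming field-by-field ordering is actually what's wanted here, rather than per-coordinate minima):

```rust
// Derived PartialOrd compares fields in declaration order:
// .0 first, then .1 on a tie, then .2.
#[derive(Debug, Copy, Clone, PartialEq, PartialOrd)]
struct Point(f32, f32, f32);
```

So Point(1.0, 9.0, 9.0) < Point(2.0, 0.0, 0.0) holds, because the comparison is decided by the first fields alone.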

Your problem is that floats don't implement Ord because NaN != NaN, and the sort() method requires all items to be comparable with each other (a total order), otherwise you'd encounter strange results.

If f32 did implement Ord, you'd be able to use things like some_vec.sort() or some_vec.iter().min() to get the "smallest" value. However, you can create a wrapper around a f32 which implements all the right traits and bails when it encounters something nonsensical like NaN. That's effectively what the noisy_float crate does.
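
A minimal sketch of such a wrapper (the Finite name is made up here; noisy_float's actual types are more complete) that bails at construction time, so Ord is safe afterwards:

```rust
use std::cmp::Ordering;

// Newtype that rejects NaN up front, so a total order exists.
#[derive(Debug, Copy, Clone, PartialEq)]
struct Finite(f32);

impl Finite {
    fn new(v: f32) -> Finite {
        // Bail early: every surviving Finite is comparable to every other.
        assert!(!v.is_nan(), "NaN is not allowed");
        Finite(v)
    }
}

impl Eq for Finite {}

impl PartialOrd for Finite {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Finite {
    fn cmp(&self, other: &Self) -> Ordering {
        // unwrap is safe: NaN was rejected in Finite::new.
        self.0.partial_cmp(&other.0).unwrap()
    }
}
```

With that, a Vec<Finite> supports plain .iter().min(), .sort(), and so on.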

It really depends. You probably won't notice any real difference unless you're working with 10,000 or 100,000+ elements in a tight loop.

Of course, it's always a good idea to run benchmarks and measure the impact on your particular codebase instead of listening to some random guy on the internet.

Unfortunately that's not really possible once you get past operations with hard-coded values.

The moment you introduce a divide (or %, round(), log(), or asin()) without the appropriate bounds checks on some value provided by the user, it's not possible for the optimiser to prove you'll never get a NaN.

The panic path is usually marked as #[cold] so it doesn't bloat the comparison function or prevent it from being inlined, but it still tends to hinder things like autovectorisation.

I can't really tell you. When doing this sort of thing it's nice to have rules-of-thumb on what may be cheap or expensive, but the only way to be sure is to do an experiment (i.e. profile + benchmark).