Code review for a Rust beginner


#1

Hi there!

I just recently, about three days ago, started learning and trying out Rust!
My programming background is nothing too impressive: basically many years of spare-time ANSI C programming until, two years ago, I went back to school and studied game programming, learning OOP and C++.
During that time I never really saw the great benefits of OOP, especially in the context of game programming where performance is paramount, and while I would use it professionally, I still kept using ANSI C for all my spare-time projects because it made more sense to me.
But since I found Rust I have to say that I’m quite intrigued by the safety and concurrency features, and Rust’s way of implementing OOP-ish behavior also makes a lot more sense to me; I really like traits over inheritance.

So, now to my request: As I said, I’m really new to Rust, and because of my education I mainly develop game related things.
So a lot of the time I tend to prefer the most performant solutions over ones that trade performance for other benefits, and I also tend to write my own basic linear algebra functionality whenever I try out a new language.
I do this because linear algebra is used a lot in game development, and implementing it tends to touch a lot of different aspects of a language. So I started implementing a simple Vec2 type today, and before I get too far into my linear algebra implementation, I thought it would be good to get some pointers from more experienced Rust users about my code so far!

What I’m mostly interested in knowing is whether anything I’m doing so far could be bad for performance, whether something in my code has some big performance cost that I have failed to notice, or just whether anything I’m doing is exceptionally stupid to do in Rust!

Thanks in advance!

use std::ops::{Index, IndexMut, Add, Sub, Mul, AddAssign, SubAssign, MulAssign};

type V2<T> = [T; 2];

#[derive(Debug)]
pub struct Vec2<T>(V2<T>);

impl<T: Copy + Mul<Output = T>> Vec2<T> {
    pub fn new(x: T, y: T) -> Vec2<T> {
        Vec2 {
            0: ([x, y]),
        }
    }

    pub fn length_squared(&self) -> T {
        (self.0[0] * self.0[0]) * (self.0[1] * self.0[1])
    }
}

impl<T> Index<usize> for Vec2<T> {
    type Output = T;

    fn index(&self, i: usize) -> &T {
        &self.0[i]
    }
}

impl<T> IndexMut<usize> for Vec2<T> {
    fn index_mut(&mut self, i: usize) -> &mut T {
        &mut self.0[i]
    }
}

impl<T: Copy + Add<Output = T>> Add for Vec2<T> {
    type Output = Vec2<T>;

    fn add(self, right: Vec2<T>) -> Vec2<T> {
        Vec2 {
            0: ([self[0] + right[0], self[1] + right[1]]),
        }
    }
}

impl<T: Copy + Sub<Output = T>> Sub for Vec2<T> {
    type Output = Vec2<T>;

    fn sub(self, right: Vec2<T>) -> Vec2<T> {
        Vec2 {
            0: ([self[0] - right[0], self[1] - right[1]]),
        }
    }
}

impl<T: Copy + Mul<Output = T>> Mul<T> for Vec2<T> {
    type Output = Vec2<T>;

    fn mul(self, right: T) -> Vec2<T> {
        Vec2 {
            0: ([self[0] * right, self[1] * right]),
        }
    }
}

impl<T: Copy + AddAssign> AddAssign for Vec2<T> {
    fn add_assign(&mut self, right: Vec2<T>) {
        self[0] += right[0];
        self[1] += right[1];
    }
}

impl<T: Copy + SubAssign> SubAssign for Vec2<T> {
    fn sub_assign(&mut self, right: Vec2<T>) {
        self[0] -= right[0];
        self[1] -= right[1];
    }
}

impl<T: Copy + MulAssign> MulAssign<T> for Vec2<T> {
    fn mul_assign(&mut self, right: T) {
        self[0] *= right;
        self[1] *= right;
    }
}

#2

Hi! Welcome!

The code looks good to me. You’ve managed to figure out newtypes, generics and overloading — all good.

From a performance perspective it should be fine. Generics are monomorphised, i.e. a specific version of your code will be created for every concrete type it’s used with, so that everything is known and dispatched statically.

The index function may have a bounds check (since i could be greater than 1), but when the compiler inlines the function, the check may be optimized out.
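For reference, slices and arrays also have a non-panicking get that makes the bounds check explicit at the call site; a small sketch (the component function here is made up for illustration):

```rust
// Sketch: `get` returns None instead of panicking on an out-of-range index.
fn component(v: &[f32; 2], i: usize) -> Option<f32> {
    v.get(i).copied()
}
```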

To test for performance pitfalls:

  • Remember to compile with --release in Cargo! By default you get debug builds which are intentionally unoptimized.
  • You can write microbenchmarks (unstable Rust comes with a nice bencher), and compare different approaches.
  • You can check the asm output in release mode (e.g. on https://play.rust-lang.org/ ); add #[no_mangle] pub extern to a function to see it untouched in the asm output (otherwise it may get completely inlined and disappear!)
  • In Cargo.toml, add [profile.release] debug = true and executables will work with profiling tools for C (e.g. macOS Instruments works out of the box).
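To illustrate the asm-inspection tip, a minimal sketch (the function name is made up; the #[no_mangle] pub extern markers keep the symbol visible in the release output):

```rust
// Hypothetical function for asm inspection: #[no_mangle] prevents symbol
// renaming, and `pub extern "C"` keeps it from disappearing via inlining.
#[no_mangle]
pub extern "C" fn length_squared_f32(x: f32, y: f32) -> f32 {
    x * x + y * y
}
```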

For coding style issues, try clippy.


#3

Your length_squared function is not doing enough addition for my taste :slight_smile:


#4

Minor syntactical suggestion: instead of doing Vec2 { 0: ...} you can write Vec2(...).


#5

Great!
Thanks a lot for all the feedback!

My reason for doing the whole newtype thing was to ensure that my values were stored contiguously on the stack, but after some thinking I figured that an ordinary struct like this:

pub struct Vec2<T> {
    pub x: T,
    pub y: T,
}

would do the exact same thing, and then I can just implement the index operator so it can also act as an array. I have now changed my code to that kind of struct instead of the newtype, simply because I feel that my code is a lot cleaner and more readable that way.
Am I maybe missing some other benefit of the newtype that should make me consider reverting back?
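For what it’s worth, an Index implementation on the plain x/y struct might look something like this (a sketch; the match-based mapping is just one possible choice):

```rust
use std::ops::Index;

pub struct Vec2<T> {
    pub x: T,
    pub y: T,
}

impl<T> Index<usize> for Vec2<T> {
    type Output = T;

    // Map index 0 to x and 1 to y, panicking otherwise, which mirrors
    // the bounds-checked behavior of plain array indexing.
    fn index(&self, i: usize) -> &T {
        match i {
            0 => &self.x,
            1 => &self.y,
            _ => panic!("Vec2 index out of bounds: {}", i),
        }
    }
}
```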

Concerning the bounds checking for the index function, I did consider just clamping the value between 0 and 1, but I opted to allow a run-time panic instead. I figured that for the run-time panic to occur, there must already be bounds-check code inserted by the compiler whenever I use the index operator, and as such I did not want the extra performance hit of possibly doing two bounds checks every time that operator is used.
Or am I missing something about how indexing and panicking works here?

(P.S, I like to have my length_squared functions return slightly larger values than you “Normal” people do! :wink: )


#6

Nope. Newtype is usually used to provide semantic type safety on top of the underlying value.

I think @kornel’s point was that you may get a bounds-check perf penalty as-is, since array indexing is bounds checked by default. But if you pass constants for the index and the compiler inlines, it’ll likely get removed.

I’m not sure it’s worthwhile to implement Index for the Vec2, though, unless you need to abstract over that kind of thing. Now that you’ve gotten rid of the array, I think it’s moot.


#7

Ahh! Right! My bad.
Well that makes sense, and while I will mostly use Vec2.x and Vec2.y, I do sometimes find myself in situations where I want to iterate over vectors with an index, so the index operator does have its uses for me in some cases.


#8

If you use a struct containing x and y then it’ll be impossible to accidentally reference the wrong element and get an index error. Additionally, all three of these definitions for a 2D vector are laid out identically in memory:

struct Vector<T> { 
 pub x: T, 
 pub y: T,
}

type V2<T> = [T; 2];
pub struct Vec2<T>(V2<T>);

// tuple struct with two elements
struct Vector<T>(T, T);

Note that your type V2<T> = [T; 2]; acts more like a C-style typedef. It doesn’t actually create a new type, it just makes the signature easier to type out. For a 2D vector I’d say the middle version looks the least idiomatic. I’d rather have all my mistakes picked up at compile time instead of getting a panic at runtime. Plus, not having the compiler insert bounds checks and extra code to deal with panics should give you better performance (fewer instructions run, better vectorisation, etc).

If you want to generalise the vector over multiple dimensions (e.g. 1D, 2D, 3D, …) then using a statically sized array would make sense.
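On a recent compiler, const generics make that kind of generalization fairly direct; a sketch (the VecN name and its whole API are made up for illustration):

```rust
use std::ops::Add;

// Hypothetical dimension-generic vector backed by a statically sized array.
pub struct VecN<T, const N: usize>([T; N]);

impl<T: Copy + Add<Output = T>, const N: usize> Add for VecN<T, N> {
    type Output = VecN<T, N>;

    // Component-wise addition over all N dimensions.
    fn add(self, right: VecN<T, N>) -> VecN<T, N> {
        let mut out = self.0;
        for i in 0..N {
            out[i] = self.0[i] + right.0[i];
        }
        VecN(out)
    }
}
```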

You’ll also find that in Rust we tend to use a lot less array indexing than other languages.


#9

Consider iterating using an Iterator instead of indices, if possible. You can define something like the following to make it easy:

pub struct Vec2<T> {
    pub x: T,
    pub y: T,
}

impl<T> Vec2<T> {
    fn iter(&self) -> std::iter::Chain<std::iter::Once<&T>, std::iter::Once<&T>> {
        std::iter::once(&self.x).chain(std::iter::once(&self.y))
    }
}

It looks a bit ugly without impl trait. If you’re willing to be on nightly for a bit, you can instead do:

#![feature(conservative_impl_trait)]
...
fn iter<'a>(&'a self) -> impl Iterator<Item=&'a T> {
     std::iter::once(&self.x).chain(std::iter::once(&self.y))
}

Note that without the explicit 'a lifetime the compiler ICEs :frowning:.
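A quick usage sketch of the stable version above (the definitions are repeated so the snippet stands on its own):

```rust
pub struct Vec2<T> {
    pub x: T,
    pub y: T,
}

impl<T> Vec2<T> {
    // Chain two one-element iterators to walk x then y without indexing.
    fn iter(&self) -> std::iter::Chain<std::iter::Once<&T>, std::iter::Once<&T>> {
        std::iter::once(&self.x).chain(std::iter::once(&self.y))
    }
}

fn main() {
    let v = Vec2 { x: 1, y: 2 };
    // Sum the components with no index bounds checks involved.
    let sum: i32 = v.iter().sum();
    assert_eq!(sum, 3);
}
```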


#10

Yes of course!
Implementing an iterator is probably a better way to do it, and then I will not have to deal with the bounds-checking performance hit of the index operator at all!
Thanks for the tips. :slight_smile:


#11

Yup, no bounds checking and no chance of accidentally fat fingering the wrong index and getting a runtime panic :slight_smile:.

The compiler generally handles iterators pretty well. Here’s a contrived example: https://godbolt.org/g/jQFXzo - all that iteration and summation is constant folded.

Interestingly, if you manually iterate and sum up the elements then the compiler doesn’t constant fold - you get an actual loop. I think I vaguely recall a github issue about things like this though.
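The two shapes being compared would look roughly like this (a sketch; the function names are made up, and whether the manual loop gets folded depends on the compiler version):

```rust
// Iterator-style sum: the optimizer typically folds this aggressively.
pub fn sum_iter(xs: &[i32; 4]) -> i32 {
    xs.iter().sum()
}

// Manual indexed loop with explicit (bounds-checked) indexing.
pub fn sum_indexed(xs: &[i32; 4]) -> i32 {
    let mut total = 0;
    for i in 0..4 {
        total += xs[i];
    }
    total
}
```

Both compute the same value; only the generated code may differ.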