Make a type generic over u8 and u16 only?

You are concentrating on optimizing the wrong things. These are trivial low-level details that are usually irrelevant, and while optimizing them you're reaching for unnecessary, complicated solutions (such as your Unsigned enum, traits, etc.) that both complicate the code and probably make it less performant.

Meanwhile, it's not even clear what problem you're actually trying to solve or what kinds of operations you need to support.

I would recommend first working out which operations you need to support and writing a prototype; only then does it make sense to micro-optimize at this level.


I do appreciate you trying to save me time. However, I care about learning more than just about the code required to solve the problem. My original comment about not wanting to duplicate code has led to a good discussion of why duplication is likely inevitable in this case. Mentioning a slight concern about the waste of upcasting to usize just to index allowed Bruecki to point out that this happens anyway - I didn't know that was always the case in the generated assembly.
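
As a concrete illustration of that point (this function is just a sketch, not code from the thread): on typical 64-bit targets a narrow index is zero-extended to pointer width before the address computation whether or not you write the cast yourself, so the conversion itself costs essentially nothing.

fn get(values: &[u32], i: u16) -> u32 {
    // The cast to usize is required by the slice API, but the widening
    // instruction would be emitted for the address computation anyway.
    values[usize::from(i)]
}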

I think the previous posts contain the answer to the generic part. I want to add unsigned integers of different sizes together and index into collections without having to cast or call .into(). It was explained why this can't be done, and, to my own fault, I only mentioned the performance concern, but the bigger issue is in reading the code, not necessarily running it.

Take this for example. It's not real code and won't make sense on its own, but it represents the operations I'm doing, and in real code those casts add up to less readable code. It's only slightly better, visually, when the casts are replaced with .into().

if ranges[indices[selected] as usize].end < indices.len() as u16 {
    ranges[indices[selected] as usize].end += 1;
} else {
    let mut i = ranges[indices[selected] as usize].end as usize;

    ranges[indices[selected] as usize].end += 1;

    while i < indices.len() {
        if ranges[indices[i] as usize].start == 4 {
            some_other_things[indices[i] as usize].0 += 1;
            some_other_things[indices[i] as usize].1 +=
                ranges[indices[i] as usize].start;

            i += 1;
            continue;
        }
        ranges.push(i as u16..indices.len() as u16);
    }
}

I mean none of this with a bad tone; I simply wanted to point out that it's not always about pragmatism and avoiding premature optimization. Sometimes a person just wants to seek out the objectively best approach and learn where, and why, they have to compromise.


The way to do this is to implement Index and IndexMut with the type you want for indexing. For example:

use std::ops::{Index, IndexMut};

struct Thing {
    indices: Vec<u16>,
}

impl Index<u16> for Thing {
    type Output = u16;
    
    fn index(&self, i: u16) -> &u16 {
        &self.indices[usize::from(i)]
    }
}

impl IndexMut<u16> for Thing {
    fn index_mut(&mut self, i: u16) -> &mut u16 {
        &mut self.indices[usize::from(i)]
    }
}

// Now you can index `Thing` with `u16`:

fn f(mut thing: Thing, index: u16) {
    if thing[index] < thing[16] {
        let a = thing[index];
        thing[a] = a;
    }
}
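
Applied to the earlier snippet, if ranges, indices, and some_other_things were each wrapped in a type with a u16 Index/IndexMut impl like the one above, an expression such as ranges[indices[selected] as usize].end could be written as ranges[indices[selected]].end. A rough sketch of one such wrapper (the Ranges name is just a placeholder, not from the original code):

use std::ops::{Index, IndexMut, Range};

struct Ranges(Vec<Range<u16>>);

impl Index<u16> for Ranges {
    type Output = Range<u16>;

    fn index(&self, i: u16) -> &Range<u16> {
        &self.0[usize::from(i)]
    }
}

impl IndexMut<u16> for Ranges {
    fn index_mut(&mut self, i: u16) -> &mut Range<u16> {
        &mut self.0[usize::from(i)]
    }
}

// With `indices` wrapped the same way, the call site loses its casts:
// ranges[indices[selected]].end += 1;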

The reason indices are usize is that an index is an offset added to the slice's base pointer, which is itself pointer-sized. Effectively you are adding two usize values to compute the address of the element, hence the cast to usize.
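
To make that concrete, here is a small illustrative sketch (not the real library code) of what a safe index operation boils down to: a bounds check followed by pointer-offset arithmetic.

fn index_by_hand(slice: &[u32], i: usize) -> u32 {
    // The bounds check that safe indexing performs before the offset.
    assert!(i < slice.len());
    // Base pointer plus (i scaled by the element size): both pointer-sized.
    unsafe { *slice.as_ptr().add(i) }
}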
