Idiomatic way to write a function that takes a slice of known length

Let's say I have a function that needs a slice of exactly three i32 values. Would I write it like this:

fn foo(values: &[i32; 3]) { /* … */ }

Or would I write it like this:

fn foo(values: &[i32]) { assert_eq!(values.len(), 3); /* … */ }

The second way causes a little bit of runtime overhead and won't catch certain errors at compile time.

However, the first way can make passing slices of length 3 (where the length is only known at run time) awkward:

#![feature(array_chunks)]

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    let vec: Vec<i32> = vec![5, 10, 100];
    let slice: &[i32] = &vec;
    // Neither of these looks very nice:
    foo(<&[i32; 3]>::try_from(slice).unwrap());
    foo(slice.array_chunks().next().unwrap());

    let longer: Vec<i32> = vec![10, 20, 50, 100, 200, 500, 1000];
    // foo(&longer[1..4]); // won't work
    // Again we go for something ugly here:
    foo(<&[i32; 3]>::try_from(&longer[1..4]).unwrap());
    foo(longer[1..4].array_chunks().next().unwrap());
}

(Playground) Edit: replaced &< with <&.

I feel like in the past it was common to do length checks at run time in Rust. But maybe with const generics this is changing? What's the usual way to go?

Edit: Also note that try_into or try_from may allow additional conversions that might be unwanted.

In this particular case, fn foo(values: [i32; 3]) should be good enough.
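
For completeness, a sketch of that by-value variant (assuming the same data as in the question; [i32; 3] is Copy, so passing it by value is cheap):

fn foo(values: [i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    foo([5, 10, 100]); // array literals just work
    let vec: Vec<i32> = vec![5, 10, 100];
    // With a Vec you still need a fallible conversion, but the call itself
    // takes the array by value.
    foo(vec.as_slice().try_into().unwrap());
}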

In most cases, I'd say that slice lengths are not particularly "inferrable" at compile time, even with const generics.

Note, however, that &longer[1..4] can't be passed because it is of the wrong type:

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    let longer: Vec<i32> = vec![10, 20, 50, 100, 200, 500, 1000];
    foo(&longer[1..4]); // won't work
}

(Playground)

Error:

error[E0308]: mismatched types
 --> src/main.rs:7:9
  |
7 |     foo(&longer[1..4]); // won't work
  |     --- ^^^^^^^^^^^^^ expected array `[i32; 3]`, found slice `[i32]`
  |     |
  |     arguments to this function are incorrect
  |
  = note: expected reference `&[i32; 3]`
             found reference `&[i32]`
note: function defined here
 --> src/main.rs:1:4
  |
1 | fn foo(values: &[i32; 3]) {
  |    ^^^ -----------------

For more information about this error, try `rustc --explain E0308`.

I guess that's because there is no "sized coercion" as an opposite of unsized coercions.
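
For contrast, a small illustration: the unsized direction happens implicitly, while getting back to the array type needs an explicit, fallible conversion (this assumes edition 2021, where TryInto is in the prelude).

fn main() {
    let arr: &[i32; 3] = &[1, 2, 3];
    // Unsized coercion: &[i32; 3] -> &[i32] is implicit...
    let slice: &[i32] = arr;
    // ...but there is no coercion in the other direction; it has to go
    // through TryFrom/TryInto, which checks the length at run time.
    let back: &[i32; 3] = slice.try_into().unwrap();
    println!("{:?} {:?}", slice, back);
}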


This can be slightly improved with TryInto.

foo(slice.try_into().unwrap());

Yes, you're right. I was suggesting changing the entire manner of handling from vectors/slices to arrays. But often you can't choose: you have either a vector or an array. In that case, passing a slice and doing an assert is the more general option (i.e., it can handle both cases).
Regarding the runtime cost, I wonder whether it amounts to much (or to anything at all). If you look at Godbolt:

example::sum:
        push    rax
        cmp     rsi, 3
        jne     .LBB0_1
        mov     eax, dword ptr [rdi + 4]
        add     eax, dword ptr [rdi]
        add     eax, dword ptr [rdi + 8]
        pop     rcx
        ret
.LBB0_1:
        lea     rdi, [rip + .L__unnamed_1]
        lea     rdx, [rip + .L__unnamed_2]
        mov     esi, 33
        call    qword ptr [rip + core::panicking::panic@GOTPCREL]
        ud2

Given how well modern branch predictors work, it might be just one extra compare.
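
For reference, a function along these lines (a guess at the compiled source, which isn't shown here) produces essentially that assembly; the length check is just the cmp/jne pair in front of the three loads:

// Assumed source: take a slice, assert its length, sum the three elements.
// Exact panic machinery and instruction order depend on the compiler version.
pub fn sum(values: &[i32]) -> i32 {
    assert!(values.len() == 3);
    values[0] + values[1] + values[2]
}

fn main() {
    println!("{}", sum(&[5, 10, 100]));
}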

Are you asking this out of curiosity, or is this cropping up as a problem somewhere? That'd definitely be interesting to know.

Yeah, I wanted to make it clearer which conversion is done (and not let the compiler pick just any conversion). But even with try_from, the problem is that any convertible type will be accepted there :slightly_frowning_face:.

struct Dummy;

impl<'a> TryFrom<Dummy> for &'a [i32; 3] {
    type Error = ();
    fn try_from(_: Dummy) -> Result<&'a [i32; 3], Self::Error> {
        Ok(&[1, 2, 3])
    }
}

struct Gummy;

impl<'a> TryFrom<Gummy> for &'a [i32; 3] {
    type Error = ();
    fn try_from(_: Gummy) -> Result<&'a [i32; 3], Self::Error> {
        Err(())
    }
}

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    let longer: Vec<i32> = vec![10, 20, 50, 100, 200, 500, 1000];
    foo((&longer[1..4]).try_into().unwrap());
    foo(Dummy.try_into().unwrap()); // will do a "real" conversion here, which might not be wanted
    foo(Gummy.try_into().unwrap()); // will try a "real" conversion here, which might not be wanted
}

(Playground)

I'm not really concerned about the runtime overhead. I believe it's very small. I'm more worried about code readability and clarity of the interface.

Both ways seem to have disadvantages:

  • Working with references to arrays requires unwieldy conversions, as many functions return slices.
  • Working with references to slices will not catch length mismatches at compile time.

My use-case is an SDR (software-defined radio), where I pass blocks of a certain size from one part of the program to another (usually references to slices/arrays of Complex<f32>). It's still a work in progress, and I'm trying to figure out the best way to use the type system.


In my current code, I don't actually use slices but a smart pointer to a Vec, which allows the Vec to be reused once it's no longer needed:

use std::mem::take;
use std::ops::Deref;
use std::sync::Arc;
// Assumption: Tokio's unbounded channel, which matches the UnboundedSender::send used below.
use tokio::sync::mpsc;

#[derive(Clone, Debug)]
pub struct Chunk<T> {
    buffer: Arc<Vec<T>>,
    start: usize,
    end: usize,
    recycler: mpsc::UnboundedSender<Vec<T>>,
}

impl<T> Chunk<T> {
    fn new(buffer: Vec<T>, recycler: mpsc::UnboundedSender<Vec<T>>) -> Self {
        let len = buffer.len();
        Chunk {
            buffer: Arc::new(buffer),
            start: 0,
            end: len,
            recycler,
        }
    }
    pub fn discard_beginning(&mut self, len: usize) {
        assert!(len <= self.end - self.start, "length exceeded");
        self.start += len;
    }
    pub fn separate_beginning(&mut self, len: usize) -> Self {
        assert!(len <= self.end - self.start, "length exceeded");
        let (start, end) = (self.start, self.start + len);
        self.start = end;
        Chunk {
            buffer: self.buffer.clone(),
            start,
            end,
            recycler: self.recycler.clone(),
        }
    }
}

impl<T> Drop for Chunk<T> {
    fn drop(&mut self) {
        if let Ok(buffer) = Arc::try_unwrap(take(&mut self.buffer)) {
            self.recycler.send(buffer).ok();
        }
    }
}

impl<T> Deref for Chunk<T> {
    type Target = [T];
    fn deref(&self) -> &Self::Target {
        &self.buffer[self.start..self.end]
    }
}

I could use const generics here to give the Chunk a size known at compile time, but I'm not sure whether that would be the usual way to go. (Plus it would not allow me to use dynamic chunk sizes, which might be handy in some cases.)
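
For illustration, a minimal sketch of what a const-generic variant could look like (my own assumption, not code from the project): the length moves into the type, callers get &[T; N] for free via Deref, and dynamic chunk sizes are no longer possible.

use std::ops::Deref;
use std::sync::Arc;

// Hypothetical fixed-size chunk: the buffer length is a const generic parameter.
pub struct FixedChunk<T, const N: usize> {
    buffer: Arc<[T; N]>,
}

impl<T, const N: usize> FixedChunk<T, N> {
    pub fn new(buffer: [T; N]) -> Self {
        FixedChunk { buffer: Arc::new(buffer) }
    }
}

impl<T, const N: usize> Deref for FixedChunk<T, N> {
    type Target = [T; N];
    fn deref(&self) -> &Self::Target {
        &self.buffer
    }
}

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    let chunk = FixedChunk::new([5, 10, 100]);
    foo(&chunk); // deref coercion yields &[i32; 3] directly, no try_into needed
}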

If that's the case then I'd just pass a slice and add an assert_eq!(values.len(), 3) at the top.

If this crops up often, it might be a good idea to add a special getter to your Chunk which does the try_into() stuff automatically (e.g. fn get(&self, start: usize) -> Option<&[T; 3]>). Keep in mind that indexing a slice with an out-of-bounds range will panic, so it wouldn't be much different from a chunk.get(42).unwrap() in practice.
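
A standalone sketch of that idea (the name and exact signature are made up; on Chunk it would be a method, and since Chunk derefs to [T] the same code works there too):

/// Returns the three elements starting at `start`, or None if they don't fit.
fn window3<T>(data: &[T], start: usize) -> Option<&[T; 3]> {
    data.get(start..start.checked_add(3)?)?.try_into().ok()
}

fn main() {
    let v = vec![10, 20, 50, 100, 200];
    assert_eq!(window3(&v, 1), Some(&[20, 50, 100]));
    assert!(window3(&v, 3).is_none()); // only two elements left from index 3
}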

My advice would be to think about how fundamental the size is to the problem domain, and only use the array version if arbitrary lengths make no sense.

For example, representing an RGB colour as Vec<u8> would just be weird, so taking it as [u8; 3] instead seems way more reasonable. If someone has a bunch of colours in a vector, asking them to chunk it up into real arrays before calling you sounds very reasonable to me.
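
For instance, chunking a flat Vec<u8> of pixel data into real [u8; 3] colours is short on stable (a sketch using chunks_exact plus the TryFrom impl for array references):

fn paint(rgb: &[u8; 3]) {
    println!("#{:02x}{:02x}{:02x}", rgb[0], rgb[1], rgb[2]);
}

fn main() {
    let pixels: Vec<u8> = vec![255, 0, 0, 0, 255, 0, 0, 0, 255];
    // chunks_exact(3) yields &[u8] slices of length 3; try_into turns each
    // one into &[u8; 3], and the unwrap can never fail here.
    for chunk in pixels.chunks_exact(3) {
        let colour: &[u8; 3] = chunk.try_into().unwrap();
        paint(colour);
    }
}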

Whereas if you're doing something like an FFT, it's reasonably defined for various lengths, so I might write it taking a slice even if I don't support every possible length (right now).

What's the right answer for SDR? I have no idea; I know basically nothing about the problem domain.


I think that is a good approach to the (general) problem.

I think using slices will cause the least pain. I have a lot of places in my code where the length of the chunk really doesn't matter, so using arrays would require conversions in a lot of places, I guess.

Going back to the general question (unrelated to the use-case), I still find it a bit sad that foo(&longer[1..4]) from the example above won't compile.

But maybe there is no easy, non-confusing way to make it work.

The reason it's the way it is right now is that you're thinking about the values, but the compiler works in terms of types.

1..3 and 1..4 both have type Range<usize>, so they both return the same type from indexing: &[_]. That's handy in that it means things don't stop compiling if you move the range to be a parameter, for example. You'll see this in various places in Rust, like how if false still gets its block checked, since the compiler just looks at its boolness, and similarly in the difference between loop { … } and while true { … }. This is handy for debugging -- you don't get drastically different behaviour just because you made something a literal for a bit.
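
A small illustration of that type-level view: both indexing expressions below have the same type, regardless of the literal bounds.

fn main() {
    let v = vec![1, 2, 3, 4, 5];
    // The compiler only sees Range<usize> here, so indexing yields &[i32]
    // in both cases; the length can't end up in the type even though the
    // bounds happen to be literals.
    let a: &[i32] = &v[1..3];
    let b: &[i32] = &v[1..4];
    println!("{} {}", a.len(), b.len());
}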

But it's certainly true that there's a need for something nicer here. What that should look like is still an open question, though, I think. For example, maybe instead of s[1..4] it's s[1..].first_chunk::<3>() (https://github.com/rust-lang/rust/pull/95198), where it's a const generic parameter -- and thus in the type system -- instead of a runtime value.
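
Until something along those lines exists, a user-side approximation with const generics is already possible on stable (a sketch of my own, not the API from that PR):

/// Hypothetical helper: the prefix length is a const generic parameter,
/// so it lives in the type system rather than in a runtime range.
fn first_chunk_of<T, const N: usize>(s: &[T]) -> Option<&[T; N]> {
    s.get(..N)?.try_into().ok()
}

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    let longer: Vec<i32> = vec![10, 20, 50, 100, 200, 500, 1000];
    // N = 3 is inferred from foo's signature.
    foo(first_chunk_of(&longer[1..]).unwrap());
}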


In cases where I want both a stricter type and readability (especially in tests), I usually define tiny "ensure or panic" functions.
This is analogous to slice.try_into()?, but ensure_arr3(slice) escapes from the current context not by returning Err but by panicking.
This doesn't change the program's semantics much, and I believe the performance overhead is quite small.

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

/// Ensures the length of the given slice is 3.
///
/// # Panics
///
/// Panics if the length of the slice is not 3.
#[inline]
#[must_use]
fn ensure_arr3(v: &[i32]) -> &[i32; 3] {
    v.try_into().unwrap_or_else(|_| {
        panic!("expected slice with just 3 elements, but got {v:?}");
    })
}

fn main() {
    let vec = vec![10_i32, 20, 50, 100, 200, 500, 1000];
    foo(ensure_arr3(&vec[1..4]));
}
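
A const-generic variation of the same idea (my own sketch, not from the post above), so one helper covers any expected length:

/// Ensures the length of the given slice is N, panicking otherwise.
#[track_caller]
fn ensure_arr<T: std::fmt::Debug, const N: usize>(v: &[T]) -> &[T; N] {
    v.try_into().unwrap_or_else(|_| {
        panic!("expected slice with {} elements, but got {}: {v:?}", N, v.len());
    })
}

fn foo(values: &[i32; 3]) {
    println!("{}/{}/{}", values[0], values[1], values[2]);
}

fn main() {
    let vec = vec![10_i32, 20, 50, 100, 200, 500, 1000];
    foo(ensure_arr(&vec[1..4])); // N = 3 inferred from foo's signature
}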
