First, let's address the overall question:
Mostly. In particular, given the following functions:
```rust
fn a(x: &Vec<u32>) { ... }
fn b(x: &[u32]) { ... }
```
You should prefer `b` over `a`. `a` requires that you pass it a `Vec`, whereas `b` can be used with many more types that are able to coerce to or produce a `&[u32]`.
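As a quick sketch of what that buys you (the body of `b` here is just illustrative):

```rust
fn b(x: &[u32]) {
    println!("first element: {:?}", x.first());
}

fn main() {
    let v: Vec<u32> = vec![1, 2, 3];
    let arr: [u32; 3] = [4, 5, 6];

    b(&v);      // &Vec<u32> coerces to &[u32]
    b(&arr);    // &[u32; 3] coerces to &[u32]
    b(&v[1..]); // a sub-slice of a Vec is already a &[u32]
}
```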
That said, there are rare occasions where you might want to use `a`, but that would only be because you need to access some information that `b` doesn't provide. The only one I can think of is if you care about the capacity of the underlying `Vec`. If you don't care about the capacity, I wouldn't bother with `a`.
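A minimal sketch of that capacity case (the function name `report` is made up):

```rust
// capacity() lives on Vec; a slice only knows its pointer and length.
fn report(x: &Vec<u32>) {
    println!("len: {}, capacity: {}", x.len(), x.capacity());
}

fn main() {
    let mut v: Vec<u32> = Vec::with_capacity(10);
    v.push(1);
    report(&v); // prints "len: 1, capacity: 10"
}
```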
There's no "check" going on. What does happen are pointer dereferences. Using `&Vec<_>` instead of `&[_]` does involve an extra pointer indirection. This can be slower, but how much slower depends on the exact code you're writing, the machine you're running it on, and how well the compiler does at optimising your code.
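If you want to see the representation difference behind that extra hop, here's a quick sketch (the sizes in the comments assume a typical 64-bit target):

```rust
use std::mem::size_of;

fn main() {
    // A &Vec<u32> is a thin pointer to the Vec struct (ptr, len, capacity),
    // so reaching the elements takes two hops.
    println!("&Vec<u32>: {} bytes", size_of::<&Vec<u32>>()); // 8

    // A &[u32] is a fat pointer (data pointer + length),
    // so the elements are one hop away.
    println!("&[u32]:    {} bytes", size_of::<&[u32]>()); // 16
}
```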
I wouldn't be worried about the overhead, unless you have done performance profiling that indicates it is a problem.
However, there's generally no reason to use `&Vec<_>` over `&[_]` anyway, so it's a bit of a moot point.
One more point to make:
It's worth noting that the following works:
```rust
fn a() {
    let v: Vec<u32> = vec![1, 2, 3];
    b(&v);
}

fn b(x: &[u32]) {
    println!("x[0]: {}", x[0]);
}
```
Note that you don't have to explicitly say `&v[..]`. The compiler knows it can convert a `&Vec<T>` into a `&[T]` automatically, so it does just that.
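That means all of these calls are equivalent, so you can pick whichever reads best:

```rust
fn main() {
    let v: Vec<u32> = vec![1, 2, 3];

    b(&v);           // deref coercion from &Vec<u32>
    b(&v[..]);       // explicit full-range slice
    b(v.as_slice()); // explicit conversion method
}

fn b(x: &[u32]) {
    println!("x[0]: {}", x[0]);
}
```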