While trying to convert a Vec<u8> to a &[u8], I found that two methods could be used for this purpose: as_ref and as_slice.
These methods do the same thing (convert a Vec<T> to a &[T]), and even share the same implementation (as_ref, as_slice), so why are there two of them?
I understand that as_ref provides the functionality of the AsRef trait, but as_slice doesn't seem to provide anything more than as_ref, so why does it exist?
Is it for historical reasons, for having a more explicit name (using as_ref only when the AsRef trait is needed), or for something else entirely?
AsRef is generic, so it can confuse type inference; in that case the nicer option is to simply use as_slice() (which is not generic), instead of having to annotate AsRef with a turbofish.
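A minimal sketch of that inference problem and both ways around it:

```rust
fn main() {
    let v = vec![1u8, 2, 3];

    // `v.as_ref()` alone is ambiguous here: the compiler cannot tell whether
    // you want `AsRef<[u8]>`, `AsRef<Vec<u8>>`, etc.
    // let s = v.as_ref(); // error: type annotations needed

    // Disambiguate with a turbofish...
    let s = AsRef::<[u8]>::as_ref(&v);
    // ...or just use the non-generic method:
    let t = v.as_slice();
    assert_eq!(s, t);
}
```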
There are even more ways. Way more ways… Assume we have v: Vec<T>
First up, the Deref implementation. It allows you to do &*v to get a &[T]. If you were to bring Deref into scope, you could also call v.deref(), though as far as I'm aware, calling its method manually is not really the intended use-case for the Deref trait.
Second, there’s slicing syntax. AFAIR, that’s the approach that the book teaches. Slicing with a full .. range will efficiently turn Vec<T> into &[T], too. I.e. write &v[..].
Third, there’s also the Borrow trait, which Vec implements just like the AsRef trait; so with that in scope, you could also write v.borrow(), given that type inference doesn’t become unhappy due to the presence of the impl<T> Borrow<T> for T blanket impl.
Fourth, back to dereferencing: dereferencing behind a reference is an implicit coercion, which means that if the target type is known, e.g. because you’re passing the result to a function expecting &[T], you can simply write &v.
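A quick sketch putting all four in one place (the helper function takes_slice is just illustrative):

```rust
use std::borrow::Borrow;
use std::ops::Deref;

// Illustrative helper: any function expecting a slice triggers deref coercion.
fn takes_slice(s: &[i32]) -> usize {
    s.len()
}

fn main() {
    let v: Vec<i32> = vec![1, 2, 3];
    let a: &[i32] = &*v;        // 1. explicit deref through the `Deref` impl
    let b: &[i32] = v.deref();  // ...or calling the `Deref` method directly
    let c: &[i32] = &v[..];     // 2. slicing with a full `..` range
    let d: &[i32] = v.borrow(); // 3. `Borrow<[i32]>`; the annotation guides inference
    let n = takes_slice(&v);    // 4. implicit deref coercion at a known target type
    assert_eq!((a, b, c, d), (a, a, a, a));
    assert_eq!(n, 3);
}
```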
Explicit dereferencing (&*vec) or the conversion method (vec.as_slice()) where needed;
vec.as_ref() never (edit: I mean I would not use it to go explicitly from &Vec<T> to &[T], but only to go from some generic/unknown type &U to &[T]) (see also @CAD97's post on IRLO: "Semantics of AsRef").
Some toying with these:
use std::any::{Any, TypeId};
use std::sync::Arc;

fn main() {
    let v: Vec<i32> = vec![1, 2, 3];
    let b: Arc<Vec<i32>> = Arc::new(vec![4, 5, 6]);

    let _: &[i32] = &v;
    let _: &[i32] = &b;
    let _: &[i32] = v.as_slice();
    let _: &[i32] = b.as_slice();
    let _: &[i32] = v.as_ref();
    let _: &[i32] = b.as_ref(); // works, but through Deref-coercion, see `assert` at end

    let _: &Vec<i32> = &v;
    let _: &Vec<i32> = &b;
    //let _: &Vec<i32> = &v.as_slice(); // fails as expected
    //let _: &Vec<i32> = &b.as_slice(); // fails as expected
    let _: &Vec<i32> = &v.as_ref(); // works because `Vec` has (also) a reflexive `AsRef` impl
    let _: &Vec<i32> = &b.as_ref(); // works (with or without fixed #45742)

    //let _ = v.as_ref(); // fails: ambiguous, because `Vec` also has a reflexive `AsRef` impl
    let x = b.as_ref(); // works due to unfixed #45742, but gives a `&Vec<i32>`
    assert_eq!(x.type_id(), TypeId::of::<Vec<i32>>());
}
AsRef auto-dereferences if the inner type is a reference or a mutable reference (e.g.: foo.as_ref() will work the same if foo has type &mut Foo or &&mut Foo).
Note that due to historic reasons, the above currently does not hold generally for all dereferenceable types, e.g. foo.as_ref() will not work the same as Box::new(foo).as_ref(). Instead, many smart pointers provide an as_ref implementation which simply returns a reference to the pointed-to value (but do not perform a cheap reference-to-reference conversion for that value). However, AsRef::as_ref should not be used for the sole purpose of dereferencing; instead ‘Deref coercion’ can be used:
let x = Box::new(5i32);
// Avoid this:
// let y: &i32 = x.as_ref();
// Better just write:
let y: &i32 = &x;
I would use AsRef only for generic programming. (And even then I would think twice, as it's somewhat flawed, see my link to IRLO above.)
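As a sketch of that generic use-case (the function name sum_all is made up for illustration):

```rust
// Accept anything that can be viewed as a slice of i32:
// Vec<i32>, &Vec<i32>, [i32; N], &[i32], ...
fn sum_all(data: impl AsRef<[i32]>) -> i32 {
    data.as_ref().iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let a = [4, 5, 6];
    assert_eq!(sum_all(&v), 6);
    assert_eq!(sum_all(a), 15);
    assert_eq!(sum_all(&v[..]), 6);
}
```

Outside of such generic bounds, as_slice() (or plain deref coercion) states the intent more clearly.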