There are also .as_slice() and &vec[..] (and explicit use of the Deref trait, for which &* is sugar).
Of all those, .as_slice() would technically be the most inherent way of performing the conversion.
A Vec<T> is conceptually very similar to a Box<[T]> (owned pointer to a heap-allocated slice of elements). From there, it gets the "smart pointer" (to [T]) API and ergonomics:
Mainly from Deref: Box<Pointee> : Deref<Target = Pointee>, so &* on that Box yields a &Pointee. In the case of Pointee = [T], and back to a Vec, this gives us Deref::deref(&vec), or the sugary &**(&vec) (or &*vec once simplified), all yielding a &[T].
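A minimal sketch of those three spellings side by side (v here is just a throwaway example vector):

```rust
use std::ops::Deref;

fn main() {
    let v: Vec<i32> = vec![1, 2, 3];

    // All three of these yield a `&[i32]`:
    let a: &[i32] = Deref::deref(&v); // the fully explicit call
    let b: &[i32] = &**(&v);          // the sugary form mentioned above
    let c: &[i32] = &*v;              // the usual simplified form

    assert_eq!(a, b);
    assert_eq!(b, c);
}
```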
For the sake of consistency / non-surprising behavior: since &(*vec)[0] already works thanks to the above Deref impl (the full syntax being &(&*vec)[0]), Vec gets Index implementations so as to avoid that noisy internal parenthesized *: &vec[0]. For the case where the index is not a usize but a RangeFull (..), the contract on slices dictates that &slice[..] be a "full indexing" operation, (re)yielding the slice as a whole. From all this stems that &vec[..] is a full-range indexing operation on the slice contents the Vec points to, hence that other way to obtain a slice.
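To make both forms of indexing concrete (again just a sketch with a throwaway vector):

```rust
fn main() {
    let v: Vec<u8> = vec![10, 20, 30];

    // `usize` index: sugar for `&(&*v)[0]`, via the `Index` impl.
    let first: &u8 = &v[0];

    // `RangeFull` index: "full indexing", (re)yielding the whole slice.
    let all: &[u8] = &v[..];

    assert_eq!(*first, 10);
    assert_eq!(all, &[10, 20, 30]);
}
```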
Then, some generic API may require types which don't necessarily have to be a (reference to a) slice &[T], but instead types which can be viewed as such: AsRef<[T]>. For instance, a Vec<u8>, a Box<[u8]>, a String, and a &str can all be viewed as a reference to their underlying byte contents, and they all are AsRef<[u8]>.
This could be useful for, say, printing such byte contents:
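Something along these lines, say (print_bytes is a made-up name for illustration):

```rust
fn print_bytes(bytes: impl AsRef<[u8]>) {
    for byte in bytes.as_ref() {
        print!("{:02x} ", byte);
    }
    println!();
}

fn main() {
    // All of these are `AsRef<[u8]>`:
    print_bytes(vec![0xde_u8, 0xad]);                    // Vec<u8>
    print_bytes(String::from("hi"));                     // String
    print_bytes("hi");                                   // &str
    print_bytes(vec![0xbe_u8, 0xef].into_boxed_slice()); // Box<[u8]>
}
```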
Hence why Vec<T> : AsRef<[T]>. But such API / functionality is there for compatibility with generic APIs; it's not really intended for direct use. That is, I would find it a bit odd to see let slice: &[u8] = vec.as_ref();, precisely because there exist less generic and/or less noisy syntaxes to achieve this.
The same regarding AsRef applies to the Borrow trait, which is a very similar trait, but for expressing some extra properties about its implementors. See its docs for more info.
So, in a non-generic context, you have, in my personal order of preference:
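Reconstructing that list from the discussion above (the ordering here is my reading of it):

```rust
fn main() {
    let vec: Vec<u8> = vec![1, 2, 3];

    let s1: &[u8] = vec.as_slice(); // the most "inherent" conversion
    let s2: &[u8] = &vec[..];       // full-range indexing
    let s3: &[u8] = &*vec;          // explicit Deref

    assert!(s1 == s2 && s2 == s3);
}
```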
There's also often the option to use just &vec instead of &*vec; this would use implicit (deref-)coercion of &Vec<T> to &[T], which can happen whenever there's clear enough type information specifying that a &[T] is what is needed. (E.g. a let slice: &[u8] = &vec; with an explicit type, or passing it to a function/method call foo(&vec) whose signature is fn foo(slice: &[T]).)
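A small sketch of both coercion sites (foo is a hypothetical function):

```rust
fn foo(slice: &[u8]) {
    println!("{} bytes", slice.len());
}

fn main() {
    let vec: Vec<u8> = vec![1, 2, 3];

    // The explicit type annotation drives the coercion:
    let slice: &[u8] = &vec;

    // So does the parameter type at the call site:
    foo(&vec);
    foo(slice);
}
```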
There are several conversion traits in the standard library which each have their own uses, although their use may not be evident until you need to work with generics.
In your particular example we have:
- the AsRef trait is a cheap reference-to-reference conversion, typically used when your type wraps or can be interpreted as another (e.g. str: AsRef<Path> means strings can be used as paths)
- the Borrow trait lets you borrow a value as something else; the only place I've really seen it used is with HashMap, so you can use a &str when looking up values in a HashMap<String, _> (see the sketch after this list)
- a Vec<T> is also a smart pointer, so it implements the Deref trait
- you've also got the non-generic as_slice() method, and can leverage the fact that indexing with an open range (..) will give you a reference to the full slice
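A hedged sketch tying those together (the names and values are illustrative):

```rust
use std::collections::HashMap;

fn main() {
    // Borrow in action: a HashMap<String, _> lookup with a &str key,
    // which works because String: Borrow<str>.
    let mut map: HashMap<String, u32> = HashMap::new();
    map.insert(String::from("answer"), 42);
    assert_eq!(map.get("answer"), Some(&42));

    let vec: Vec<u8> = vec![1, 2, 3];

    // AsRef: the generic "view as" conversion.
    let a: &[u8] = vec.as_ref();

    // Deref: Vec<T> is a smart pointer to [T].
    let d: &[u8] = &*vec;

    // The non-generic method, and full-range indexing.
    let s = vec.as_slice();
    let r = &vec[..];

    assert!(a == d && d == s && s == r);
}
```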
You can't. To get &[T], you must somewhere have [T] - for example, in the heap storage backing the Vec<T>. But if you have Vec<&T>, then in general you have only [&T].
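To make that concrete (a minimal sketch):

```rust
fn main() {
    let x = 1;
    let y = 2;
    let v: Vec<&i32> = vec![&x, &y];

    // The backing storage holds `&i32`s, so borrowing it yields a
    // `&[&i32]`, not a `&[i32]`:
    let s: &[&i32] = &v[..];
    assert_eq!(*s[0], 1);
}
```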
No, it doesn't have to do with optimizations. As others have explained, it is needed for supporting various use cases in generic code.
No. First of all, there's no way to directly (without copying) convert a Vec<&T> to a &[T], because the two have totally different memory layouts. Second, all the aforementioned approaches (.as_ref(), dereferencing, &vec[..], .borrow()) are exactly equivalent and do precisely the same thing.
Thanks, my IDE wasn't syntax-highlighting it, so I trusted it. You're right, whoops!
And about my "intuition": I read somewhere that Rust's strong typing is what gives LLVM a competitive edge in optimization (in some cases), so I consider support for generic code a facet of that strong typing. In other words, the compiler has a chance to optimize because it knows the types.