Array coercion to slice vs const generics impl

I recently learned that most of the methods I had assumed were implemented on arrays are actually implemented on slices, and that even though arrays coerce to slices, they don't impl Deref. Instead, the coercion appears to happen through a separate built-in language rule specific to arrays.

I am wondering to what degree this is an artifact of history (const generics not originally being available) and whether it makes any sense to revisit the design.

How I imagined it working before reading the docs: there would be a trait abstracting over contiguous blocks of T, call it Contiguous<T>. Both arrays of T and the slice type [T] would impl the trait. The trait would have a size method, which in the array implementation would just return a constant, and in the slice implementation would return the stored length. Most of the method implementations that currently live on slices would be cut and pasted to become default implementations of the trait methods. Roughly like the sketch below.
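
A minimal sketch of that idea, assuming const generics (every name here, including Contiguous itself, is hypothetical rather than an existing API):

```rust
trait Contiguous<T> {
    fn size(&self) -> usize;
    fn get(&self, i: usize) -> Option<&T>;

    // Default bodies play the role of today's slice methods:
    fn first(&self) -> Option<&T> {
        self.get(0)
    }
}

impl<T, const N: usize> Contiguous<T> for [T; N] {
    fn size(&self) -> usize {
        N // a compile-time constant in each monomorphization
    }
    fn get(&self, i: usize) -> Option<&T> {
        self.as_slice().get(i)
    }
}

impl<T> Contiguous<T> for [T] {
    fn size(&self) -> usize {
        self.len() // the length stored in the fat pointer
    }
    fn get(&self, i: usize) -> Option<&T> {
        // Disambiguate: call the inherent slice method, not this trait method.
        <[T]>::get(self, i)
    }
}
```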

What this buys you: since the array implementation's size method returns a constant, it would pretty much always be inlined, so the methods generated for arrays could assume specific constant sizes. If I understand right, in the current implementation the compiler only has a chance to figure out that you're actually dealing with an array of a specific size when a slice method is itself small enough to get inlined into its caller, and only then can it exploit that for optimizations like completely unrolling a loop.
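
To illustrate the status quo (a toy example made up for this post):

```rust
// Today, calling through a slice erases the length at the type level:
// the function receives a fat pointer (data pointer + runtime length).
fn sum_slice(xs: &[u64]) -> u64 {
    xs.iter().sum()
}

fn main() {
    let arr = [1u64, 2, 3, 4];
    // Unless sum_slice gets inlined here, codegen cannot assume len == 4,
    // so there is no length-specialized (e.g. fully unrolled) version.
    println!("{}", sum_slice(&arr));
}
```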

I haven't benchmarked anything, and emitting length-specific versions of methods has the potential to bloat binaries more than the current approach, which only emits element-type-specific versions. But it does seem like some optimization might be left on the table?

Going this route might also let arrays just implement Deref, instead of needing a separate language rule for the coercion. I'm going to guess that there are other unintended effects from doing that, though...?
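
For what it's worth, coherence keeps user code from writing that impl for [T; N] directly, but a newtype shows the shape it would take (ArrayBox is a made-up name):

```rust
use std::ops::Deref;

// Hypothetical stand-in: we can't impl Deref for [T; N] outside std,
// so wrap an array to show what such an impl would look like.
struct ArrayBox<T, const N: usize>([T; N]);

impl<T, const N: usize> Deref for ArrayBox<T, N> {
    type Target = [T];
    fn deref(&self) -> &[T] {
        &self.0 // &[T; N] unsize-coerces to &[T]
    }
}

fn main() {
    let a = ArrayBox([1, 2, 3]);
    assert_eq!(a.len(), 3); // a slice method, reached through Deref
    assert_eq!(a.first(), Some(&1));
}
```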

Perhaps consider that you can't move values out of an array by indexing it. When you call a slice method on an array, the compiler routes the call through a slice. Any time you have a value but only need a borrow, the compiler takes that borrow for you. The mechanism is not so much a cast as a conversion to a borrowed slice "on your behalf".

Secondly, I don't think there is a difference between a borrow of an array and a slice; they are one and the same (perhaps mistakenly on my part, but I read your question as saying you saw a difference somehow).

A Vec, on the other hand, implements Deref. It uses an allocator to create and manage its contiguous memory. As with arrays, most of the methods are implemented on the slice type. Deref is required because a reference to a Vec is not the same thing as a slice. The compiler knows to call deref because at first it can't find the method on Vec, so it derefs and "tries again"; that magic comes with the dot operator. Note that, other than there being no reason to, you could implement the same functions directly on Vec. It would be wasted effort for many reasons.
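
A quick demonstration of that lookup order:

```rust
fn main() {
    let mut v = vec![3, 1, 2];

    // `sort` is defined on [T], not on Vec<T>. Method resolution doesn't
    // find it on Vec, auto-derefs through Deref/DerefMut to &mut [i32],
    // and finds the slice method there.
    v.sort();
    assert_eq!(v, [1, 2, 3]);

    // The same coercion, spelled out explicitly:
    let s: &[i32] = &v;
    assert_eq!(s.len(), 3);
}
```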

CoerceUnsized. (That trait, together with the Unsize marker trait, is the unstable machinery behind the built-in coercion.)

It might be interesting to map out exactly how that trait would need to look to support all the methods. E.g. when return types differ between arrays and slices, you'd need an associated type (see the sketch below). I'm doubtful that it would ever be accepted in std, though -- method calls would be either non-generic or generic (and monomorphized) depending on whether or not the trait was in scope.
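
For instance, reusing the hypothetical Contiguous name from the original post: a method returning an owned copy of the block can't have a single concrete return type, because [T] is unsized:

```rust
trait Contiguous<T> {
    // The owned form differs per implementor: arrays can return themselves
    // by value, but [T] is unsized and has to produce a Vec.
    type Owned;
    fn to_owned_block(&self) -> Self::Owned;
}

impl<T: Clone, const N: usize> Contiguous<T> for [T; N] {
    type Owned = [T; N];
    fn to_owned_block(&self) -> [T; N] {
        self.clone()
    }
}

impl<T: Clone> Contiguous<T> for [T] {
    type Owned = Vec<T>;
    fn to_owned_block(&self) -> Vec<T> {
        self.to_vec()
    }
}
```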

The general coercion from an array to a slice is still going to be desired/useful. You don't always want to be generic over/monomorphize on array length, and when you coerce to a slice, you get not only slice's methods but all of slice's trait implementations.

And, it turns out, things get tricky when the coercions interact and the desired functionality differs. See for example the work around IntoIterator for arrays, which will probably happen over an edition boundary (the core of the incompatibility is sketched after the list):

  • The current tracking issue, which contains the main incompatibility notes and some links to previous attempts
  • An ongoing experiment to allow trait method dispatch to differ between editions to maintain backwards compatibility while still implementing IntoIterator for arrays
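
The core of the incompatibility, roughly (this example is mine, not taken from the linked issues):

```rust
fn main() {
    let arr = [String::from("a"), String::from("b")];

    // In existing code, `arr.into_iter()` auto-references and resolves to
    // the IntoIterator impl for references to arrays/slices, yielding
    // &String. Naively adding `impl IntoIterator for [T; N]` would
    // silently change such calls to yield String by value -- that's the
    // dispatch difference the edition machinery has to paper over.
    for s in arr.into_iter() {
        let _ = s.len(); // compiles whether `s` is String or &String
    }
}
```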

And also the motivation in defining split_array for arrays now, even though the desired behaviour can't be implemented on stable yet (see the sketch below).
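
The sticking point is that the ideal signature needs arithmetic on const generics in type position (something like `[T; N - M]` for the tail), which stable Rust can't express. A stable approximation (split_head is a made-up helper, not the std API) has to leave the tail as a plain slice:

```rust
// Hypothetical helper: the head gets a const-generic length, but the tail
// must stay a slice, because `N - M` in a type isn't expressible on stable.
fn split_head<T, const M: usize>(s: &[T]) -> (&[T; M], &[T]) {
    let (head, tail) = s.split_at(M); // panics if s.len() < M
    (head.try_into().expect("split_at returned M elements"), tail)
}

fn main() {
    let arr = [1, 2, 3, 4, 5];
    let (head, tail) = split_head::<_, 2>(&arr);
    assert_eq!(head, &[1, 2]);
    assert_eq!(tail, &[3, 4, 5][..]);
}
```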

In many cases, constant propagation will still let the known length be used in optimization, as long as the slice method isn't called out of line. Slice methods are often inlined, though, so this tends to work well.

Err, I got this backwards. You'd never get the trait method for slices, due to the inherent method. (The inherent methods can't be dropped, for backwards compatibility.)
