I'm passing a b"..." literal to a function that expects a &[u8], and in some contexts the compiler automatically performs that conversion while in other contexts it does not. Is there a language semantic at play here, is it a limitation of type inference, or something else altogether?
The conversion from &[u8; N] to &[u8] is called Deref coercion, and inserting these coercions actually happens before full type inference runs, because type inference isn't really able to insert type conversions into the code.
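For reference, here's what Deref coercion itself looks like in a simpler case (the names here are mine, not from the thread): &String coerces to &str at a call site because String implements Deref<Target = str> and the parameter type is known.

```rust
// Hypothetical helper expecting a &str.
fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let owned = String::from("abc");
    // &String -> &str happens automatically here, because the
    // expected parameter type &str is known at the call site.
    let n = takes_str(&owned);
    assert_eq!(n, 3);
}
```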
I suppose that gives me a follow up question: What rule of deref coercion results in Data::new(b"abc") being ok, but Data::new_opt(b"abc").unwrap() being a compiler error?
This one isn't deref coercion, because arrays don't implement Deref. It's the built-in unsizing coercion from [T; N] to [T]. But otherwise yes: coercions are applied before type inference finishes, so if an expression doesn't have an explicit target type, it won't be coerced.
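A small illustration of that array-to-slice coercion only firing where a target type is in view (the helper name is mine, not from the thread):

```rust
// Hypothetical helper that expects a slice.
fn takes_slice(s: &[u8]) -> usize {
    s.len()
}

fn main() {
    // Coercion site: the annotation supplies the target type, so the
    // &[u8; 3] literal unsizes to &[u8] here.
    let sl: &[u8] = b"abc";
    assert_eq!(takes_slice(sl), 3);

    // No target type: `arr` keeps its precise type, &[u8; 3].
    let arr = b"abc";
    // A later call is still a coercion site, because the parameter
    // type is explicitly &[u8]:
    assert_eq!(takes_slice(arr), 3);
}
```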
I believe the f forces it to Data<&[u8]> early on in the new case, and then the array can be coerced, but that link is broken by the .unwrap() in the new_opt case. (Whereas new_slice_opt always has that type.)
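Since the original definitions aren't shown, here's a guessed-at reconstruction (Data, new, new_opt, and the consuming function f are all my assumptions) that reproduces the behavior being described:

```rust
struct Data<T> {
    f: T,
}

impl<T> Data<T> {
    fn new(f: T) -> Data<T> {
        Data { f }
    }
    fn new_opt(f: T) -> Option<Data<T>> {
        Some(Data { f })
    }
}

// Hypothetical consumer that fixes the type to Data<&[u8]>.
fn f(d: Data<&[u8]>) -> usize {
    d.f.len()
}

fn main() {
    // OK: f's parameter type pins T = &[u8] while new's argument is
    // checked, so b"abc" (&[u8; 3]) is coerced at the call site.
    assert_eq!(f(Data::new(b"abc")), 3);

    // Error: inside new_opt(..) the argument fixes T = &[u8; 3]
    // before .unwrap()'s result reaches f, so there is no coercion
    // site left and the types mismatch:
    // f(Data::new_opt(b"abc").unwrap()); // mismatched types
}
```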
That makes sense and is consistent with all the different permutations' behaviors.
So in that case type inference is an essential component of the behavior. Is this a language-level limitation of the inference, or an implementation detail? Is it worth filing a bug suggesting an improvement?
I mean, inference is for determining what the types of things are, and it fundamentally doesn't support cases where the same variable has different types in different places. And if you always inserted the conversion, with the option of it resulting in the same type, inference would fail, because there are lots of cases involving generics where both coercing and not coercing would type-check. This is similar to the question-mark operator sometimes having a hard time figuring out what the target error type is, because that operator always inserts an error conversion.
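To illustrate that last point: ? desugars (roughly) to return Err(From::from(e)) on the error path, so the target error type has to come from the function signature for inference to pick a From impl. A minimal sketch (AppError and parse are my names, not from the thread):

```rust
#[derive(Debug)]
struct AppError(String);

// The conversion ? will insert on the error path.
impl From<std::num::ParseIntError> for AppError {
    fn from(e: std::num::ParseIntError) -> Self {
        AppError(e.to_string())
    }
}

// The signature pins the target error type to AppError; without it,
// inference couldn't tell which From impl the ? should use.
fn parse(s: &str) -> Result<i32, AppError> {
    let n: i32 = s.parse()?; // ParseIntError -> AppError via From
    Ok(n)
}

fn main() {
    assert_eq!(parse("42").unwrap(), 42);
    assert!(parse("nope").is_err());
}
```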