Best practices: Generic inputs vs generic outputs


There are two ways to do generic programming in Rust, and they live at odds with each other:

  • You can take generic inputs. E.g. fn foo<T: Trait>(value: T). You see this in e.g. the Command builder API, and Path methods…
  • You can have generic outputs. E.g. fn foo<T: Trait>(value: i32) -> T. You see this in Iterator::collect, rand, as_ref, and into.

Wherever the two meet, you have type inference problems. Right now, I’m looking at a function in my code that looks like this:

impl<M> Structure<M> {
    /// Store new metadata in-place.
    pub fn set_metadata<Ms>(&mut self, meta: Ms)
    where Ms: IntoIterator<Item=M>,
    {
        let old = self.meta.len();
        self.meta = meta.into_iter().collect();
        assert_eq!(self.meta.len(), old);
    }
}

which I am currently considering replacing with:

impl<M> Structure<M> {
    /// Store new metadata in-place.
    pub fn set_metadata(&mut self, meta: Vec<M>) {
        assert_eq!(meta.len(), self.meta.len());
        self.meta = meta;
    }
}


  • It avoids an O(n) memmove if I already have a Vec when calling it.
  • It is highly unlikely that I will ever have any other kind of container type.
  • If I have an iterator, I can easily write .collect().
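To make that last point concrete, here is a minimal sketch of the Vec-taking version at its call sites (Structure here is a simplified stand-in modeled on the post, not the real type):

```rust
struct Structure<M> { meta: Vec<M> }

impl<M> Structure<M> {
    pub fn set_metadata(&mut self, meta: Vec<M>) {
        assert_eq!(meta.len(), self.meta.len());
        self.meta = meta;
    }
}

fn main() {
    let mut s = Structure { meta: vec![0u32, 0, 0] };
    s.set_metadata(vec![1, 2, 3]);     // already have a Vec: no extra copy
    s.set_metadata((4..7).collect());  // have an iterator: just .collect()
    assert_eq!(s.meta, vec![4, 5, 6]);
}
```

Note that the .collect() call needs no turbofish: the concrete Vec<M> parameter tells inference exactly what to collect into.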

In another example, I already very deliberately take a Vec:

impl Coords {
    pub fn with_carts(mut self, carts: Vec<V3>) -> Self {
        // ...
    }
}

In this case, I honestly contest that a more permissive signature would actually make the function easier to call. For instance, I might have carts: &[f64]. With the above signature, I can write coords.with_carts(carts.nest().to_vec()) without any type annotations (where nest() is a function generic in its output). If I made my input type more permissive, I’d have to add turbofishes or type annotations to get that to compile.
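To see why the concrete signature helps here, consider this hypothetical sketch (V3 and this nest are stand-ins for the post's real types, assuming V3 = [f64; 3]): the concrete Vec<V3> parameter pins down the output type of a generic-output helper, so no annotation is needed.

```rust
type V3 = [f64; 3];

// Hypothetical stand-in for a `nest()`-style function that is generic
// in its output type:
fn nest<T: From<[f64; 3]>>(flat: &[f64]) -> Vec<T> {
    flat.chunks_exact(3)
        .map(|c| T::from([c[0], c[1], c[2]]))
        .collect()
}

fn with_carts(carts: Vec<V3>) -> usize {
    carts.len()
}

fn main() {
    let flat = [0.0_f64; 6];
    // The concrete Vec<V3> parameter tells inference what `nest` must
    // produce; no turbofish required:
    assert_eq!(with_carts(nest(&flat)), 2);
}
```

If with_carts instead took impl IntoIterator<Item = V3>, the call above would leave nest's output type unconstrained and fail to compile without annotations.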

The C-GENERIC guideline recommends that functions take generic inputs. This seems like an obvious recommendation in most languages (where type inference is not a thing), and I imagine this is partly why it went rather uncontested, but I feel that some of its justifications are misguided.

Bizarrely, it cites “Inference” as one of the advantages of taking generic inputs:

    (under Advantages)
    Inference. Since the type parameters to generic functions can usually be inferred, generic functions can help cut down on verbosity in code where explicit conversions or other method calls would usually be necessary.

This is, as far as I can tell, 100% wrong! Type inference works backwards, using information from later function calls to constrain unknown types from earlier calls. As I see it, generic input arguments can only harm inference. And not just that; they also disable helpful coercions!
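A minimal illustration of that interaction, using hypothetical functions: a concrete parameter type feeds information backwards into collect, while a generic parameter leaves collect with nothing to infer from.

```rust
fn takes_vec(v: Vec<i32>) -> usize { v.len() }
fn takes_any<I: IntoIterator<Item = i32>>(it: I) -> usize { it.into_iter().count() }

fn main() {
    // collect's output type is inferred from the concrete parameter:
    let n = takes_vec((0..3).map(|x| x * 2).collect());
    assert_eq!(n, 3);

    // takes_any((0..3).map(|x| x * 2).collect());
    // ^ error[E0282]: type annotations needed — the generic input gives
    //   `collect` no target type to work backwards from.

    // The generic input is only fine when no generic output is involved:
    assert_eq!(takes_any(0..3), 3);
}
```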

But there’s also another side to this. Too much emphasis on generic outputs can suck, too, especially when the output type is something like T: Trait (as opposed to e.g. Vec<T> where T: Trait which will at least allow you to call some methods without knowing T).

Lately I’ve been seeing a trend where people no longer put methods like as_xyz and into_xyz on their type; these are all written exclusively as Into and AsRef impls, which can sometimes be astoundingly diverse. frunk's HList does this for its conversions between HLists and tuples. To me, this is no substitute for a proper into_tuple method, because the output type here is something you certainly never want to have to write out by hand (and even if you wanted to, you can’t use a turbofish on into!).
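To illustrate with a hypothetical pair type (not frunk's actual API): when a conversion exists only as a From/Into impl, the caller must name the output type somewhere, and into itself cannot take a turbofish.

```rust
struct Pair(u8, u16);

impl From<Pair> for (u8, u16) {
    fn from(p: Pair) -> Self {
        (p.0, p.1)
    }
}

impl Pair {
    // A dedicated method fixes the output type, so callers never name it:
    fn into_tuple(self) -> (u8, u16) {
        (self.0, self.1)
    }
}

fn main() {
    let t: (u8, u16) = Pair(1, 2).into(); // annotation required
    // let t = Pair(1, 2).into();              // error: type annotations needed
    // let t = Pair(1, 2).into::<(u8, u16)>(); // error: `into` takes no type parameters
    assert_eq!(t, (1, 2));

    // The named method needs nothing:
    assert_eq!(Pair(3, 4).into_tuple(), (3, 4));
}
```

For a type like an HList, where the target tuple type is long and mechanical to spell out, the annotation burden is much worse than in this toy example.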

How do others feel about this? What guidelines do you follow for when to use generic inputs versus generic outputs?

Edit: Okay, I might be conflating two different issues here, since I’m talking about the difference between generic inputs and outputs, yet I consider From and Into to be more or less the same thing.

My primary concern is with the dichotomy between:

  • having the caller of a function use generic functions to match an input type, versus
  • having the function accept a variety of inputs


One minor note: We actually specialize from_iter when given a vec::IntoIter to just reconstitute the vec from the iterator directly! We don’t special case extend when self is empty, but we probably should.
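That specialization has an observable effect (on current std; buffer reuse here is an optimization detail, not a documented guarantee):

```rust
fn main() {
    let v = vec![1, 2, 3];
    let ptr = v.as_ptr();
    let round_tripped: Vec<i32> = v.into_iter().collect();
    assert_eq!(round_tripped, vec![1, 2, 3]);
    // On current std, the original allocation is reconstituted rather
    // than copied element by element:
    assert_eq!(round_tripped.as_ptr(), ptr);
}
```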

The primary context where I find myself using generic inputs like these is with AsRef<Path>, where I think it is ergonomically important to do that to support string literals, and the type inference problems you’ve noted don’t really come up.
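A small sketch of why the AsRef<Path> pattern works out (canonical_name is a hypothetical function): every argument type below is concrete, so inference has nothing to solve, and the flexibility comes entirely from the bound.

```rust
use std::path::{Path, PathBuf};

// Hypothetical function showing the AsRef<Path> input pattern:
fn canonical_name<P: AsRef<Path>>(path: P) -> PathBuf {
    path.as_ref().to_path_buf()
}

fn main() {
    let expected = PathBuf::from("a/b.txt");
    assert_eq!(canonical_name("a/b.txt"), expected);               // string literal
    assert_eq!(canonical_name(String::from("a/b.txt")), expected); // owned String
    assert_eq!(canonical_name(Path::new("a/b.txt")), expected);    // &Path
}
```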


In hindsight I probably shouldn’t have included that first example, since it doesn’t actually have any notable issues related to type inference and thus distracts attention away from my main point.


My own guideline would be to use generics only when doing so elevates the abstraction level (and thus yields some benefit, such as performance or flexibility) and/or provides better ergonomics, such as the example @sfackler mentioned. Otherwise, being generic for the sake of being generic is just unnecessary noise and cognitive overhead; this is true irrespective of whether a language has sophisticated type inference or not.

In the examples taking a Vec rather than something generic, I’d argue that being able to call the function without materializing a Vec is a benefit whenever calling it with a Vec isn’t the only expected scenario. In a library context, where you don’t know all the possible ways callers will use the API, it’s even more important not to force their hand (such as requiring them to materialize a particular collection type when that isn’t actually material to the invoked API).
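As a sketch of that point (total is a hypothetical API): an IntoIterator bound lets a caller holding only an iterator avoid allocating a Vec, while callers who already have a Vec lose nothing.

```rust
// Hypothetical API: generic input so callers need not materialize a Vec.
fn total<I: IntoIterator<Item = u64>>(values: I) -> u64 {
    values.into_iter().sum()
}

fn main() {
    assert_eq!(total(vec![1, 2, 3]), 6); // a Vec still works
    assert_eq!(total(0..=4), 10);        // a range: no Vec ever allocated
}
```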