Two months ago, I decided to see if a Rust version of our bioinformatics library could be as nice for users as the Python version. The answer is “yes”, but it wasn’t easy. This free article lays out what I learned:
I’m most proud of getting Python-style fancy indexing working in Rust. This means users can specify which data to download with an index number, any array-like collection of numbers, any range-like thing, or via Booleans. To make example code simpler, the library also includes a function to download sample files to a cache directory controlled by an SHA hash.
I'd love to discuss any part of the project, other folks' experiences trying to make user-friendly APIs, or what rules you'd suggest.
Rule 2: Accept all kinds of strings, paths, vectors, arrays, and iterables.
The advice for implementing this rule should mention the pattern of having a generic function call a non-generic function containing most of the implementation, to minimize the code size and compilation time added by the generic part.
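A minimal sketch of that pattern (function names here are illustrative, not from the library): the thin generic wrapper gets monomorphized once per caller type, but the real work lives in a single non-generic inner function that is compiled only once.

```rust
use std::path::Path;

// Thin generic shim: monomorphized for every path-like type callers use,
// but it only performs the conversion before delegating.
pub fn read_config<P: AsRef<Path>>(path: P) -> std::io::Result<String> {
    read_config_inner(path.as_ref())
}

// Non-generic inner function: compiled exactly once, regardless of how
// many distinct types `read_config` is called with.
fn read_config_inner(path: &Path) -> std::io::Result<String> {
    std::fs::read_to_string(path)
}
```

Callers can now pass `&str`, `String`, `PathBuf`, and so on, while the cost of monomorphization is limited to the one-line wrapper.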
Finally, if your enum is used in a regular function, document that your users must call .into() when calling the function.
Functions can accept impl Into<YourMostGeneralType> rather than requiring the caller to do it — just as in your rule 2.
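A small sketch of that idea (the enum and function names are hypothetical): pairing an enum with `From` impls and an `impl Into<...>` parameter lets callers pass the natural type directly, with the conversion happening inside the function.

```rust
// Hypothetical index type: callers can pass either a single number or a
// vector of numbers without writing `.into()` themselves.
#[derive(Debug, PartialEq)]
enum Index {
    One(usize),
    Many(Vec<usize>),
}

impl From<usize> for Index {
    fn from(i: usize) -> Self {
        Index::One(i)
    }
}

impl From<Vec<usize>> for Index {
    fn from(v: Vec<usize>) -> Self {
        Index::Many(v)
    }
}

// The function accepts anything convertible to `Index`; the `.into()`
// call moves inside the function instead of burdening every caller.
fn select(index: impl Into<Index>) -> Index {
    index.into()
}
```

Now `select(3usize)` and `select(vec![1, 2])` both work without the caller ever mentioning `Index`.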
Use builders, because you can’t use keyword parameters.
Builders aren't really necessary for imitating keyword arguments. An easier solution is to simply define a config struct with named fields and implement Default on it. Then consumers of the code can use FRU syntax to override only some of the fields. (This also has the added benefit of materializing the config so that you can e.g. serialize/deserialize it, should you need to support that.)
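A quick sketch of the config-struct approach (field names are illustrative): `Default` supplies the baseline, and functional record update (FRU) syntax overrides only the fields the caller cares about.

```rust
// Illustrative config struct; deriving or hand-writing Default gives
// every field a sensible baseline value.
#[derive(Debug, Clone, PartialEq)]
struct DownloadOptions {
    retries: u32,
    timeout_secs: u64,
    verify_hash: bool,
}

impl Default for DownloadOptions {
    fn default() -> Self {
        Self { retries: 3, timeout_secs: 30, verify_hash: true }
    }
}

// FRU syntax (`..Default::default()`) overrides only `retries`;
// the remaining fields keep their defaults.
fn opts_with_retries(retries: u32) -> DownloadOptions {
    DownloadOptions { retries, ..Default::default() }
}
```

This reads much like a keyword-argument call site in Python, and the struct itself can derive `Serialize`/`Deserialize` later if needed.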
I have been stumbling upon that advice elsewhere already. I wonder: isn't this something the compiler should do, rather than every programmer of generic functions? Is the compiler really struggling so much with big generic functions that we must help it (and make our code less readable)? Are there plans to improve that situation, or is this problem not solvable in an automated fashion (for reasons I may not be aware of)?
I think I remember that even std uses these splits into generic and non-generic parts, so it seems to be really important.
Presumably such an optimization would not actually introduce dynamic dispatch. What I imagine it would have to do is find a “tail” of the function that can be cut off and compiled separately, where the “tail” is whatever code contains no further uses of the generic type. So, for an fn foo(x: impl Into<String>), it would automatically find the point just after x.into(), and for an iterator it would find the point after the end of the for loop or whatever.
Then the place where a heuristic is needed for optimization would be deciding whether the “tail” should be compiled inline (like it always is now) or like a separate function (which adds the costs of a function call boundary but reduces the code size).
I'm not sure how such generic functions are compiled when dealing with different modules or crates. I would assume that for each type, the function must be recompiled. If the function gets recompiled too often (e.g. two times or three times?), the compiler could switch to a different strategy. But I don't know enough about the compilation process for Rust to really understand how generics are compiled.
I think function inlining is also done elsewhere and is something very common (and it also uses heuristics, I think?). I feel like Rust needs the opposite here (as you explained): avoiding inlining the non-generic code and instead creating a separate function for the tail. Not sure what that could be called. Maybe "generic function tail extraction".
The reason why I brought this up: I have written some libraries that are deliberately generic to avoid having to allocate Vecs, for example. Consider sandkiste::Function::call, which expects a variable number of arguments (as it calls a function defined in a scripting language). You can use it like this: func.call(some_vec), but also like this: func.call().
I do this by accepting a generic type A: IntoIterator<Item = T> (where <A as IntoIterator>::IntoIter: ExactSizeIterator) as argument list instead of a Vec<T>.
This makes using the library much nicer. But apparently it will bloat up code size. If I do this "tail extraction" manually, then my library code will be less readable. It's already a lot of noise to write:
```
fn foo<A>(/* … */ args: A)
where
    A: IntoIterator<Item = SomeType>,
    <A as IntoIterator>::IntoIter: ExactSizeIterator,
```

instead of

```
fn foo(args: Vec<SomeType>)
```
I feel like in a dilemma (or trilemma) here. Do I bloat up the source code of my library? (And if I do, should I use the pattern @kpreid suggested, which will make my library code even more verbose?) Or do I just keep things simple (which would require using vec! instead of  when calling my function, which is runtime overhead, I believe)?
I feel like something is missing on the compiler-side to solve this trilemma. Or I just have to accept to write verbose source code for library code. Or I accept bloating the binary size.
Note that the former technically has the advantage of also allowing other types that implement IntoIterator, while the latter only allows Vec.
What I do when using this pattern is extend it a little bit. The extension is that I write the non-generic version (aka the tail) in a separate fn directly below the generic version.
In addition, it's named the same as the generic version except it starts with an underscore _.
What this allows is getting some of that lost clarity back, because all you have to do to see the intent (and how various generic args are used) is look at the non-generic version.
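A minimal sketch of that naming convention (the function itself is a made-up example): the generic shim sits directly above its `_`-prefixed non-generic twin, which holds all of the actual logic.

```rust
// Generic shim: only converts the argument, then delegates.
fn greet<S: Into<String>>(name: S) -> String {
    _greet(name.into())
}

// Non-generic "tail", named after the shim with a leading underscore.
// Reading this function alone shows the full intent of `greet`.
fn _greet(name: String) -> String {
    format!("Hello, {name}!")
}
```

Anyone scanning the file can skip the generic plumbing and read `_greet` to see what the function actually does.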