Armin Ronacher gave a talk recently in which he described a technique for speeding up compile times of generic functions by moving the function boundary of the generic parts away from the concrete business logic of a function.
His discussion of this technique can be viewed at the 21:40 mark of the recording of the talk.
To summarize briefly (any errors in my explanation are mine, not Armin's): you have some function that you want to write so that it can accept anything that implements a particular trait:
fn very_big_function<S: ToString>(some_data: S) {
let s = some_data.to_string();
// hundreds of lines of business logic
}
In your application, you call very_big_function on a hundred different concrete types that implement ToString. For every concrete type the function is called with, the compiler will potentially generate a complete copy of the function, with only the type-specific parts changed. Since very_big_function is very big, this generates a lot of machine code. Generating all that code takes time, and it also results in a larger compiled binary.
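To make the cost concrete, here is a small hypothetical sketch (the function name describe and the call sites are mine, not from the talk): each distinct concrete type at a call site asks the compiler to monomorphize, i.e. stamp out, a separate full copy of the generic function.

```rust
// Hypothetical sketch: each distinct concrete type used at a call site
// causes the compiler to monomorphize a separate copy of `describe`.
fn describe<S: ToString>(some_data: S) -> String {
    // stand-in for hundreds of lines of business logic
    format!("value: {}", some_data.to_string())
}

fn call_sites() -> (String, String, String) {
    // Three concrete types -> potentially three full copies of the body
    // (describe::<i32>, describe::<&str>, describe::<f64>) in the binary.
    (describe(42), describe("hello"), describe(2.5))
}
```

With a hundred concrete types instead of three, that becomes up to a hundred copies of the entire body.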
The technique he describes involves pushing the business logic of very_big_function down into a function that takes only a single concrete type, while pushing the generic part up into a wrapper whose implementation is kept as minimal as possible.
// generic function only calls to_string, and the business-logic function
fn generic_part<S: ToString>(some_data: S) {
very_big_function(some_data.to_string())
}
// business logic function now only takes strings
fn very_big_function(s: String) {
// hundreds of lines of business logic
}
Now there will be only one copy of very_big_function, and each of the functions generated for your hundred concrete types is limited to converting its concrete type to a String and calling very_big_function.
. Playing with this locally, I can confirm that the resulting binaries are smaller when the concrete and generic parts are separated like this.
What is the name of this technique?