I see that your workspace has two packages (hence at least two crates), `vibe_core` and `vibe`. Which one are you editing in this scenario?
It's likely that you could achieve some gains by splitting one or both of these crates into multiple crates, which do not depend on each other if possible (to minimize the number of recompiled dependents for any given change). `rustc`'s incremental compilation can save a lot of work when small changes are made, but a separate crate that hasn't changed at all can be skipped entirely.
(But splitting crates also means more work overall, for the compiler and for you, because both have to deal with the boundaries between crates.)
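As a rough sketch of what such a split could look like, here is a hypothetical workspace manifest; the `vibe_types` crate and the way the code is divided are made up for illustration, and the right split depends on what actually lives in each module:

```toml
# Cargo.toml at the workspace root (hypothetical layout).
[workspace]
members = [
    "vibe_types", # small, rarely-changing shared types and traits
    "vibe_core",  # the heavier logic; depends only on vibe_types
    "vibe",       # the application; depends on vibe_core (and vibe_types)
]
```

The win with a layout like that is that a crate which only depends on `vibe_types` does not need to be rebuilt when the internals of `vibe_core` change.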
It can also help to make code non-generic when possible. Compiling generic code has to be deferred to its usage sites, where the generic parameters become concrete types, so it may end up being (re)compiled in a crate further downstream than the one it was defined in. An example of this I found in your repo is in `Downloader::download()`:
pub async fn download<F>(&mut self, url: &str, path: PathBuf, on_progress: F) -> Result<()>
where
    F: Fn(u64, u64) -> bool,
`on_progress` is probably not called often enough to benefit significantly from monomorphization and inlining, and the function is already borrowing several things, so you can replace it with a `dyn Fn`:
pub async fn download(
    &mut self,
    url: &str,
    path: PathBuf,
    on_progress: &dyn Fn(u64, u64) -> bool,
) -> Result<()> {
This non-generic function will be compiled to machine code once, as part of `vibe_core`, rather than once for each place you call it. That means it won't need recompiling when you change `vibe`, and it won't be (partially) compiled again for each call site. All of that is less work for the compiler and the linker.
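As a sketch of the caller's side after that change (the function name, URL, and progress handling below are placeholders, not code from your repo), an ordinary closure still works; it just gets borrowed and coerced to the trait object:

```rust
use std::path::PathBuf;

// Hypothetical caller, assuming `Downloader` and `Result` from vibe_core are in scope.
async fn fetch_model(downloader: &mut Downloader) -> Result<()> {
    downloader
        .download(
            "https://example.com/model.bin",
            PathBuf::from("model.bin"),
            // `&closure` coerces to `&dyn Fn(u64, u64) -> bool`,
            // so existing call sites only need to gain a `&`.
            &|downloaded: u64, total: u64| {
                eprintln!("{downloaded}/{total} bytes");
                true // assuming `true` means "keep downloading"
            },
        )
        .await
}
```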
Besides speculatively de-genericizing your code, you might want to look for functions that generate a lot of machine code, because those are the functions that might be slow to optimize and that definitely make more work for the linker. `cargo-bloat` can tell you what the biggest functions in your program are, and `cargo-show-asm` can dump the assembly (interleaved with Rust code if you wish) so you can see why specific functions are surprisingly big.
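For example (the function path given to `cargo asm` below is just a placeholder; install the tools with `cargo install cargo-bloat` and `cargo install cargo-show-asm` first):

```sh
cargo bloat --release -n 20      # the 20 largest functions in the final binary
cargo bloat --release --crates   # how much code each crate contributes
cargo asm --rust "vibe_core::downloader::Downloader::download"  # assembly interleaved with Rust source
```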
Also, if you haven't already, try `cargo build --timings` to get more information on compilation time. It's most interesting if you have multiple library crates, but there's some useful information no matter what.