I noticed that after compiling, each of my projects takes up a huge amount of storage (1 GB and above), and I'm just curious: is there a way to make the project size smaller?
Something like sharing the most-used dependencies between projects?
I generally run cargo clean on a project after I'm done working on it. That way only one project (or a few commonly used ones) takes up the extra disk space of compilation artifacts at a time.
Disabling debug info helps a lot. With build overrides you can disable it only for dependencies while keeping reduced debug info for your own code, something like:

```toml
[profile.dev]
debug = 1            # limited debug info for your own code

[profile.dev.package."*"]
debug = false        # no debug info for dependencies
```
You can apply the same to the release profile, and also set incremental = false so Cargo doesn't keep incremental compilation caches around.
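A sketch of what that could look like in Cargo.toml — note that incremental compilation is on by default only for the dev profile, so disabling it there saves the most space:

```toml
[profile.dev]
incremental = false    # dev builds cache incrementally by default; this drops that cache

[profile.release]
debug = false          # already the default for release, but explicit here
incremental = false
```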
You can also run cargo tree -d (short for --duplicates) to see if you have duplicate dependencies.
Don't forget to run cargo clean on the target dir after you make these changes, because the previously bloated files stay around anyway.
You might also try sccache for sharing dependencies between projects. Apparently, it's made for that use case.
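Enabling sccache for all Cargo builds is just a matter of pointing Cargo at it as a wrapper; a minimal sketch, assuming sccache is installed and on your PATH:

```toml
# ~/.cargo/config.toml
[build]
rustc-wrapper = "sccache"
```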
Sccache trades reduced compilation time for increased disk usage. It copies the compiled rlibs into every project-specific target dir when cargo runs on the respective project, and in addition it keeps a copy in its own cache. AFAIK it doesn't use hardlinks or anything like that to save space.
That is accurate; sccache works essentially by wrapping rustc to check the shared compilation cache first. But if it finds something there, rustc/cargo uses it normally, i.e. writes the compilation artifact(s) into the target directory.
If you want to share target artifacts between projects, your best bet is going to be a workspace, as within a workspace cargo can actually unify dependency versions so you aren't compiling one version of a dependency in one project and a different version in another. If you can't / don't want to use a monolithic workspace, setting the CARGO_TARGET_DIR environment variable (e.g. $env:CARGO_TARGET_DIR in PowerShell) can redirect each project's ./target to some shared directory. This won't help projects select the same dependency configuration, but when they do, there'll only be one copy of the dependency compiled.
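Instead of the environment variable, the same redirection can be configured persistently in a user-level Cargo config; a sketch, with a placeholder path:

```toml
# ~/.cargo/config.toml
[build]
target-dir = "/home/you/.cache/shared-cargo-target"
```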
Then with a cargo clean every now and then (e.g. whenever you run rustup update, since the old artifacts become unusable anyway), the target directory size won't grow without bound.
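The workspace approach mentioned above can be as simple as one top-level manifest listing the member projects (the names here are placeholders):

```toml
# Cargo.toml at the root of a directory containing both projects
[workspace]
members = ["project-a", "project-b"]
```

All members then share a single target/ directory at the workspace root, and cargo resolves one dependency graph for all of them.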
Hm, sounds like room for improvement. In particular, I'm thinking that on file systems like btrfs and zfs you could deduplicate the data using reflinks (basically copy-on-write, and thus safer than hardlinks). I might take a stab at implementing that, since I do use sccache and btrfs.