I wasn't able to compile my app to inspect the internals of Rust compilation times, due to the error below (I had already added cargo-features = ["codegen-backend"] at the top of Cargo.toml):
```
$ RUSTFLAGS="-Zself-profile -Zcodegen-backend=cranelift" CARGO_PROFILE_DEV_CODEGEN_BACKEND=cranelift cargo +nightly rustc
error: config profile `dev` is not valid (defined in `environment variable `CARGO_PROFILE_DEV``)

Caused by:
  feature `codegen-backend` is required

  The package requires the Cargo feature called `codegen-backend`, but that feature is not stabilized in this version of Cargo (1.77.0-nightly (363a2d113 2023-12-22)).
  Consider adding `cargo-features = ["codegen-backend"]` to the top of Cargo.toml (above the [package] table) to tell Cargo you are opting in to use this unstable feature.
  See https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#codegen-backend for more information about the status of this feature.
```
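For reference, this is roughly how the manifest has to look; the opt-in line must come before the `[package]` table (package name and versions here are placeholders). If the project is a workspace, my understanding is the `cargo-features` line may also need to be in the workspace root's Cargo.toml, since the `CARGO_PROFILE_DEV_CODEGEN_BACKEND` environment variable defines the profile at the top level:

```toml
# Must be the very first thing in Cargo.toml, above [package]
cargo-features = ["codegen-backend"]

[package]
name = "app"        # placeholder
version = "0.1.0"
edition = "2021"

# Equivalent to setting CARGO_PROFILE_DEV_CODEGEN_BACKEND=cranelift
[profile.dev]
codegen-backend = "cranelift"
```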
I don't know if this solves your problem (I never had slow build.rs issues), throwing some suggestions since some of my posts got linked. My overall strategy for minimizing Rust compile time is:
Are all my CPU cores being utilized?
Yes => Need to buy more cores.
No => We can speed up compile time via increasing parallelism.
How do I increase parallelism?
Run cargo build --timings, stare at the graph, and look at points where CPU utilization is low. This happens because there is a small set of crates {foo, bar, blah} compiling, while everything else is WAITING because it depends on {foo, bar, blah}.
This then becomes a game of "can I break foo, bar, blah into smaller crates?" Can I make things that depend on {foo, bar, blah} not depend on them, or depend on something smaller? This ends up being a game of moving structs / enums / traits around so that your crate dependency graph is as flat as possible.
Shallow DAG = lots of things can run in parallel
Deep DAG = lots of stuff running in serial
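As an illustration of the flattening (all crate names hypothetical): instead of every crate funneling through one big `core` crate, split `core` into small leaf crates so that the members of the workspace can compile concurrently:

```toml
# Before (deep):    app -> ui -> core -> util   — mostly serial.
# After (shallow):  `core` split into `types` and `traits`, which
#                   `logic` and `ui` depend on directly, so rustc can
#                   build logic and ui in parallel once the small
#                   leaves are done.
[workspace]
members = ["types", "traits", "logic", "ui", "app"]
```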
In the end, I achieved non-incremental (recompile every crate whenever any dependency changed) build times of around 5k-10k LOC / second. If you significantly beat this, I'm interested in learning how (and how many cores you are using).
Also, in my experience, "dumb" macro_rules! and procedural macros (that I wrote) did not hurt me as much; but some advanced #[derive(...)] from popular packages (not naming names w/o benchmarks) were a bit slow.
In my experience, generics can also be really expensive. "Zero-cost abstractions" often have compile-time costs, so lots of fn blah<T: Trait>(...) were replaced with fn blah(x: &dyn Trait) when possible.
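A minimal sketch of that swap (the function and the use of std::fmt::Display are purely illustrative, not from the original codebase):

```rust
// Generic version: monomorphized, so rustc compiles a fresh copy for
// every concrete type this is called with. Fast at runtime, but the
// copies add up in compile time across a large codebase.
fn describe_generic<S: std::fmt::Display>(value: &S) -> String {
    format!("value = {value}")
}

// Trait-object version: compiled exactly once; callers pay a vtable
// indirection at runtime instead of paying repeatedly at compile time.
fn describe_dyn(value: &dyn std::fmt::Display) -> String {
    format!("value = {value}")
}

fn main() {
    // Both produce the same result; only the dispatch strategy differs.
    assert_eq!(describe_generic(&42), describe_dyn(&42));
}
```

The trade-off only pays off when the generic function has many instantiations; for a function called with one or two types, monomorphization costs little.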
Most of these links are not very useful, because they mostly apply to compiling the entire project with various additional optimizations.
Since it's a check build, all the time is spent in the frontend. Running with RUSTFLAGS="-Z time-passes" might provide some additional information.
If slint generates code from scratch, I wonder whether incremental compilation even makes sense, or whether most of it gets invalidated on each build. Try comparing incremental and non-incremental build times.
For the build script, the only thing done is recompiling the slint UI using the slint_build crate. The build script should only be rerun when the UI definition actually changes, since cargo:rerun-if-changed is used by slint_build.
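Roughly how that mechanism looks in a build script (the .slint path is hypothetical; slint_build emits the directive itself, this only sketches what Cargo sees):

```rust
// Sketch of a build.rs. Printing a `cargo:rerun-if-changed=<path>`
// line on stdout tells Cargo to rerun the build script only when that
// file changes, instead of on every build.
fn rerun_directive(path: &str) -> String {
    format!("cargo:rerun-if-changed={path}")
}

fn main() {
    // In a real project this is where the UI would be compiled, e.g.
    // via slint_build; here we only show the directive.
    println!("{}", rerun_directive("ui/app.slint"));
}
```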