I have some generated Rust code that makes heavy use of enums, traits, bitflags, match, etc., and it is still in development, so many more variants will be added to the enums and match statements. Recently my additions caused rustc to run out of memory on my development laptop (Linux, with only 16 GiB), and I've resorted to using my desktop, which has plenty of capacity for now. What I can observe there is that rustc spikes to 15 GiB once or twice in short succession while compiling just my package alone (after all dependencies have finished building). I'm trying to figure out why that is; ideally I can adapt my code to keep it from getting quadratically worse as the enums grow, or reduce it to a smaller test case for reporting an issue. (Here's the current code.)
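To give a rough idea of the shape without the whole generated file, the pattern looks roughly like this (the names and variants below are simplified stand-ins; the real enums have far more variants, and each new variant grows several match statements):

```rust
// Simplified stand-in for the generated code.
pub enum Opcode {
    Add,
    Sub,
    Mul,
    // ...the real enums have far more variants.
}

pub trait Describe {
    fn mnemonic(&self) -> &'static str;
}

impl Describe for Opcode {
    fn mnemonic(&self) -> &'static str {
        // Exhaustive matches like this one appear in many trait impls,
        // so every new variant touches all of them.
        match self {
            Opcode::Add => "add",
            Opcode::Sub => "sub",
            Opcode::Mul => "mul",
        }
    }
}
```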
I've tried some of the debugging advice I've found around the web, such as https://users.rust-lang.org/t/rustc-memory-usage/55513: `RUSTC_LOG=info`, `/usr/bin/time -v`, and `-Z time-passes` on nightly.
The RUSTC_LOG output around that time looks like this:
┐rustc_mir_transform::ctfe_limit::run_pass
┘
┐rustc_trait_selection::traits::project::normalize_with_depth_to depth=0, value=context::flags::_::InternalBitFlags
┘
┐rustc_trait_selection::traits::project::normalize_with_depth_to depth=0, value=<context::flags::ContextBits2 as bitflags::__private::PublicFlags>::Internal
├─┐rustc_trait_selection::traits::project::project obligation=Obligation(predicate=AliasTy { substs: [context::flags::ContextBits2], def_id: DefId(20:133 ~ bitflags[3ccf]::traits::PublicFlags::Internal) }, depth=0)
│ ├─┐rustc_trait_selection::traits::project::normalize_with_depth_to depth=1, value=<context::flags::ContextBits2 as bitflags::__private::PublicFlags>
│ ├─┘
├─┘
┘
┐rustc_mir_transform::ctfe_limit::run_pass
┘
┐rustc_mir_transform::ctfe_limit::run_pass
┘
┐rustc_mir_transform::ctfe_limit::run_pass
┘
┐rustc_mir_transform::ctfe_limit::run_pass
┘
┐rustc_mir_transform::ctfe_limit::run_pass
┘
┐rustc_mir_transform::ctfe_limit::run_pass
┘
and then it continues with more MIR passes.
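For context, the `ContextBits2` that shows up in those normalize spans is just an ordinary `bitflags!` type (bitflags 2.x, judging by the `__private::PublicFlags`/`InternalBitFlags` names); the flag names below are placeholders, and the real declaration has many more flags:

```rust
use bitflags::bitflags;

bitflags! {
    // Placeholder flags; the real type defines many more.
    pub struct ContextBits2: u32 {
        const FLAG_A = 1 << 0;
        const FLAG_B = 1 << 1;
        const FLAG_C = 1 << 2;
    }
}
```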
With `-Z time-passes`, the output pauses for a while during the memory spike, and then the next lines are:
time: 0.317; rss: 1867MB -> 1893MB ( +26MB) monomorphization_collector_graph_walk
time: 0.018; rss: 1893MB -> 1893MB ( +0MB) partition_and_assert_distinct_symbols
time: 25.103; rss: 2498MB -> 1893MB ( -605MB) generate_crate_metadata
time: 3.186; rss: 1899MB -> 2769MB ( +869MB) codegen_to_LLVM_IR
time: 3.199; rss: 1893MB -> 2769MB ( +876MB) codegen_crate
And all I get from `/usr/bin/time -v` is that the maximum resident set size reached 12-15 GiB.
It's hard to tell for sure what's going on from these. Are there better ways to profile the compiler's memory usage that can pin down the cause, or am I overlooking something in one of these tools?