Trying to compile 122 MB worth of generated code

In my attempts to create a WebAssembly to Rust transpiler/compiler called wasm2rs, I've hit two problems: the Linux OOM killer and rustc's long compilation times. To test that my project works, I've attempted to translate a wasm32-wasip1 build of the Python 3.12 interpreter (from VMware Labs' webassembly-language-runtimes repository) into Rust code, resulting in a monster that repeatedly crashes rust-analyzer.

I will include some of my initial struggles here in case someone else encounters a similar problem.

Initial Problems

Macros Eat Memory

My initial design for wasm2rs was a CLI tool that generated a Rust file containing a macro_rules! definition wrapping the generated code, with the intention of letting the user of the macro decide where runtime functions (e.g. math calculations, allocating memory, and providing WebAssembly imports) are located. Instantiating a macro containing a million lines of code quickly led to the Linux OOM killer stopping compilation once it exceeded the 5 GB of available RAM (on WSL2 running on my somewhat old Windows 10 PC).

Using RUSTC_LOG=info allowed me to determine that macros were the problem. I circumvented it by moving the Rust code corresponding to WebAssembly functions out of the macro and into a separate file, which is pulled in with the ::core::include! macro. So in short: avoid mixing large volumes of auto-generated code and macro_rules!.
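In case it helps, here is a minimal sketch of the resulting split, with illustrative names (generated_functions.rs stands in for the real generated file; this is not wasm2rs's actual output):

// Only the small, user-configurable glue stays inside the macro; the bulk
// of the generated code becomes an ordinary source file that is pulled in
// as plain items instead of being re-expanded at every instantiation.
macro_rules! wasm_module {
    () => {
        ::core::include!("generated_functions.rs");
    };
}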

Large Byte Strings

To model WebAssembly data segments, I emitted Rust constants containing byte strings, for example:

const _DATA_11: &'static [u8] = b"\x10\x00\x00\x00\x80\x90H\x00\n";

Unfortunately, when there is around 15 MB worth of byte strings, the Rust compiler doesn't like it. I fixed this problem by moving the contents of these excessively large byte strings into separate files and using the ::core::include_bytes! macro.
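The fix looks roughly like this, with data_11.bin standing in for a file holding the raw segment contents:

// include_bytes! yields a &'static [u8; N], which coerces to &'static [u8],
// so the compiler never has to parse a multi-megabyte byte-string literal.
const _DATA_11: &'static [u8] = ::core::include_bytes!("data_11.bin");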

Long Compile Times

Once I had increased the WSL2 RAM limit to 9 GB and gotten the Rust compiler past the initial parts of its type checking phase, I faced a new problem: my patience. According to the logs generated by rustc, it got past type checking (yay!), but now my screen was filling with messages all of one type: rustc_trait_selection::traits::normalize::normalize_with_depth_to. I gave up on waiting for it to compile after an hour. To be fair, this was partly because I invoked Cargo with --jobs 1 just to keep RAM usage down.

Bad Design Decisions?

The above might be the result of the sheer number of calls the generated code makes to generic helper functions I wrote, like wasm2rs_rt_math::i32_trunc_f32_s or wasm2rs_rt_memory::i32_load, which take quite a number of type parameters and could therefore slow down compilation.
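For illustration, here is a hypothetical, simplified shape for one of those helpers (the real wasm2rs signatures differ):

// Every call site makes rustc select and normalize impls for M and TR,
// and the generated module contains an enormous number of such calls.
pub trait Memory {
    /// Reads bytes at `addr`, returning false on an out-of-bounds access.
    fn load_bytes(&self, addr: u32, out: &mut [u8]) -> bool;
}

pub trait Trap {
    type Repr;
    fn trap(&self, message: &'static str) -> Self::Repr;
}

pub fn i32_load<M: Memory, TR: Trap>(mem: &M, addr: u32, trap: &TR) -> Result<i32, TR::Repr> {
    let mut bytes = [0u8; 4];
    if mem.load_bytes(addr, &mut bytes) {
        Ok(i32::from_le_bytes(bytes))
    } else {
        Err(trap.trap("out-of-bounds memory access"))
    }
}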

Aside from that, the other problem I see is a single function _elem_0 whose source is 495 KB long and which returns a single [FuncRef; 5902], a 94,432-byte value that would surely cause problems at runtime, and also might slow down code generation.
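A schematic stand-in for what that function does (the 16-byte FuncRef size is implied by 94,432 / 5,902; everything else here is illustrative, not the real wasm2rs types):

// Returning a ~94 KB array by value means the whole table is built on the
// stack and copied to the caller; in the generated code each of the 5902
// elements is also a distinct expression for rustc to type-check.
#[derive(Clone, Copy)]
struct FuncRef {
    invoke: Option<fn()>, // stand-in fields chosen to total 16 bytes
    data: usize,
}

fn _elem_0() -> [FuncRef; 5902] {
    [FuncRef { invoke: None, data: 0 }; 5902]
}

fn main() {
    let table = _elem_0();
    println!("{} bytes returned by value", ::core::mem::size_of_val(&table));
}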

Other Things I've Tried

  • codegen-units set to both 8192 and 1: the former was to try and speed up compile times, but for now I'm sticking with 1. I do not know what effect this has on memory usage or compile times (the combined profile is sketched after this list).
  • debug = "none": I don't think this affects compile time or memory usage too much, but it does mean my target folder doesn't blow up.
  • lto = "off": I do not believe that this currently affects me, because I still have not gotten past rustc's trait_selection step or whatever it's called.
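Put together, those settings correspond to a Cargo profile along these lines (a sketch, assuming the release profile):

[profile.release]
codegen-units = 1   # tried 8192 to parallelize the backend; sticking with 1 for now
debug = "none"      # keeps the target directory from ballooning
lto = "off"         # likely moot until compilation actually reaches LLVM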

A Possible Workaround?

Right now, I am considering removing (almost) all generic code and using macro_rules! instead of type parameters to generate any required runtime support functions. This might get me past trait_selection, only for the Linux OOM killer to strike when LLVM or the linker tries to consume my generated code.
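A hypothetical sketch of that plan (these are not wasm2rs's actual macros): the runtime crate would expose macros that the user instantiates once with concrete types, so every generated call site hits a plain, non-generic function:

macro_rules! define_rt_math {
    ($trap:ty) => {
        // Monomorphic from the start: no type parameters left for
        // rustc's trait selection to normalize at each call site.
        fn i32_trunc_f32_s(value: f32) -> Result<i32, $trap> {
            // WebAssembly traps on NaN and on values outside i32's range.
            if value.is_nan() || value >= 2147483648.0 || value < -2147483648.0 {
                return Err(<$trap>::from("integer overflow in i32.trunc_f32_s"));
            }
            Ok(value as i32)
        }
    };
}

// User side: one concrete instantiation per translated module.
struct MyTrap(&'static str);
impl From<&'static str> for MyTrap {
    fn from(message: &'static str) -> Self { MyTrap(message) }
}
define_rt_math!(MyTrap);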

If anyone has any idea how to somehow get rustc to accept this, it would be appreciated.