How to build a statically linked rustc?

To speed up compilation, I want to build rustc optimized for my processor. To keep the process simple, I do this in a container based on the official Rust 1.64.0 Docker image. To actually speed up the compiler, I pass -march=native when building LLVM and set the environment variable RUSTFLAGS to -C target-cpu=native; my config.toml can be found below. After compiling and installing the stage 2 compiler with env DESTDIR="$(pwd -P)/install" python3 x.py install, I get a dynamically linked executable. The thread How do I build rustc with statically linked rustlib? from 2016 suggests that a stage 3 compiler will be statically linked. When I set install-stage = 3 in config.toml, however, I get this error:

$ git rev-parse HEAD
a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52 # tag 1.64.0
$ export RUSTFLAGS='-C target-cpu=native'
$ export DESTDIR="$(pwd -P)/install"
$ time python3 x.py install
[snip]
Copying stage1 rustc from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Assembling stage2 compiler (x86_64-unknown-linux-gnu)
Uplifting stage1 std (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Copying stage2 std from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Uplifting stage1 rustc (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Copying stage2 rustc from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Assembling stage3 compiler (x86_64-unknown-linux-gnu)
thread 'main' panicked at 'fs::read(stamp) failed with No such file or directory (os error 2) ("/home/docker/rust/build/x86_64-unknown-linux-gnu/stage2-rustc/x86_64-unknown-linux-gnu/release/.librustc.stamp")', lib.rs:1395:24
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

How can I fix this problem, or is this a bug that needs to be reported?

By the way, this error message matches the one in espr-rs/rust-build#17, but it occurred there in a different context.

My config.toml:

profile = "user"
cflags = "-march=native"
cxxflags = "-march=native"
install-stage = 3
cargo = "/usr/local/cargo/bin/cargo"
rustc = "/usr/local/cargo/bin/rustc"
rustfmt = "/usr/local/cargo/bin/rustfmt"
cargo-native-static = true
low-priority = true
channel = "stable"

Building a stage 3 compiler is currently broken, but even if it were fixed, it would produce a compiler identical to the stage 2 compiler.
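
If you just need a working optimized compiler in the meantime, a minimal workaround is to install the stage 2 compiler instead. A sketch, assuming the config.toml shown above:

$ sed -i 's/^install-stage = 3$/install-stage = 2/' config.toml
$ env DESTDIR="$(pwd -P)/install" python3 x.py install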

Rustc can be linked statically against rustc_driver (instead of the shared librustc_driver.so) by removing these two lines:

Do note that this will result in more disk usage (the entire compiler has to be duplicated between rustc, rustdoc, clippy, rustfmt, miri and other tools). In addition, it doesn't allow loading external codegen backends (an unstable feature). Getting rustc fully statically linked is more involved, and also much less useful: proc macros are dlopen'ed, which is only possible if libc is dynamically linked.
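
To check whether a rebuilt rustc still pulls in librustc_driver.so dynamically, you can inspect the binary with ldd; the path below assumes the default build directory layout:

$ ldd build/x86_64-unknown-linux-gnu/stage2/bin/rustc
# look for librustc_driver-*.so among the listed dependencies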


Note that official builds also apply PGO, so you'll probably need to do that too if you want to see actual speedups.
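
For reference, a rough sketch of what a PGO build can look like with bootstrap's profile flags (untested outline; the /tmp paths are placeholders and the choice of training workloads is up to you):

$ python3 x.py build --rust-profile-generate=/tmp/rustc-pgo   # instrumented compiler
# compile some representative crates with the instrumented rustc to gather profiles
$ llvm-profdata merge -o /tmp/rustc-pgo.profdata /tmp/rustc-pgo
$ python3 x.py build --rust-profile-use=/tmp/rustc-pgo.profdata   # final optimized build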


Thank you for this hint; it allowed me to finally find out which of the 51 Dockerfiles is used for creating the official binaries. The file src/ci/docker/host-x86_64/dist-x86_64-linux/Dockerfile is the only Dockerfile containing the term pgo, and this will help me with all kinds of troubleshooting (note the link refers to the commit tagged 1.64.0).
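
For anyone retracing this, a search along these lines should turn it up (run from the rust checkout):

$ grep -rl --include=Dockerfile pgo src/ci/docker/
src/ci/docker/host-x86_64/dist-x86_64-linux/Dockerfile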

Thank you, this worked. My rustc executable is about 165MB, though, compared to the 16MB one in the official Docker image. Apparently I am missing some critical options compared to the official release binary.

For those interested: my rust directory, including the build and installation directories, was about 20GB in the end.

That size is about right. librustc_driver is 122MB for me, libstd 9.6MB, rustc 2.7MB, and libLLVM 100MB. Adding these together gives 234.3MB. After removing unused functions (which static linking allows, unlike dynamic linking), about 165MB is not unexpected.
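
If you want to compare these numbers on your own build, the component sizes can be read straight out of the stage2 directory (exact file names vary with the version hash, so the globs below are an assumption):

$ cd build/x86_64-unknown-linux-gnu/stage2
$ ls -lh bin/rustc lib/librustc_driver-*.so lib/libstd-*.so lib/libLLVM*.so*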

