Complex build advice

Hey there folks! I am looking for advice on how to best structure a project with Cargo.

I have a system where things work, but they're not as good as I'd like them to be. After reading @matklad's post about Rust 2021 and the compiler's build system, I grew kinda annoyed with my setup. Why can't my project be better too? So, I've had some ideas, but I was interested in hearing if anyone else had thoughts. Here's a summary of my situation:

I am working on an embedded operating system. The end goal is to produce a couple of images to flash onto some devices. This means everything is cross-compiled, sometimes to two different targets, and custom linker scripts have to be passed for things to link correctly. In order to produce everything that goes on the image, I have to build a number of targets. Most are packages that end up as a binary. Some are library dependencies that are shared among the binary targets.

Right now, to make this work, I have a Cargo workspace with almost everything in it, and then an xtask that orchestrates the build. It still invokes cargo build at the end of the day, but with a number of flags, including some RUSTFLAGS to enable the linker scripts. This works, and is fine, but it doesn't handle incremental builds well at all, and so they're slow. Additionally, since it doesn't use a regular old cargo build to build, tooling like rust-analyzer doesn't work as well as it otherwise might.

Given that I need to do post-processing, I'll need an xtask to build the final image for sure. But it would be nice to just do "usual" development with cargo build, and to take advantage of incremental builds better.

How would you structure a project like this? Any experiences? I have been messing with various workspace options, but it seems like things often end up awkward.

EDIT: part of why I decided to post this is that I am not sure there are many good explanations of how to lay out bigger projects with Cargo. And the devil is always in the details. Maybe I should make up an example repo for this... originally I wanted to keep it high level because I thought that would be easier, but maybe the details do matter.


What prevents incremental builds from working?

Is it because you're changing RUSTFLAGS? If so, perhaps you could modify cargo itself to pass the flags you require in a more targeted way without invalidating the whole cache?


Yeah, RUSTFLAGS changing, as well as some other files that change per-image; I wanted to try and keep it high level rather than getting into weeds that are probably irrelevant. I think I can fix up some of those issues independently of the overall structure of the build.

Like, you mean patch cargo? A custom cargo build is probably too much work, and moves me further down a bespoke path, when I'm trying to get back to the usual approach.

This setup is a bit hard to follow without some more details. Can you maybe provide an example structure which demonstrates the problem?

Correct me if I'm wrong, but it sounds like you've run into the following friction points:

  1. You need more control over linking than what cargo provides
  2. A normal build script only runs before your crate is compiled, but you need to run custom code after the main crate is compiled and linked
  3. The way you build may be non-deterministic (e.g. RUSTFLAGS changing), leading to excessive cache invalidation and triggering frequent rebuilds
  4. Things get complicated because you are targeting (potentially multiple) non-host platforms
  5. Because you're running arbitrary code and changing which #[cfg]'s are enabled, the IDE has a hard time compiling the crate so it can do analysis and provide nice things like auto-complete and refactorings

How complex is your xtask step? If it's relatively simple, we could maybe develop a standardised way of doing this sort of post-processing that IDEs, non-cargo build tools, and a native cargo build can understand.

From what I've seen in non-Rust embedded projects, you'll often write your own custom code (e.g. in a Makefile) to make sure the compiled artefacts are in a form that your particular target can handle... I can't say whether that shows this is hard due to inherent complexity (the problem is hard) or accidental complexity (my tools are too complicated/unergonomic or I don't know there's a better way), though.


Regarding conveying your build to rust-analyzer: rust-analyzer's project_model crate, in addition to deriving the project layout from Cargo via cargo metadata, also supports a JSON-based format, rust-project.json. Using that, you may be able to capture the aspects of the build process which Cargo is not aware of.
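For a rough idea of its shape, a minimal rust-project.json might look something like this (the paths are made up, and the exact field set should be double-checked against rust-analyzer's documentation):

```json
{
  "sysroot_src": "/path/to/rust/library",
  "crates": [
    {
      "root_module": "kernel/src/lib.rs",
      "edition": "2018",
      "deps": []
    },
    {
      "root_module": "binary-v7/src/main.rs",
      "edition": "2018",
      "deps": [{ "crate": 0, "name": "kernel" }]
    }
  ]
}
```

Each entry in `deps` refers back into the `crates` array by index, so you describe the dependency graph explicitly instead of having cargo metadata infer it.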


Are incremental builds handled poorly because different flags are in use for each target? Maybe you could give each set of compiler flags its own target directory? I don't know for certain, but it might make incremental compilation work. It would also allow each target to compile in parallel.
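As a sketch of that idea (the flag values here are made up), each flavor's .cargo/config could pin both its target and its own target directory, so one flavor's rebuild doesn't invalidate the other's cache:

```toml
# hypothetical binary-v7/.cargo/config
[build]
target = "thumbv7em-none-eabi"
target-dir = "target-v7"   # separate incremental cache per flag set
rustflags = ["-C", "link-arg=-Tlink.x"]
```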

I think making the crates work with cargo build would depend on the details.


Yes, I meant patching Cargo. It's actually pretty easy, since it can be built and used as a standalone executable.

Control over the linker and build post-processing are common feature requests, so if you solved these problems for your situation, maybe this could become an official Cargo feature.


Thanks folks! All replies inline :slight_smile:

Yeah, that may be helpful :slight_smile: Here:

  • We have two end projects, one for thumbv7em-none-eabi (called binary-v7) and one for thumbv8m.main-none-eabihf (called binary-v8)

  • Both of these depend on a library, kernel.

  • There is also a binary package program.

Each of these two projects would need to build:

  • Their own binary-* crate
  • kernel
  • program for their architecture

And would need post-processing for a real "build."
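Concretely, the example layout looks roughly like this (tree and annotations are approximate):

```text
build-example/
├── kernel/       # shared library crate
├── program/      # binary, compiled once per architecture
├── binary-v7/    # image for thumbv7em-none-eabi
└── binary-v8/    # image for thumbv8m.main-none-eabihf
```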

This setup works (and is better than my real code because of the lack of env var stuff) but has some weirdness to it:

  1. cargo build in the root doesn't work. Should it? I'm not 100% sure.

  2. cargo build does work in the two binary-* packages, which is nice, but won't build program

  3. cargo build doesn't work in program because it will try to compile for the host.

    • This could be fixed by picking a random target and creating a .cargo/config for it, I guess
    • I am not sure that this is really solvable since it's going to be built for two different architectures by two different builds
    • In the real system there are a bunch of programs, and so them each needing to copy around this .cargo/config at random is annoying and slightly confusing when you're getting started
    • This also means that tools like rust-analyzer get confused; I have red squigglies about "found duplicate lang item panic_impl" because it is building it for test, which brings in std
  4. There are link.xs in each binary's subdirectories, but it seems like only the one at the root is needed. This isn't a problem, really, but it is a bit weird; I would expect it to look at the package's root directory, not the workspace root.

    • In my real system, a build script writes these into OUT_DIR, because their structure depends on the rest of the build.
  5. Because a workspace means there's a single Cargo.lock, we get the union of all features; if I add a program2 in the future that shares a dependency with program1 but with different features, both builds will get the union of the two feature sets, and that's a bit awkward.

    • This may not be a problem in my case, but it is a bit weird! This is the semantic that led me to look further into all of this in the first place, actually.
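For the build-script trick mentioned in #4, the pattern is roughly this (a simplified sketch, not my actual build.rs; the script contents and search-path handling are stand-ins):

```rust
// Hypothetical build.rs sketch: generate the linker script at build time,
// write it into OUT_DIR, and point rustc's linker search path at it.
use std::{env, fs, path::{Path, PathBuf}};

// Write the generated linker script into `dir` and return its path.
fn emit_linker_script(dir: &Path, contents: &str) -> PathBuf {
    let path = dir.join("link.x");
    fs::write(&path, contents).expect("failed to write link.x");
    path
}

fn main() {
    // In a real build.rs, Cargo always sets OUT_DIR; the fallback is only
    // so this sketch runs standalone.
    let out = env::var("OUT_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|_| env::temp_dir());
    let script = "/* contents depend on the rest of the build */";
    let path = emit_linker_script(&out, script);
    // Make the generated script findable by `-T link.x`-style flags.
    println!("cargo:rustc-link-search={}", path.parent().unwrap().display());
}
```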

To try out an alternative build for #5, I tried making binary-v7 and binary-v8 into their own workspaces. Conceptually, that's what they probably should be; while they share programs, they're their own whole builds. However, when trying to add the other programs as members, I would get

error: workspace member '../kernel' is not hierarchically below the workspace root 'Cargo.toml'

(Or similar, this is from memory)

This makes it impossible since I want these crates to be in two workspaces at the same time, so I guess that strategy is just out.

Here's a second commit that adds an xtask to coordinate the build:
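The gist of it is a little program that shells out to cargo build in each package directory and then does post-processing; a simplified sketch (not the actual code from the commit) looks like:

```rust
// Simplified xtask sketch: run `cargo build` per package, then post-process.
use std::process::Command;

// Which package directories an image build compiles, in order.
// kernel is built implicitly as a dependency of the binary-* crate.
fn build_plan(image: &str) -> Vec<&'static str> {
    match image {
        "binary-v7" => vec!["binary-v7", "program"],
        "binary-v8" => vec!["binary-v8", "program"],
        other => panic!("unknown image: {}", other),
    }
}

fn main() {
    match std::env::args().nth(1) {
        Some(image) => {
            for dir in build_plan(&image) {
                // Each package carries its own .cargo/config with the right
                // target and rustflags, so a plain `cargo build` works here.
                let status = Command::new("cargo")
                    .arg("build")
                    .current_dir(dir)
                    .status()
                    .expect("failed to run cargo");
                assert!(status.success(), "build failed in {}", dir);
            }
            println!("some post-processing!");
        }
        None => eprintln!("usage: cargo xtask <binary-v7|binary-v8>"),
    }
}
```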

This does seem to work pretty well:

build-example on 📙 master via 🦀 v1.48.0-nightly
❯ cargo xtask binary-v7
   Compiling xtask v0.1.0 (C:\Users\steve\Documents\build-example\xtask)
    Finished dev [unoptimized + debuginfo] target(s) in 0.43s
     Running `target\debug\xtask.exe binary-v7`
   Compiling kernel v0.1.0 (C:\Users\steve\Documents\build-example\kernel)
   Compiling binary-v7 v0.1.0 (C:\Users\steve\Documents\build-example\binary-v7)
    Finished dev [unoptimized + debuginfo] target(s) in 0.17s
   Compiling program v0.1.0 (C:\Users\steve\Documents\build-example\program)
    Finished dev [unoptimized + debuginfo] target(s) in 0.08s
some post-processing!
build-example on 📙 master via 🦀 v1.48.0-nightly
❯ cargo xtask binary-v7
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
     Running `target\debug\xtask.exe binary-v7`
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
some post-processing!
build-example on 📙 master via 🦀 v1.48.0-nightly
❯ cargo xtask binary-v8
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
     Running `target\debug\xtask.exe binary-v8`
   Compiling kernel v0.1.0 (C:\Users\steve\Documents\build-example\kernel)
   Compiling binary-v8 v0.1.0 (C:\Users\steve\Documents\build-example\binary-v8)
    Finished dev [unoptimized + debuginfo] target(s) in 0.17s
   Compiling program v0.1.0 (C:\Users\steve\Documents\build-example\program)
    Finished dev [unoptimized + debuginfo] target(s) in 0.09s
some post-processing!
build-example on 📙 master via 🦀 v1.48.0-nightly
❯ cargo xtask binary-v8
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
     Running `target\debug\xtask.exe binary-v8`
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
some post-processing!

So, I am guessing that in my real system, the env var parts of the build are really messing this up; this example does seem to do the right thing, so maybe this is the right strategy overall, and I should focus on those issues rather than change my overall approach.

Yes, I think this is a good summary.

Yeah; part of the reason I went with the xtask approach here was that long ago, when we were talking about cargo tasks, xtask was gonna be a polyfill for those; I am guessing that if that RFC is ever accepted (and, well, at this point, written...) it would make all of this nicer, because it would be more standard.

It's a combination of both. The issue with Makefiles is that well, they're not portable. I'm on Windows, and 99% of this stuff Just Works really really nicely, but that means that "just use Make" isn't a real option. I love that Rust's tooling is so cross-platform, and this is just a final pain point. The problem is hard, but it's mostly about fiddly details. I think we'll end up getting there, it's just gonna take some time :slight_smile:

Thank you! I'll look into this.

I was using different directories in target for each project, but given the above, I think that the root issues are not actually with the fundamental setup, and more with the incidental details of my build.

Yeah this is true, it's just that this situation is not that painful yet. :slight_smile:


Please forgive me if this reply sounds like "have you tried changing your distro" :smile:

I think this task script is similar in spirit to the problems I described a while ago: both are basically small Rust programs, written in an ad-hoc fashion, to do some work before or after a "simple" build done by cargo/rustc.

Your xtask example looks very similar to a Makefile. I want to suggest, "have you tried Meson as your build driver", because it is more portable than Makefiles, and supports cross-compilation, and well-defined intermediate build stages, and multiple targets with different options, etc. etc. etc., but you won't find it a seamless fit with Cargo right now. I think it may work fine for you if you let it run Cargo as a black box that produces binaries (which is, for example, what GNOME programs that use Meson+Cargo currently do).

Hehe, it's cool, I get it. It's not an option for a few reasons; but regardless of my specific situation, I'm trying to figure out how good I can make this without throwing it all out and doing something else. If we never push Cargo's boundaries, it will never grow into being a good fit for these projects.


For rust-analyzer, take a look at the checkOnSave family of options. IIRC, we by default pass --all-targets, which causes building the tests.

There's also checkOnSave.overrideCommand, which you can use to run an xtask for checking; that will give you full control.
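In VS Code settings that looks something like this (key names from memory, so double-check against the manual; note the override command needs to emit JSON diagnostics, e.g. by passing --message-format=json to cargo):

```json
{
  "rust-analyzer.checkOnSave.allTargets": false,
  "rust-analyzer.checkOnSave.overrideCommand": ["cargo", "xtask", "check"]
}
```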


Thanks, the example project helps a lot with understanding the issue. Also glad to see that narrowing it down to a minimal, self-contained example helped you isolate some of your issues; I find that technique to help out a lot.

One problem I noticed with your example is that you built binary-v8 twice in your xtask, instead of program. I applied the following to fix that:

diff --git a/xtask/src/ b/xtask/src/
index d8ec4c3..f400156 100644
--- a/xtask/src/
+++ b/xtask/src/
@@ -44,7 +44,7 @@ fn v8() {
-        .current_dir("binary-v8");
+        .current_dir("program");
     command.status().expect("failed to execute process");
     // put it all together

To try out an alternative build for #5, I tried making binary-v7 and binary-v8 into their own workspaces. Conceptually, that's what they probably should be; while they share programs, they're their own whole builds. However, when trying to add the other programs as members, I would get [an error]

You don't need to be using workspaces for these at all; you can still use path dependencies between crates which are not in the same workspace. You do have to do a little bit of work to not accidentally get the crates into the top-level workspace, which the xtask still needs to be in. In this example I also set the target dir to the top level just to make it easier to find all of the binaries:

diff --git a/Cargo.toml b/Cargo.toml
index bdb2d02..4ed7beb 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,8 +1,10 @@
 members = [
+    "xtask",
+exclude = [
-    "xtask",
diff --git a/binary-v7/.cargo/config b/binary-v7/.cargo/config
index 839963f..1b319e1 100644
--- a/binary-v7/.cargo/config
+++ b/binary-v7/.cargo/config
@@ -5,3 +5,4 @@ rustflags = [
 target = "thumbv7em-none-eabi"
+target-dir = "../target"
diff --git a/binary-v8/.cargo/config b/binary-v8/.cargo/config
index 25722ca..602970f 100644
--- a/binary-v8/.cargo/config
+++ b/binary-v8/.cargo/config
@@ -5,3 +5,4 @@ rustflags = [
 target = "thumbv8m.main-none-eabihf"
+target-dir = "../target"

It seems to me that the intent behind workspaces is mostly for large applications which are factored into a number of small crates, or large facade crates which are broken up into a number of smaller implementation crates: you want to be able to work on one sub-crate at a time, but you ultimately plan on linking them all together, so you want a consistent Cargo.lock.

If you don't want them to share features and dependency resolution, and thus a Cargo.lock, then they should be in different workspaces; but you can use a top-level cargo xtask (or your preferred automation solution) to make it easier to kick off builds of everything.

I suppose there is a use case in which you might want certain groups of programs to share dependency resolution while others don't. For instance, if you are doing some kind of custom linking step, you might want all of your v7 programs to share one set of dependencies and all of your v8 programs to share another; you could have your v7 programs in one workspace and your v8 programs in another. For something like program, which shares code but might need to be in both builds, you might need to make it a library crate and create a binary crate that imports it in each of the workspaces.

A couple of possible layouts for that case:

    Cargo.toml: [workspace] exclude = [ "v7", "v8", "program", "kernel" ]
    v7/                        # directory names reconstructed/illustrative
        Cargo.toml: [workspace]
        binary-v7/
            Cargo.toml: [dependencies] kernel = { path = "../../kernel" }
                                       program = { path = "../../program" }
    v8/
        # ... like above ...

Or alternatively:

    v7/
        Cargo.toml: [workspace] [package]    # workspace root is itself a crate
        program-v7/                          # hypothetical wrapper binary
            Cargo.toml: [dependencies] program = { path = "../../program" }
    # ... etc ...

Yeah, I had done so, but not in a publishable state :slight_smile: It's good for sure.

whoops! copy/paste error there yep. Thanks!

The issue is that these are binaries, not libraries, so they can't be depended upon.

Thank you!

I went looking for a C++ build system last week, and found GN from Google has Rust support. Hope this suggestion isn't unwelcome here. :sweat_smile:

I think it checks some of the boxes above, such as cross-compilation and incremental builds from a Windows host. Not sure about the post-processing bit (action target maybe?).

Presentation with some background:

Probably not related, but Bottlerocket is using Cargo in a very weird way; maybe you can get some inspiration from it.


Circling back to this, I did find one other interesting thing I figured I'd share: my real project here is more like ~30 crates, and I did an experiment where I moved everything from one big workspace to keeping them as separate crates. The workspace version takes 4x longer to compile. The reason for this seems to be that every crate ends up with the worst-case time; many of the packages have a 0.03 to 0.05 second rebuild time, but one or two have way more dependencies and take half a second. With the big workspace, everything takes half a second. I haven't technically confirmed it's the number of dependencies that's the issue but the "one big workspace" version makes 5x the syscalls of the one that's broken up.


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.