Why do we need procedural macro separation on a crate level?

I have some questions about the overall structure of proc-macro and proc-macro2.


I want to be clear about two things:

  1. This topic seems to dive deep into the compiler, so I'm sure there's more that I don't know about.
  2. I believe I understand why the difference exists (at least currently, circa 2024). The source I read before writing this is an early (if not the first) RFC about procedural macro crates, from back in 2016:

I have also seen other posts about this issue in previous discussions, especially this one, which links to the same RFC and to other good discussions and information about this topic.

Here is an excerpt from the RFC that defines this difference in crates:

We introduce a special configuration option: #[cfg(proc_macro)]. Items with this configuration are not macros themselves but are compiled only for macro uses.

If a crate is a proc-macro crate, then the proc_macro cfg variable is true for the whole crate. Initially it will be false for all other crates. This has the effect of partitioning crates into macro- defining and non-macro defining crates. In the future, I hope we can relax these restrictions so that macro and non-macro code can live in the same crate.

From this, using #[cfg(proc_macro)] is ultimately what allows a function to "become" a procedural macro (along with its procedural-macro-type attribute).
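For concreteness, the shape this eventually took in stable Rust is a dedicated crate type plus per-function attributes. A sketch (the macro name is made up, and this fragment only compiles inside a crate whose Cargo.toml sets the proc-macro crate type):

```rust
// Only compiles in a crate whose Cargo.toml sets `[lib] proc-macro = true`.
use proc_macro::TokenStream;

// The attribute (`#[proc_macro]` here; `#[proc_macro_derive]` and
// `#[proc_macro_attribute]` are the other two kinds) is what turns this
// ordinary function into a macro.
#[proc_macro]
pub fn passthrough(input: TokenStream) -> TokenStream {
    // A do-nothing macro: return the input tokens unchanged.
    input
}
```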

However, this feels like a patchwork solution, which is hinted at in an even earlier RFC. The goal was to make declarative macros and procedural macros both be "first-class citizen[s] in the Rust module system".

Back to the question

This leads me back to the original question:
Why do we need procedural macro separation on a crate level?
What is stopping proc-macro2 from being a first-class citizen?

I don't know the fine details, but IIRC it had something to do with name resolution of the proc macros themselves.

A proc macro has to be compiled for the host so that rustc can dlopen it, while non-proc-macro crates have to be compiled for the target so that they can be linked into the target executable or dynamic library. Since the host can be different from the target, this implies that proc-macro and non-proc-macro definitions have to be in separate crates.
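Cargo encodes this split through the crate type. A sketch of what a macro crate's manifest looks like (all names hypothetical):

```toml
# Cargo.toml of the macro crate. `proc-macro = true` tells Cargo/rustc to
# always compile this crate for the host, even when the rest of the build
# targets another architecture (e.g. `cargo build --target
# aarch64-unknown-linux-gnu`), so that rustc can dlopen the result.
[package]
name = "my-macros"   # hypothetical
version = "0.1.0"
edition = "2021"

[lib]
proc-macro = true
```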


Or you just need to build the crate two times, which is exactly what is already done for build dependencies when a build dependency is also a regular dependency. Isn't the requirement really about the fact that it is easier to require developers to create new crates than to invent something that allows a crate to simultaneously be a proc-macro crate and a different library type, and to also invent something to break the dependency cycle (if a crate depends on itself, then to build the crate you need to have already built it)? Both should be doable, but not worth the extra work if the solution "just put proc macros into a separate crate" is good enough.
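That double build is already observable today whenever a crate shows up in both dependency sections. A sketch (the crate name is invented):

```toml
# Consumer's Cargo.toml. Cargo compiles `shared-utils` twice: once for
# the host (so build.rs can use it) and once for the target.
[dependencies]
shared-utils = "1.0"

[build-dependencies]
shared-utils = "1.0"
```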

This makes me think that despite the ±50% runtime performance hit, compiling proc-macros to WASM regardless of the actual target arch, and then just having rustc use that compiled WASM, starts to make more sense. It might even be able to do away with the separate crate obligation, assuming compiling to, and using, WASM, can be an unobservable implementation detail. Which it might well be when using WASI.

And that's before considering other potential benefits.

It's "good enough" in the sense that it works on the technical level.
But the moment it started grating is long since in the rear-view mirror; i.e., it's pretty human-hostile, because it forces unnatural architectures on Rust projects. For example, if you have a library that in a perfect universe could just contain a proc-macro, and have the library also provide abstractions using that macro, and you want to offer it to consumers as a single crate, then suddenly you don't have 1 crate, but a workspace of at least 3 crates: the core library crate, the proc-macro crate, and a façade crate to provide the unified API.
And that's a lot of incidental complexity (and maintenance work!) for what is conceptually dead-simple.
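To illustrate the three-crate shape, here is a sketch of such a workspace (all names invented):

```toml
# Workspace Cargo.toml
[workspace]
members = ["my_lib", "my_lib_core", "my_lib_macros"]

# - my_lib_core:   the actual types and logic
# - my_lib_macros: the proc-macro crate (`[lib] proc-macro = true`)
# - my_lib:        the façade; its lib.rs just re-exports the others:
#       pub use my_lib_core::*;
#       pub use my_lib_macros::my_macro;
```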

In other areas Rust is actively human-friendly, enlarging the contrast all the more.

As to the maintenance point, it's happened more than once to me that I forgot to update 1 of the 3 Cargo.toml files in just the right way before pushing a release, forcing corrective releases right after. That's fun, and useful, for exactly nobody.
Now one might be tempted to argue "just be more careful", to which there is some merit.
But I'll immediately counter that with the example of manual memory management in C. Literal decades of "just be more careful" have not yielded the desired results (not even close in fact), because it's simply not possible to be perfect every time as a human being.


Interesting idea, but I think there's more to this. The only issue I could think of is what happens if the host cannot run WASM itself. The host would implicitly need to depend on WASM support, wouldn't it? On the flip side, if the host compiles proc-macros targeting its own platform, then it's expected to work out of the box.

I've had issues similar to this in the past, and I find it especially annoying during the first phases of development on a new project. Since entire implementations or code structure can change at the beginning, managing crate dependencies within two separate crates seems tedious.

Suppose we do have a crate system that allows proc-macros and regular Rust code to be developed in the same crate. How would we design it? I asked a similar question about this post on the r/rust subreddit, and user u/Eh2406 raised an additional issue that would need to be solved. I recommend reading their full argument. For my reply, I'll just include a snippet of what they said:

If you have a crate that is both a library and a proc macro,
we need a way to express a bunch of details that fall out
naturally from them being separate crates. Does the macro
depend on the library or the other way around? There are macros
that rely on libraries, and other macros that are used to
construct libraries. So neither choice is obviously correct.
But they need to be built in some order, so they cannot be
mutually recursive. Similarly, for each dependency: is it required
by the macro or the library?

It continues, but it raises a question: what is the smallest level of separation between macro code and regular code we can achieve? Obviously, we know that crate-type separation is sufficient and works on a technical level. Can we make this separation smaller, say just a module-level separation?

I don't have the answer to this, but it seems important. If we were to have a crate system that allows proc-macros and regular Rust code to be developed in the same crate, then I think this issue would need to be addressed.

I came up with a concept of my own, which I'm sure is not perfect and would need some work, but I thought I'd include my brainstorm here anyway.

A Concept

The Concept: proc-macro modules

I wrote some pseudo-Rust code to demonstrate one way this might be solved. What if we had something like this:

//! "Regular" binary Rust crate

// Imports for the binary
use std::collections::HashMap;

/// The attribute below is just a placeholder for whatever the real attribute would be
#[proc_macro_module]
mod helpful_proc_macros {
    //! Proc-macro development would be enabled in this environment:
    //! effectively a clean "proc macro" environment. The only things
    //! publicly accessible from outside this module are proc-macros.
    use proc_macro::TokenStream;

    pub fn super_helpful_macro(input: TokenStream) -> TokenStream {
        // ...expand the input tokens here...
        input
    }
}

fn main() {
    let var: HashMap<&str, i32> = HashMap::new();
    // does our helpful thing
    helpful_proc_macros::super_helpful_macro!(var + 1);

    println!("{:?}", var)
}

The code itself is not too important. The idea is the dividing line between a proc-macro module and the library it lives within. Do we allow types from outside the proc-macro module into the proc-macro module? If so, what happens if we use that macro outside the module in an implementation for the very type the module is using?

My own attempt at answering this is to allow imports into a proc-macro module from its parent module only if the parent module is a proc-macro module too. This feels like it changes a problem of "red crate, blue crate" into a problem of "red module, blue module", but at least we stay in the same crate and depend on the same Cargo.toml file.
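In pseudo-Rust, the rule might look like this (the attribute name is invented; none of this is real syntax today):

```rust
#[proc_macro_module]             // hypothetical attribute
mod outer {
    pub struct Token;            // ordinary item inside a proc-macro module

    #[proc_macro_module]
    mod inner {
        use super::Token;        // OK: the parent is also a proc-macro module
    }
}

mod regular {
    pub struct Ast;
}

#[proc_macro_module]
mod macros {
    // use crate::regular::Ast; // ERROR under this rule: the parent
    //                          // (the crate root) is not a proc-macro module
}
```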

I would love to hear other proposals to this issue.


My RFC Nested Cargo packages would improve this situation by allowing you to publish the three crates as one package to crates.io, with the nested packages not subject to version numbering.

(I don't think of this as a better solution than being able to define proc-macros in the same crate, just a pragmatic one that builds on systems that already exist.)


I guess that, due to the existence of e.g. Wasmer, every host that can run the Rust compiler will be able to run WASM.


Wasmer has three backends, all of which only support x86_64 and arm64 afaict. The only options for more exotic architectures supported by rustc would be either a wasm interpreter or something that has both an LLVM and a GCC backend and additionally doesn't have any architecture-specific implementation details. (The LLVM backend of Wasmer fails this last requirement.)


Well, fast-forward far enough into the future and I fully expect Wasmer to support all arches that Rust supports. Think BSc or MSc projects, or just some dev doing it for the hell of it :slight_smile:

The real (and currently unanswerable) question is how far into the future we would have to fast-forward.
