Simple "state" in procedural macro

I am trying to implement a custom derive macro for a `Component` trait. A component requires all implementing structs to have a `const` `u16` named `XXCOMPONENT_BITS` that identifies what type of component it is. So, when generating the derive proc macro, I need some way to increment a variable representing the bit field value. This is sort of what I am trying to do currently:

use proc_macro::TokenStream;
use quote::{format_ident, quote};
use syn::{parse_macro_input, DeriveInput};

static mut COUNT: u16 = 0;

#[proc_macro_derive(Component)]
pub fn component_derive(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = input.ident;
    let concat = format_ident!("{}_{}", name, "COMPONENT_BITS");
    let count = unsafe { COUNT += 1; COUNT };
    let gen = quote! {
        // create constant representing this component's bit mask
        impl Component for #name {
            const #concat: u16 = #count;
            fn get_component_bits() -> u16 {
                Self::#concat
            }
        }
    };
    gen.into()
}
I know this shouldn't work (and it doesn't), but I am not sure how to go about making it work. Rust in general seems to discourage global mutable state, but I am not sure of a method to achieve this functionality without it.

There is no clean way to handle this.

  • Using static memory (or worse, `static mut`! Don't use that: use an `AtomicUsize` or something similar, but never `static mut`) may break at any point, since you can't know how and when your proc-macro is called: its global memory could be refreshed in between invocations, for instance.

  • Accessing the fs may be a tiny bit more robust than static memory, but:

    • some compilation environments could end up sandboxing proc-macros. That would break crates such as `sqlx`, so I no longer think this may happen as a global thing, but rather as an opt-in thing for dependents. Still, your crate would become unusable for that kind of user;

    • Caching: the very invocation of a proc-macro may be cached, so working off the fs could yield unexpected results if, for instance, some proc-macros are called multiple times and others not (on top of concurrency issues as well). That being said, on nightly, and thus hopefully in future stable Rust, there is / will be the `::proc_macro::tracked_path::path` API (yes, it's a mouthful; I hope they change it to `tracked::path` or something: we don't see `ptr::ptr_addr_of!` or `ptr::NonNullPtr` but `ptr::addr_of!` and `ptr::NonNull`, for instance). With it, you'll be able to register your interest in a file's contents, for hopefully more cache-friendly behavior. That being said, while that works for an externally-loaded file (like `sqlx` uses), I'm still skeptical of it working properly for a "cached state" approach.

  • Env vars. Mutating those is currently observable in between compiler invocations. While an interesting theoretical quirk of the current implementation, this is horrible; I hope nobody uses it.

  • Loosen "value pre-computed during Rust's compile-time" to "value pre-computed during link time or during life-before-main". For this, crates such as `inventory` or `linkme` can be quite useful. The caveat is their reduced portability (e.g., `inventory` does not work off the shelf / well on Wasm).
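For completeness, if you do experiment with in-process state despite the caveats above, at least use an atomic instead of `static mut`. A minimal std-only sketch of such a counter (the `next_component_bit` helper is hypothetical, and the state is still unreliable across separate compiler invocations, exactly as described above):

```rust
use std::sync::atomic::{AtomicU16, Ordering};

// Still process-local state: it only lives as long as the proc-macro's
// host process, which is the fragility discussed in the first bullet.
static COUNT: AtomicU16 = AtomicU16::new(0);

fn next_component_bit() -> u16 {
    // `fetch_add` returns the previous value, so the first call yields 1 << 0.
    1u16 << COUNT.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    assert_eq!(next_component_bit(), 1);
    assert_eq!(next_component_bit(), 2);
    assert_eq!(next_component_bit(), 4);
}
```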

The more robust (but more cumbersome!) solutions

The key idea is: handle all your "annotated items" within a single sweep. Thanks to that, there is no need for global state: local state within that sweep serves us just fine 🙂

1. A single proc-macro invocation sees all the annotated items.

This is the approach taken by `cxx`, for instance. Choose an inline module, an impl block, or something along those lines, expect that the macro be called on it (or, similarly, just take a function-like macro to define your own scope), and have all the potentially annotated items (e.g., your type definitions) be syntactically / lexically present inside that scope:

my_preprocessor! {
    #[derive(Foo)] // <- `Foo` is not a real macro; it's a syntactical marker for `my_preprocessor!` to handle `Bar`.
    struct Bar { /* ... */ }

    struct Ignored { /* ... */ }

    #[derive(Clone, Foo, Debug)] // <- another one detected!
    struct Quux { /* ... */ }
}

// or

mod some_module {
    struct Bar { /* ... */ }
    // ...
}

This works surprisingly well, but does come with the caveat of requiring that all the annotated stuff be located within a single file.
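The payoff of the single sweep is that bit assignment becomes plain local state. A toy sketch of that idea (no real parsing; `assign_bits` and the string item names are hypothetical stand-ins for what the macro would extract with `syn`):

```rust
// Given the annotated items found in one sweep, assign each a distinct bit.
// All state is local to this one function call: no globals needed.
fn assign_bits(annotated: &[&str]) -> Vec<(String, u16)> {
    annotated
        .iter()
        .enumerate()
        .map(|(i, name)| (name.to_string(), 1u16 << i))
        .collect()
}

fn main() {
    let bits = assign_bits(&["Bar", "Quux"]);
    assert_eq!(bits[0], ("Bar".to_string(), 1));
    assert_eq!(bits[1], ("Quux".to_string(), 2));
}
```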

If you want to support definitions scattered across multiple files, then you'll need to use:

2. A script which scans your code

With a script, you are in control of a single sweep of your codebase, should you implement one. The caveats are having to implement one, and, mainly, not being able to handle macro-generated modules, cfg-gated modules, or other advanced shenanigans. So it's not super versatile either, but it does support handling definitions scattered across multiple files.

With it, the script can generate the necessary stuff in helper files, to be emitted into `OUT_DIR`. If such files, alone, are not able to do all the work, they can, at the very least, provide the results of the global state. A proc-macro could then just expect those files to be present, and `include!` them.
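As a sketch of what such a generated helper file could contain, here is the string-generation half of a build script (the `generate` helper and the constant-naming scheme are assumptions; a real `build.rs` would write the output to the path in the `OUT_DIR` env var, and the crate would `include!` it from there):

```rust
// Generate Rust source assigning one bit per discovered component.
// A real build script would obtain `components` by scanning src/.
fn generate(components: &[&str]) -> String {
    components
        .iter()
        .enumerate()
        .map(|(i, name)| {
            format!(
                "pub const {}_COMPONENT_BITS: u16 = {};\n",
                name.to_uppercase(),
                1u16 << i
            )
        })
        .collect()
}

fn main() {
    // In a real build.rs, this string would be written to a file in OUT_DIR
    // instead of printed.
    print!("{}", generate(&["Bar", "Quux"]));
}
```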

These solutions are still a bit brittle if the user code does stuff too confusing for the basic code scanning logic, but with some cooperation from the caller / user of your framework, that's something that can be dealt with / managed. Compare that to proc-macros which may have, at some random upgrade of the compiler, inconsistent global state or inconsistent access to the fs, and you end up with a source of problems that neither you nor the caller can do much about.


Thank you for the very informative response!

Another approach that can work is to use some kind of hash or random value that’s extremely unlikely to collide instead of a sequential counter. You could, for example, generate a UUID whenever the proc macro is called. For this to work reliably, though, you’ll need a lot more than 16 bits (UUIDs are 128 bits).
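A std-only sketch of that stateless idea, hashing the type name instead of counting (the `component_id` helper is hypothetical; note that std's `DefaultHasher` algorithm is not guaranteed stable across Rust versions, so a real implementation would pin a hash such as FNV, and, as noted, use far more than 16 bits):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a pseudo-unique id from the type name alone: no shared state,
// and the same input always yields the same id within one toolchain.
fn component_id(type_name: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    type_name.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Distinct names get (with very high probability) distinct ids.
    assert_ne!(component_id("Bar"), component_id("Quux"));
    // Deterministic: re-hashing the same name reproduces the id.
    assert_eq!(component_id("Bar"), component_id("Bar"));
}
```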


Yep! For the XY problem, yes, you can circumvent the issue with a stateless, collision-free (with very high probability) algorithm; and in this instance it does feel like the best approach, but given the name of the thread, I preferred to stick to answering the broader question 👍


This is actually what I went with in the end. I tried to implement one of @Yandros' methods and it was a bit too cumbersome for such relatively simple functionality. Rust has a convenient UUID crate that I utilize elsewhere in my project, so I just use that. It's technically memory-inefficient, but in the grand scheme of things, it should be fine.
Thank you all the same though, @Yandros. I might attempt these in the future as an optimization, if it somehow ever becomes necessary, and as my Rust abilities increase.

I'm not sure I would make such a strong assertion. A UUID is just 16 bytes.

Anyway, an interesting factoid to note is that the Rust compiler itself uses a similar approach for generating names that are unique per build. Function names have a suffix that is generated from a hash identifying some metadata about the source code (I'm not sure exactly what, but I know it changes if you change the code). The same is true for the built-in dynamic typing mechanism underlying Any, i.e. TypeId – the type ID of a type is just a hash which is assumed to be a collision-free pseudorandom label that stands in for a more complex representation of a type.
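The `TypeId`-as-opaque-hash behaviour mentioned above is easy to observe directly:

```rust
use std::any::{Any, TypeId};

fn main() {
    // Two queries for the same type agree; distinct types get distinct ids.
    assert_eq!(TypeId::of::<u32>(), TypeId::of::<u32>());
    assert_ne!(TypeId::of::<u32>(), TypeId::of::<String>());

    // `Any` downcasting is built on exactly these ids.
    let boxed: Box<dyn Any> = Box::new(5_u32);
    assert_eq!(boxed.downcast_ref::<u32>(), Some(&5));
}
```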

It should only change if the function name or signature changes or if compilation flags change. Before incremental compilation was implemented it included the crate hash, but including this now would effectively break incremental compilation as no codegen units would be reusable on any change.


Yeah, fair enough. As I said, I'm not aware of the specifics of this (I'm not super familiar with low-level internals of rustc); I was merely trying to add another anecdotal data point in favor of the stateless approach.


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.