Straightforward compile time evaluation

There are several use cases for compile time evaluation of expressions which are not const / deterministic. This has led to multiple specific implementations such as include!, include_str!, include_bytes!, env!, and option_env! being added to std, but they don't satisfy all use cases.

One example of something I think could be easier is tagging a dev binary with the Git branch name. As far as I'm aware, the only ways to do this are either using a proc macro or a build script, but neither is very straightforward in my opinion, and there aren't a ton of docs explaining simple use cases like this.

It isn't an insurmountable issue, but for this use case it feels like a fence you need to throw your backpack over, climb over, then pick up the backpack again (if that makes sense): you have to run some code, set an environment variable with the output, then on the other side read the env var with env!, which is still going to be a string that may require reparsing.
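For concreteness, the fence-climbing today looks something like this: a build script runs the command and exports the result, and the crate reads it back with env!. A sketch (the variable name GIT_BRANCH is just chosen here for illustration):

```rust
// build.rs -- runs before the crate is compiled; shells out to git
// and passes the branch name over the "fence" via an env var.
use std::process::Command;

/// Best-effort lookup of the current branch; empty string on failure.
fn git_branch() -> String {
    Command::new("git")
        .args(["rev-parse", "--abbrev-ref", "HEAD"])
        .output()
        .ok()
        .and_then(|o| String::from_utf8(o.stdout).ok())
        .map(|s| s.trim().to_string())
        .unwrap_or_default()
}

fn main() {
    // Cargo forwards this directive to rustc, which is what makes
    // env!("GIT_BRANCH") resolve in the main crate.
    println!("cargo:rustc-env=GIT_BRANCH={}", git_branch());
}
```

On the other side of the fence the crate then does something like `const VERSION: &str = concat!("v1.2.3-", env!("GIT_BRANCH"));` — and, as noted, the value only ever arrives as a string.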

Is there a reason why we can't / shouldn't have something like:

```rust
let v = eval {
    // arbitrary code, run at compile time; the result is embedded in the binary
};
```

or even just a generic eval! macro?

This would be able to replace all the specific macros in std and provide a straightforward way to handle other use cases like mine, as well as (in my opinion) being more readable, understandable, and declarative, and less prone to error (since you wouldn't need to pass values through environment variables or deal with strings).


Or you can generate a bit of source code and include! it directly, without any reparsing. That is the recommended way, AFAIK.
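That pattern looks roughly like this (a sketch: a build script writes generated Rust source into OUT_DIR; the filename generated.rs is made up here):

```rust
// build.rs -- writes real Rust source into OUT_DIR so the main crate
// can pull it in with include! and get a typed constant, no reparsing.
use std::{env, fs, path::Path};

/// Render the value as Rust source; Debug formatting handles the
/// quoting and escaping of the string literal.
fn generated_source(branch: &str) -> String {
    format!("pub const BRANCH: &str = {:?};", branch)
}

fn main() {
    // Cargo sets OUT_DIR for build scripts; fall back to "." so the
    // sketch also runs standalone.
    let out_dir = env::var("OUT_DIR").unwrap_or_else(|_| ".".into());
    let dest = Path::new(&out_dir).join("generated.rs");
    fs::write(&dest, generated_source("feature_a")).unwrap();
}
```

The crate itself then contains `include!(concat!(env!("OUT_DIR"), "/generated.rs"));`, and from that point on BRANCH is an ordinary const, not a string to reparse.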

I don't understand what your proposed eval would exactly do, but if it can be done via a proc-macro, then that in itself is enough reason not to create a separate feature for it. The whole point of macros is to extend the language in (almost) arbitrary ways without the need for packing every last niche use case into the core language, and without you having to wait months for that feature to land.


What do you mean by tagging the binary? What exactly are you trying to do? I have the feeling that we might be able to help you better and more productively if you explained your desired goal instead of directly jumping to feature requests.

I think this is a really good analogy!

I know people have been talking about improving proc macro ergonomics so you can write them in the same crate that uses them.

Combine that with things like proc_macro::tracked_env::var() and proc_macro::tracked_path::path() for telling the compiler when your proc macro depends on a particular environment variable/file (so it knows when to re-run) and we'd get pretty close to what you are looking for.


That in itself would be a huge improvement.


I know how to do what I want to do, I'm just interested in why it has to be that way.

By tagging a binary, I mean embed the Git branch into the version so we can track who is using what dev build. E.g. myprogram --version outputs something like v1.2.3-feature_a.

I agree. I don't know enough about that side of things (which is why I'm posting here), but I do agree that if it can be a macro instead of a keyword, that would be ideal. My main point is just having some generic tool, so that instead of all those specific implementations in std we could write e.g. eval!(env::var("FOO")).

Just to be clear, I mean something completely generic - not just for env vars / paths. My use case would be using the hypothetical eval! macro / keyword with a block which would run a Git command and extract the branch name, and the entire block would evaluate to a string literal. However, it could be anything else, too. E.g. make an HTTP request and embed the response.
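For what it's worth, a proc macro's body already runs at expansion time, so the Git case is expressible today. The expansion logic would boil down to something like this (a sketch; in a real proc-macro crate this function would be wrapped in a #[proc_macro] fn returning a TokenStream, and the macro name git_branch! is hypothetical):

```rust
use std::process::Command;

// What a hypothetical git_branch!() macro would do while the compiler
// expands it: run git, then emit the branch name as a string literal.
fn expand_git_branch() -> String {
    let branch = Command::new("git")
        .args(["rev-parse", "--abbrev-ref", "HEAD"])
        .output()
        .ok()
        .and_then(|o| String::from_utf8(o.stdout).ok())
        .unwrap_or_default();
    // Debug formatting quotes and escapes, yielding a valid Rust
    // string literal to splice into the caller.
    format!("{:?}", branch.trim())
}

fn main() {
    println!("{}", expand_git_branch());
}
```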

Hi, it's my first post here...
If I understand your problem correctly, you'd like to have the current Git branch name compiled into your project as a version reference.

How about creating a Git hook (post-checkout, to be exact) which sets a predefined env variable, which you can then consume at compile time? (I think)


Thanks, but my post here is to discuss the hypothetical eval! macro. I used my Git use case just as a concrete example of why a generic compile-time evaluation interface would be useful.

Would this double compile time? I.e.

  1. compile once

  2. run macro code

  3. get new code

  4. compile a 2nd time

Would 'cargo check' now also have to build a binary first?

Wouldn't it create more problems than it solves?
Compile-time evaluation (= execution) of arbitrary code has serious security implications: you would have to run it sandboxed to prevent arbitrary code execution during compilation. Imagine your eval {...} doing a wget and compiling into your binary some malicious source you have no control over, without even being able to do a cursory review. How would you ever be able to guarantee that you didn't include malware...
Having a 2-step process via a static file/env var has advantages, not only drawbacks.

I don't think so. The dependency structure of the code doesn't change.

proc-macros can introduce new structs/enums/traits/fns/impls right? I don't see how you avoid compiling a 2nd time.

Based on this logic, how do you avoid compiling a 2nd time currently? Or, I just don't see why this is relevant. The proc-macro part of the crate would need to be compiled first, then it would be run (generating the rest of the code), then the rest would be compiled. Or are you planning to use items in the proc-macro itself that the proc-macro itself generated? I don't think that's possible, and even if it is, it shouldn't be allowed.

No. You can already do all this with build scripts and procedural macros.


Normally, we have:

crate c0: defines procedural macro foo

crate c1: uses procedural macro foo

  • we make a change to c1, hit recompile -- because c0 did not change, it does not need to be recompiled, it uses old binary to call foo to expand in c1

  • if we ever change c0, of course, then we have to recompile c0


crate d: defines procedural macro foo, uses procedural macro foo

  • we make a change to d; now it seems we have to compile it twice: once because we don't know whether the def of foo changed, then we run it to expand the foo's

I think because proc-macros are already specially known to the compiler, it could just hash the proc-macros separately in order to detect whether they changed. Anyway, doesn't the same problem exist today w.r.t. declarative macros?

Disclaimer: I am NOT a rustc engineer. This is all from intuition.

I would agree with you that rustc could, in theory, take "crate d", split it into

"crate d proc macro" and "crate d everything"

then check to see if "crate d proc macro" has changed since last time

However, at this point, it's basically generating 2 crates behind the scenes.

I think declarative macros work differently: being a separate DSL rather than Rust code, they can just be run during parsing.