`#[cfg()]` w/ arbitrary expressions

In C/C++, you can do conditional compilation like this:

    #if DEFINED_VALUE > 90
        // Some code here
    #elif DEFINED_VALUE <= 90
        // Some other code here
    #endif

In Rust, configuration options can be set as key-value pairs, but AFAICT there is no way to evaluate an expression like DEFINED_VALUE <= 90 within a #[cfg()] block for conditional compilation. Is there a way to allow for a range of values within cfg?


Looks like only equality (=) can be used: Attributes

I don't see another crate that allows other expressions either, although I may be missing something.

You can have a build script evaluate whatever condition you want, and produce simpler cfgs for your code to use. If-else chains can be handled with cfg-if.
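For instance, the code side of that pattern can branch on a single boolean cfg that the build script would emit via `println!("cargo:rustc-cfg=defined_value_le_90")` after evaluating the range check. A minimal sketch (the cfg name and the constants are hypothetical):

```rust
// lib.rs: the C-side `#if DEFINED_VALUE <= 90` has been reduced by the
// build script to one boolean cfg flag, which we branch on here.
// `defined_value_le_90` and MAX_CLIENTS are illustrative names.
#[cfg(defined_value_le_90)]
pub const MAX_CLIENTS: u32 = 90;

#[cfg(not(defined_value_le_90))]
pub const MAX_CLIENTS: u32 = 128;
```

The range arithmetic lives entirely in the build script; the code only ever sees simple boolean flags.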


The relevant nonterminal is actually MetaItemInner which can be an expression, as we’re inside the "argument list" of an attribute. AFAICS #[cfg(foo < 42)] is permitted by the attribute syntax, but rejected by the cfg attribute itself. The mini-language supported by cfg is defined by the ConfigurationPredicate production.


Do you have a more specific example of what you're trying to do?

In C, this kind of pattern is often used for various compile-time programming and const evaluation, and you don't need this kind of hack in Rust, which has proper const generics (although they are not as powerful as in C++ yet). Another common case is checking for compiler versions or language features, and neither of those are necessary in Rust. Checking for OS versions or system library versions is usually done via build scripts, as @kpreid said.

Note that cfg keys can be multivalued, e.g. both cfg(target_family = "unix") and cfg(target_family = "wasm") can be true simultaneously (this is the case for wasm32-wasi). So instead of a test like #if __cplusplus >= 2020'02L && __cplusplus < 2023'02L, Rust would prefer writing that test like #[cfg(all(has_cplusplus = "2020.02", not(has_cplusplus = "2023.02")))] — instead of providing a monotonically increasing constant, actually directly encode the boolean tests that the implementation should be asking.
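In code, the two boolean tests compose with `all`/`not`; the `cfg!` macro evaluates the same predicate to a runtime boolean. A sketch, assuming a hypothetical multivalued `has_cplusplus` cfg (which is not set in a default build, so this returns false here):

```rust
// Express the range `2020.02 <= version < 2023.02` as two boolean cfg
// tests rather than a numeric comparison. `has_cplusplus` is a
// hypothetical multivalued cfg key; a build would set every value it
// supports, not just the newest one.
fn in_cpp20_range() -> bool {
    cfg!(all(has_cplusplus = "2020.02", not(has_cplusplus = "2023.02")))
}
```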

Or even better yet, directly encode these kinds of monotonically increasing feature test macros into package semver constraints.


I'm using bindgen to generate FFI bindings to a C library. The library is configurable via defines you provide to it, and I would like to conditionally compile some code depending on how the C library is configured. More specifically, sometimes a symbol is available and sometimes it is not, based on the value of one of these defines.

Ideally, I'd like to use something like #[cfg(accessible(path::to::generated::binding))], but that is not stable, and doesn't look to be stabilized anytime soon.

I think the easiest path is to read the values of defines in a build script, either from some external config file or from custom environment variables, and then set custom cfg flags. You'd have to manually translate range checks in the original code into cfg flags or key-value pairs, but other than that the process is straightforward.
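A minimal sketch of that translation, assuming a hypothetical MYLIB_MAX_CLIENTS environment variable carrying the define's value (all names here are illustrative):

```rust
// build.rs sketch: read a define's value from an environment variable and
// translate the C-side range check `#if DEFINED_VALUE <= 90` into a
// boolean cfg flag for the crate's code to use.
use std::env;

fn cfg_directives(value: u32) -> Vec<String> {
    let mut out = Vec::new();
    // The range check happens here, once, in ordinary Rust.
    if value <= 90 {
        out.push("cargo:rustc-cfg=mylib_small_client_limit".to_string());
    }
    out
}

fn main() {
    // Fall back to a default when the variable is unset or unparsable.
    let value: u32 = env::var("MYLIB_MAX_CLIENTS")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(90);
    for directive in cfg_directives(value) {
        println!("{directive}");
    }
    // Re-run the build script when the configuration changes.
    println!("cargo:rerun-if-env-changed=MYLIB_MAX_CLIENTS");
}
```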

I'm actually doing a very similar thing atm, so the problem space is quite fresh in my mind. The one I'm linking isn't configurable, but details mean that it must be dynamically linked and the Rust bindings cannot control what version of the lib is linked, so it's quite desirable for the Rust bindings to support multiple lib versions. Also, while it could theoretically use bindgen at build time, I'd much rather not. (The headers are actually straightforward enough that it's almost practical to transform them with just a series of regex replaces…)

First and importantly, note that if you use the package.links key (and you absolutely should if you ever export unmangled symbols rather than only ever importing them), your build script can be overridden. Anything you do should ideally be possible to specify declaratively in such an override, because when an override is used, the build script is not run at all. (If your build script would actually build anything, the override instead points to a previously built external artifact.)

So for example, if I do decide to have the sys crate simultaneously support multiple versions, the package configuration for #define MYLIB_VERSION 0x0002'02'21 (16:8:8 BCD product.major.minor) might look like:

rustc-link-lib = ["mylib"]
rustc-link-search = ["/path/to/mylib/bin"]
rustc-cfg = [
    # skip prerelease minor versions 00...02
    # ...
]
bin = "/path/to/mylib/bin"
doc = "/path/to/mylib/doc"
inc = "/path/to/mylib/inc"
version = "00020221"

Any items added in 2.02.17 would be gated by #[cfg(mylib_version_minor = "17")] and an understanding that this is a compatibility bound, not an equality bound, just like cargo dependencies. For any packages downstream of mylib-sys to also be mylib version independent, they would need to regenerate the cfg from the DEP_MYLIB_VERSION env var in their own buildscript.
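A sketch of that downstream regeneration, assuming the 16:8:8 BCD version layout above (the function and cfg names are illustrative):

```rust
// Downstream build.rs sketch: turn DEP_MYLIB_VERSION (which Cargo sets from
// the sys crate's `links` key plus a `cargo:version=...` line) back into
// per-minor cfg flags. Every supported minor up to the linked one is
// emitted, so `mylib_version_minor = "17"` reads as a lower compatibility
// bound rather than an equality test.
fn minor_cfgs(version: &str) -> Vec<String> {
    // version is 8 BCD digits, e.g. "00020221" = product 0002, major 02, minor 21
    let minor: u32 = version[6..8].parse().unwrap_or(0);
    (0..=minor)
        .map(|m| format!("cargo:rustc-cfg=mylib_version_minor=\"{m:02}\""))
        .collect()
}

fn main() {
    if let Ok(version) = std::env::var("DEP_MYLIB_VERSION") {
        for directive in minor_cfgs(&version) {
            println!("{directive}");
        }
    }
}
```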

A bit annoying to write out like that? Definitely; I'm strongly considering not doing this and instead tying the lib version to the package version (the lib checks compatibility at runtime for ABI safety, thankfully) and telling the root bin to add an = version constraint on the sys package instead (they need to handle packaging the dylib anyway for other reasons).

Also fun: what C libraries consider API compatible is not necessarily what Rust considers compatible, even with minimally translated bindings and a library providing ABI compatibility guarantees. Namely, adding fields to a struct is source compatible in C (although the new fields will stay uninitialized), and this can even be made ABI compatible[1] if such a struct is only ever passed by pointer along with some indication of which declaration version it is (commonly via the first field being set to the size of the struct). So -sys crates binding to libraries that play such tricks need either to inject #[non_exhaustive] on any such types or to loudly disclaim that they follow C API compatibility rules, not Rust's.

  1. Having the same named type defined two different ways can easily cause a type based strict aliasing issue per the standard. I don't claim to know whether such trickery is allowable per pure standard C, only that it's compatible if we treat the dylib ABI as an opaque boundary which launders TBAA on either side and allows them to view the bytes communicated over ABI as different types. (Rust doesn't do TBAA and has untyped memory anyway, making such reasoning much simpler.) ↩︎
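The size-field trick reads roughly like this on the Rust side (a sketch; the type and field names are illustrative, not from any real library):

```rust
// Sketch of the "first field carries the struct size" versioning pattern.
// The C library checks `size` before touching fields added in later
// versions, so older callers stay ABI compatible across additions.
#[repr(C)]
#[non_exhaustive]
pub struct MylibOptions {
    pub size: usize, // caller sets this to size_of::<MylibOptions>()
    pub flags: u32,
    // Fields added in later mylib versions would go here, gated on `size`.
}

impl MylibOptions {
    pub fn new() -> Self {
        MylibOptions {
            size: std::mem::size_of::<MylibOptions>(),
            flags: 0,
        }
    }
}
```

With #[non_exhaustive], downstream crates cannot construct the struct literally and must go through a constructor like this, which keeps `size` correct even as fields are added.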


This is exactly what I do already, but there are places where it is still useful or more ergonomic to have range checks. As noted by @CAD97, it is quite annoying to have to read a cfg like #[cfg(mylib_version_minor = "17")] as a lower compatibility bound instead of an equality bound. Likewise, having something like #[cfg(not(mylib_version_minor = "35"))] act as an upper compatibility bound is just not very readable. I guess renaming the cfgs to something like mylib_version_minor_supported might help a little, but it's not that much better.