Varying Generic Parameters with features

No worries! I appreciate the effort :+1:

I originally had a long draft reply and decided to toss it out because it wasn't really adding anything to the conversation. But now I'm back because I think it does add something since more workarounds are still being suggested.

I feel like any time a feature gate changes a public interface, you are asking for trouble. IMHO, it would be much better to keep the interface static (always have the same number of generic parameters) so that the only things you need to conditionally compile are the various permutations of impl blocks:

use std::marker::PhantomData;

#[derive(Default)]
// `S` is optional, so we'll default to something. In this case, unit.
struct Container<T, S=()> {
    element: T,

    // `S` just has to be used in all cases. Here we defined it with a ZST.
    _phantom_s: PhantomData<S>,

    #[cfg(feature = "some_feature")]
    second_element: S,
}

impl<T: Copy> Container<T> {
    fn get_element(&self) -> T {
        self.element
    }
}

#[cfg(feature = "some_feature")]
impl<_UnusedT, S: Copy> Container<_UnusedT, S> {
    fn get_second_element(&self) -> S {
        self.second_element
    }
}

This type checks with --all-features and doesn't change the interface. This is especially important for pub types. Of course, it matters a lot how these conditional methods are called; you're just going to end up with cfg attributes sprinkled around the module. This is basically what @semicoleon advocated last week, and I think it deserves more attention. It's just the least error-prone way to do it, AFAICT.
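
For example (untested sketch), a call site for the Container above might look like this; the type annotation never changes, only the feature-gated call does:

fn example_call_site() {
    // `Container<u8>` is `Container<u8, ()>` thanks to the default parameter.
    let c: Container<u8> = Container::default();

    let _always_available = c.get_element();

    // Only the feature-gated method needs a cfg at the call site.
    #[cfg(feature = "some_feature")]
    let _gated = c.get_second_element();
}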

I know this sample code is not fully representative of the actual issue, but you can probably get pretty far just by breaking the implementation into multiple impl blocks, cleanly delineated by which parameters are completely unused. E.g., _UnusedT is just a parameter name; it can be named anything. The name tells the reader that it can be ignored, and it has no bounds so that any type fits. [1]


  1. Although it may need a ?Sized bound in rare cases. ↩︎


Yes, I thought about this statement as well. In my case, an added feature can result in completely new functionality of the library. In my opinion, users should be aware that they have just introduced a new feature, since for my library to work they need to specify behaviour anyway (mostly done by implementing a trait and passing the corresponding type).

I agree that I might be able to structure my code differently. However, my previous argument still holds in that the amount of code I would be required to write would scale as 2^N, which is of course not manageable when more features are introduced.

The issue is that managing features is out of the user's control. You can imagine any number of scenarios where it falls apart. My crate depends on yours, but it only works when no features are enabled (because I didn't know about the fact that setting features changes the interface, say). Then Alice comes along and depends on both my crate and yours but enables the feature that breaks my crate. :expressionless:

The worst of it is that cargo features are additive, and they can't be "undone" in a Cargo manifest. If it's enabled anywhere in the dependency tree, it's enabled for everyone.

Can you elaborate on that a bit? The examples I've seen so far only consider the parameters in isolation. There is for sure some theoretical combinatorial explosion, but I don't really see it yet. With N optional parameters, you have to deal with the various combinations both with and without feature-gating the type.


The previous comments have sparked a new idea.
What if I can emulate an impl<...> block using already-working syntax, as in the example below?

// This code will work
struct Container<A, #[cfg(feature = "B")] B> {
    t1: A,
    #[cfg(feature = "B")]
    t2: B,
}

fn do_something<
    A,
    #[cfg(feature = "B")]
    B: std::fmt::Debug
>(t1: A, #[cfg(feature = "B")] t2: B) -> A
{
    #[cfg(feature = "B")]
    println!("{:?}", t2);
    t1
}

fn main() {
    let container = Container {
        t1: 1_u8,
        #[cfg(feature = "B")]
        t2: 10.0_f64,
    };

    do_something(
        container.t1,
        #[cfg(feature = "controller")]
        container.t2,
    );
}

This would require me to rewrite large portions of my code-base, but I might just be able to keep the modularity. It would push me towards a more functional approach, and I might run into situations where I basically need to destructure the whole Container struct, pass the pieces to a function and then instantiate it again. This is of course very undesirable and bad for encapsulation.

I have not thought about that, tbh. My crate is meant to do numerical simulations of cellular ensembles. To me this means that the crate will most certainly not depend on itself indirectly. Thus I will not worry too much about multiple dependencies stacked alongside each other. However, it's good to know and consider for the future. I will definitely keep it in mind.

Consider the example with 1 feature. Now we have 2 implementations: one with the feature enabled and another one with it disabled.
For 2 features, we would be required to have 4 implementations:

Implementation   Feature 1   Feature 2
1                enabled     enabled
2                enabled     disabled
3                disabled    enabled
4                disabled    disabled

For 3 features, we would be required to write 8 implementations:

Implementation   Feature 1   Feature 2   Feature 3
1                enabled     enabled     enabled
2                enabled     enabled     disabled
3                enabled     disabled    enabled
4                enabled     disabled    disabled
5                disabled    enabled     enabled
6                disabled    enabled     disabled
7                disabled    disabled    enabled
8                disabled    disabled    disabled

As I was mentioning earlier, it might be possible to write a proc_macro that can actually generate these implementations when fed the correct source code. However, I am extremely hesitant to make the effort to write something like this, and then to completely trust its output when combined with default generic parameters like T = () (not even mentioning the mess it produces when viewed in any IDE).

Ok, that shows combinatorial explosion (e.g. 2^N), which I believe we agree on.

I was asking specifically how it manifests. Because the sample code above with fn get_second_element(&self) -> S does not need T at all, meaning that your 2-ary table with four implementation cases in reality only has two cases.

Implementations 1 and 4 are either invalid or "don't care" states. (Not in this example, though: the get_second_element example only has one feature, so it's binary, obviously two states.) But with more than one feature, there are still cases where you don't need multiple implementations in line with 2^N.


Say you have three features, meaning three type parameters. I'll presume you will have some method on the type that requires all three features (hence all three types). That's expected, and it's only one impl block. The other feature combinations don't need an implementation for this method. What would they even do in the case that any of the types are unknown or default to unit?


Keep in mind that the example above only contains one feature!

Let's assume that we would like to have one big function that uses all of our generic parameters (if present). I will write down the definition of the struct (which is working fine at the moment) and a possible implementation with features (which is currently not working). Afterwards I will show how to make it work, and why this results in huge code duplication.

struct Container<#[cfg(feature = "f1")] A, #[cfg(feature = "f2")] B, #[cfg(feature = "f3")] C> {
    #[cfg(feature = "f1")]
    t1: A,
    #[cfg(feature = "f2")]
    t2: B,
    #[cfg(feature = "f3")]
    t3: C,
}

// This will not work
impl<
    #[cfg(feature = "f1")] A,
    #[cfg(feature = "f2")] B,
    #[cfg(feature = "f3")] C
> Container<
    #[cfg(feature = "f1")] A,
    #[cfg(feature = "f2")] B,
    #[cfg(feature = "f3")] C
> {
    fn get_elements(self) -> (
        #[cfg(feature = "f1")] A,
        #[cfg(feature = "f2")] B,
        #[cfg(feature = "f3")] C,
    )
    where
        #[cfg(feature = "f1")]
        A: num::Float,
        #[cfg(feature = "f2")]
        B: num::Float,
        #[cfg(feature = "f3")]
        C: num::Float,
    {
        (
            #[cfg(feature ="f1")] self.t1.powf(1.0),
            #[cfg(feature = "f2")] self.t2.powf(2.0),
            #[cfg(feature = "f3")] self.t3.powf(3.0),
        )
    }
}

// We will have to write every implementation manually
// Features f1, f2, f3 enabled
// No features disabled
#[cfg(all(feature = "f1", feature = "f2", feature = "f3"))]
impl<A, B, C> Container<A, B, C> {
    fn get_elements(self) -> (A, B, C)
    where
        A: num::Float,
        B: num::Float,
        C: num::Float,
    {
        (self.t1.powi(1), self.t2.powi(2), self.t3.powi(3))
    }
}

// Features f1, f2, enabled
// Feature f3 disabled
#[cfg(all(feature = "f1", feature = "f2", not(feature = "f3")))]
impl<A, B> Container<A, B> {
    fn get_elements(self) -> (A, B)
    where
        A: num::Float,
        B: num::Float,
    {
        (self.t1.powi(1), self.t2.powi(2))
    }
}

// Features f1, f3, enabled
// Feature f2 disabled
#[cfg(all(feature = "f1", not(feature = "f2"), feature = "f3"))]
impl<A, C> Container<A, C> {
    fn get_elements(self) -> (A, C)
    where
        A: num::Float,
        C: num::Float,
    {
        (self.t1.powi(1), self.t3.powi(3))
    }
}

// ... and so on and so forth. For 3 features, we need to write 8 `impl<...>` blocks.

For the specific function I presented here, we could simplify things since all arguments are used in a very similar manner. However, for more complex functions it is not obvious how to do this.

How do you even call get_elements() when its return type has the same complexity as the feature combinations? I think this is the crux of the issue that I am having trouble with. The answer is clearly that all call sites need the same feature gating!

But first you have to call the constructor for Container, and that's a whole can of worms itself.

#[cfg(any(
    all(feature = "f1", not(feature = "f2"), not(feature = "f3")),
    all(not(feature = "f1"), feature = "f2", not(feature = "f3")),
    all(not(feature = "f1"), not(feature = "f2"), feature = "f3"),
))]
let c: Container<f32> = Container::default();
#[cfg(any(
    all(feature = "f1", feature = "f2", not(feature = "f3")),
    all(feature = "f1", not(feature = "f2"), feature = "f3"),
    all(not(feature = "f1"), feature = "f2", feature = "f3"),
))]
let c: Container<f32, f32> = Container::default();
#[cfg(all(feature = "f1", feature = "f2", feature = "f3"))]
let c: Container<f32, f32, f32> = Container::default();
#[cfg(all(not(feature = "f1"), not(feature = "f2"), not(feature = "f3")))]
let c: Container = Container::default();

let foo = c.get_elements();

#[cfg(feature = "f1")]
println!("A: {}", foo.0);

#[cfg(all(feature = "f1", feature = "f2"))]
println!("B: {}", foo.1);

#[cfg(all(not(feature = "f1"), feature = "f2"))]
println!("B: {}", foo.0);

#[cfg(all(feature = "f1", feature = "f2", feature = "f3"))]
println!("C: {}", foo.2);

#[cfg(any(
    all(feature = "f1", not(feature = "f2"), feature = "f3"),
    all(not(feature = "f1"), feature = "f2", feature = "f3"),
))]
println!("C: {}", foo.1);

#[cfg(all(not(feature = "f1"), not(feature = "f2"), feature = "f3"))]
println!("C: {}", foo.0);

The only way you can realistically use this API is with type inference and forwarding the return type to some other function as a black box. No one wants to maintain all of these conditional attributes.
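
Something like this sketch (reusing c and get_elements from above; forward and its Debug bound are just stand-ins):

// The caller never names the tuple type; it only does things that work for *any* T.
fn forward<T: std::fmt::Debug>(elements: T) {
    println!("{elements:?}");
}

let elements = c.get_elements();
forward(elements);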

Once again, I agree with you in theory that there are cases of combinatorial explosion, but this is impractical on all accounts. This API is borderline unusable.

Its usability can be improved significantly by not trying to implement function overloading in Rust. This isn't what you want to hear, but I'm afraid the language is going to be hostile to your use case. Compare with a Rust-friendly variation:

use std::marker::PhantomData;

#[derive(Default)]
struct Container<A = (), B = (), C = ()> {
    _phantom_a: PhantomData<A>,
    _phantom_b: PhantomData<B>,
    _phantom_c: PhantomData<C>,

    #[cfg(feature = "f1")]
    t1: A,
    #[cfg(feature = "f2")]
    t2: B,
    #[cfg(feature = "f3")]
    t3: C,
}

#[cfg(feature = "f1")]
impl<A, _B, _C> Container<A, _B, _C>
    where
        A: num::Float,
{
    fn get_element_a(&self) -> A {
        self.t1.powf(num::one())
    }
}

#[cfg(feature = "f2")]
impl<_A, B, _C> Container<_A, B, _C>
    where
        B: num::Float + num::FromPrimitive,
{
    fn get_element_b(&self) -> B {
        self.t2.powf(B::from_f32(2.0).unwrap())
    }
}

#[cfg(feature = "f3")]
impl<_A, _B, C> Container<_A, _B, C>
    where
        C: num::Float + num::FromPrimitive,
{
    fn get_element_c(&self) -> C {
        self.t3.powf(C::from_f32(3.0).unwrap())
    }
}

fn main() {
    let c: Container::<f32, f32, f32> = Container::default();

    #[cfg(feature = "f1")]
    println!("A: {}", c.get_element_a());

    #[cfg(feature = "f2")]
    println!("B: {}", c.get_element_b());

    #[cfg(feature = "f3")]
    println!("C: {}", c.get_element_c());
}

That's it! The whole thing in about 60 lines of code. Three implementations. The constructor is obvious, the return types are obvious. The number of implementations is linear in the number of features. The conditional-compilation attributes are limited in scope to where they are important. Enabling features won't break public APIs.


tldr: I may be bad at designing the entry point to my software but the underlying problem is still the same.

In my use-case, I do not have a default implementation. By definition, no property of the Container struct can have a default value. Everything must be specified by the user. If something is not meant to be specified, it should not appear at all (which is distinct from having a default value).
I am writing a simulation software (think about particle sim for biological cells with some custom rules and some other stuff going on) and parts of my struct are responsible for individual aspects of my simulation. There are some mandatory arguments such as the domain and cells. However, I can choose to include additional model aspects such as external effects or not describe them at all.
If the user wishes to specify them, a feature flag provides an additional field (right now I am still instantiating by values - I know I should probably change this in the future) where these properties need to be included.

Let me give an example of why the distinction between not implementing something and setting it to a default value actually makes a difference.
In my case I can model intracellular reactions by Ordinary Differential Equations. A cell which is not participating in any reactions will have a function that simply returns 0.0 since this means that there is no change whatsoever. However, the cell might still possess intracellular concentrations which can be read out and modified by other parts of the simulation. Additionally, there may be other cell-types which do participate in these reactions and have non-zero return values. In this scenario, we need a default implementation and want to activate the feature to actually solve the corresponding model property.
On the other hand, if I do not want to describe intracellular reactions at all, I do not want to expose their API to the user to avoid confusion. This also means that specifying this part of the simulation is simply not needed and even wrong (since the solver does not know what to do with it).
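
As a rough sketch of what I mean (names are simplified and not my actual API):

// Trait describing intracellular reactions as one aspect of the model.
trait IntracellularReactions {
    // Rate of change of the intracellular concentrations.
    fn increment(&self, concentrations: &[f64]) -> Vec<f64>;
}

struct InertCell {
    // The concentrations still exist and can be read and modified by other
    // parts of the simulation, even though this cell does not react itself.
    concentrations: Vec<f64>,
}

impl IntracellularReactions for InertCell {
    fn increment(&self, concentrations: &[f64]) -> Vec<f64> {
        // Not participating in any reaction: the change is simply zero.
        vec![0.0; concentrations.len()]
    }
}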

I could have implemented my system in such a way that incompatibilities like the ones described above are detected at runtime rather than at compile time, but I wanted to avoid this principle: I prefer to write code that only compiles when given a valid state and throws runtime errors only when the simulation itself encounters numerical problems, IO problems, etc. Defining a model is part of the compilation process. Going from this approach to detection at runtime is much simpler than the other way around.

Maybe I can do a better job at designing the user interface. I can see the reasoning behind your comment. I will think about a different approach to my entry point (possibly with a macro), but so far this is not my priority. The implementation details behind the entry point will still be subject to the same problems described above.

Addition:
The general thought process behind my approach:

  • Try to compile as much as possible (avoid resolving at runtime)
  • Only compile what will also be an acceptable model
  • Additional aspects (properties) of my model require additional structs or methods
    • they must be supplied by the user to yield a valid model
    • they have predefined functionalities via generic traits
  • Formulate model as abstract as possible
    • avoid describing model aspects with concrete types, e.g. positions of cells as nalgebra::SVector<f64, 3>, to allow for more complex descriptions of the same aspect
    • instead use a generic type Pos with trait bounds Add<Self, Output=Self>, Mul<f64, Output=Self>, etc. (see the sketch below)
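
A minimal sketch of that last point (the function itself is only an example):

use core::ops::{Add, Mul};

// The backend never names a concrete vector type; it only relies on the
// operations it actually needs.
fn advance_position<Pos>(position: Pos, velocity: Pos, dt: f64) -> Pos
where
    Pos: Add<Pos, Output = Pos> + Mul<f64, Output = Pos>,
{
    // This should work for nalgebra::SVector<f64, 3> as well as for any
    // custom type providing these impls.
    position + velocity * dt
}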

These principles seem very abstract, but in practice they do allow me to formulate many different models without needing to change my backend code (since it operates on generics with trait bounds). I even have first results showing that it is not only effective but also performant.

I would like to challenge you to think about my specific use-case, which is probably not similar to many other libraries out there.
That being said, I am always looking forward to hearing feedback.

Even a number of implementations that is linear in the number of features is undesirable in my case. I have approximately 1000 lines of code; even duplicating this code is too much to ask. But I will assess whether I can modularize my code more effectively to reduce the number of feature-dependent lines and rewrite it in such a way. There will still be loads of code duplication going on, which is also extremely hard to maintain (especially if additional features come along).

“depend on itself” isn't the problematic case. Rather, consider a dependency graph that looks like this, where two different libraries both use your library:

some-huge-application-that-can-be-told-to-do-many-things
|
+--- cellular-simulation-type-1-written-by-alice
|    |
|    +--- your-crate (A, B)
|
+--- cellular-simulation-type-2-written-by-bob
     |
     +--- your-crate (B, D)

When this is compiled, Cargo will unify the features and compile your_crate with features A, B, and D. Then, neither user will compile successfully, because you have designed non-additive features.

Part of the reason the thing you are trying to do is hard is because it does not fit the Rust compilation model; if it weren't considered incorrect, then there might be already-invented facilities to make it work easily.


Now, as to solving the problem instead of explaining it, here are another couple of ideas:

  1. Make the caller provide the structs. That is, instead of features controlling how your crate defines its data structures, the caller writes their own structs containing the data applicable to their situation, and implements traits telling your crate how to manipulate them. If the traits are boilerplatey, provide them in macros. (A rough sketch of this option follows after this list.)

  2. Make your structs generic over a tuple of type parameters instead of many direct type parameters. That way the tuple can vary in length. (This only works out if the purpose of each parameter can be determined by its type, but you can introduce “newtype” structs to make the types different.)
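
For option 1, here is a rough, untested sketch with invented names, just to show the shape of it:

// --- provided by your crate: only traits and generic algorithms ---
pub trait Mechanics {
    fn update_position(&mut self, dt: f64);
}

pub fn run_step<C: Mechanics>(cells: &mut [C], dt: f64) {
    for cell in cells {
        cell.update_position(dt);
    }
}

// --- written by the caller: a struct containing exactly the data they need ---
struct MyCell {
    position: f64,
    velocity: f64,
}

impl Mechanics for MyCell {
    fn update_position(&mut self, dt: f64) {
        self.position += self.velocity * dt;
    }
}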


Will this also be the case for two completely different crates which simply share a common feature name? If this is the case, why is it not considered unhygienic? Again, I was not aware of this problem. But thanks for pointing it out!

If I understand your remarks correctly, this is already happening. My current initialization looks like this:
I provide a SimulationSetup struct which has fields that are filled out by types with certain traits implemented on them. For example, I can specify the type CartesianCuboid2 as a domain value. The CartesianCuboid2 type will implement the Domain<...> trait (which has some generic parameters that need to be specified by the implementation). The code could look something like this:

let domain = CartesianCuboid::new(...);
let setup = SimulationSetup {
    domain,
    ...
}

Behind the scenes, SimulationSetup will take a generic type for the domain, and methods which later use this type will require the correct trait to be implemented.
Does this match with what you suggested?
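
To spell out the "behind the scenes" part (with simplified names, not my real signatures):

// Simplified stand-in for the real Domain<...> trait.
trait Domain<Pos> {
    fn contains(&self, position: &Pos) -> bool;
}

struct SimulationSetup<D> {
    domain: D,
    // ... other mandatory fields (cells, time stepping, ...)
}

impl<D> SimulationSetup<D> {
    // Methods only add the trait bound where they actually need the behaviour.
    fn is_inside<Pos>(&self, position: &Pos) -> bool
    where
        D: Domain<Pos>,
    {
        self.domain.contains(position)
    }
}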

I do not completely understand your approach. Consider the example below. Apart from naming conventions for the new_x function, this would in fact provide me with a reasonable entry point. Some first observations:

  1. Order of types will be important
  2. I could probably skip the Setup struct and directly define SimulationSupervisor from a tuple of types
  3. Everything so far seems to require different function names for different numbers of generic parameters
  4. It will still expose the API to the user which was previously associated with a feature.
  5. If I have trait bounds, I will need to implement the associated traits for my default generic parameter () (of course I can always rename this to something else entirely).

I still do not get what you meant by the "newtype" structs. Can you clarify please?

struct Setup<T> {
    types: T,
}

struct SimulationSupervisor<F, S=()> {
    float: F,
    something: S,
}

impl<F, S> SimulationSupervisor<F, S> {
    fn new_1(setup: Setup<(F, S)>) -> SimulationSupervisor<F, S> {
        SimulationSupervisor {
            float: setup.types.0,
            something: setup.types.1,
        }
    }
}

impl<F> SimulationSupervisor<F> {
    fn new_2(setup: Setup<F>) -> SimulationSupervisor<F> {
        SimulationSupervisor {
            float: setup.types,
            something: (),
        }
    }
}

struct SomeStruct();

fn main() {
    let setup = Setup {
        types: (1.0_f64, SomeStruct()),
    };
    
    let supervisor = SimulationSupervisor::new_1(setup);
    
    let setup2 = Setup {
        types: 1.0_f64,
    };
    
    let supervisor2 = SimulationSupervisor::new_2(setup2);
}

I think I need to make a broader statement here concerning my use-case, in addition to what I have said in my post above:

I am not fixed on solving my problem with feature gates. To me it looked like the correct tool to be using at the time. I would prefer not having to deal with them, and not to write any macros at all. Maybe a more systematic approach to my problem is required. However, I do not think that any of the proposed solutions can truly satisfy my needs. The reasoning behind my initial approach was this:

  1. Zero overhead and huge flexibility by using generics with trait bounds (no Box<dyn Trait>)
  2. Omit everything which is not needed at compile-time (not setting to None but rather do not write this part of the code)
  3. Allow to scale this approach with additional generic parameters without the need to rewrite all of my functions
  4. I was following an object-oriented style (but I am willing to try whatever gets me to my goal)

No, feature names are crate-specific. In my example Cargo unifies the set {(your-crate, A), (your-crate, B)} and the set {(your-crate, B), (your-crate, D)} to get {(your-crate, A), (your-crate, B), (your-crate, D)} and concludes that your-crate must be built with features {A, B, D}.


  1. Order of types will be important

No, if you were to use this strategy, what you would do is write your implementations to be generic over any tuple whose components each implement some simulation-component trait. Thus, swapping elements in the tuple would produce the same results (except maybe for some kind of evaluation-order dependence, depending on what you're doing).

It may or may not be possible to actually do this and support the kinds of interactions you want. But, if it is possible to enable any arbitrary combination of your features, that hints that it might be.
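
To sketch what I mean (untested, with made-up trait names): each component implements one trait, and the backend is only generic over "some tuple of components".

trait SimulationAspect {
    fn step(&mut self, dt: f64);
}

// One impl per tuple arity; in practice these would be macro-generated.
impl<A: SimulationAspect> SimulationAspect for (A,) {
    fn step(&mut self, dt: f64) {
        self.0.step(dt);
    }
}

impl<A: SimulationAspect, B: SimulationAspect> SimulationAspect for (A, B) {
    fn step(&mut self, dt: f64) {
        self.0.step(dt);
        self.1.step(dt);
    }
}

// The backend never cares how many components there are or in what order.
fn run<T: SimulationAspect>(aspects: &mut T, dt: f64) {
    aspects.step(dt);
}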

I still do not get what you meant by the "newtype" structs.

By newtypes I mean like:

struct Position(pub f32);
struct Velocity(pub f32);
impl Something for Position { ... }
impl Something for Velocity { ... }

You get to pick different behaviors for the differently-wrapped types even though the data is the same.

But wouldn't this also blow up my code-base significantly again?

And I do have non-trivial dependencies between features. For example, the feature fluid_mechanics_gradients will require the feature fluid_mechanics to be active. So far, what this does is change the trait FluidMechanics (which is related to the aforementioned feature) by adding another required function that is responsible for calculating the gradients. This is truly on top of fluid_mechanics and would not make sense without it being enabled. I could split them into two traits, FluidMechanics and FluidMechanicsWithGradients. But since users need to implement these traits, I do not know whether this might be confusing or possibly even better.
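
For reference, the split I have in mind would look roughly like this (the method names are only placeholders); the Cargo side would simply declare the dependency as fluid_mechanics_gradients = ["fluid_mechanics"]:

trait FluidMechanics {
    // Placeholder method; the real trait has more functionality.
    fn pressure(&self, position: [f64; 3]) -> f64;
}

// Only exists on top of the base trait, mirroring the feature dependency.
#[cfg(feature = "fluid_mechanics_gradients")]
trait FluidMechanicsWithGradients: FluidMechanics {
    fn pressure_gradient(&self, position: [f64; 3]) -> [f64; 3];
}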

I'm sympathetic to your use case. And I apologize if my last comment was too negative or aggressive. There are valid uses where metaprogramming is beneficial, no doubt. My concern beyond the surface-level API issue is that the following pattern is an example of function overloading, ignoring all other details:

#[cfg(all(feature = "f1", feature = "f2", feature = "f3"))]
impl<A, B, C> Container<A, B, C> {
    fn get_elements(self) -> (A, B, C)
}

#[cfg(all(feature = "f1", feature = "f2", not(feature = "f3")))]
impl<A, B> Container<A, B> {
    fn get_elements(self) -> (A, B)
}

Function overloading is fundamentally incompatible with Rust's type system. Type inference [1] has been cited as the primary conflict. This explains why it's so hard to do.

Traits with associated types might be helpful, but only if the "variadic parametric types" are replaced with a fixed number of parameters. [2] It works, and it moves all of the combinations into a single impl block.

use std::marker::PhantomData;

#[derive(Default)]
struct Container<A = f32, B = f32, C = f32> {
    _phantom_a: PhantomData<A>,
    _phantom_b: PhantomData<B>,
    _phantom_c: PhantomData<C>,

    #[cfg(feature = "f1")]
    t1: A,
    #[cfg(feature = "f2")]
    t2: B,
    #[cfg(feature = "f3")]
    t3: C,
}

trait GetElements {
    type Elements;

    fn get_elements(self) -> Self::Elements;
}

impl<A, B, C> GetElements for Container<A, B, C>
where
    A: num::Float,
    B: num::Float + num::FromPrimitive,
    C: num::Float + num::FromPrimitive,
{
    #[cfg(all(not(feature = "f1"), not(feature = "f2"), not(feature = "f3")))]
    type Elements = ();
    #[cfg(all(feature = "f1", not(feature = "f2"), not(feature = "f3")))]
    type Elements = (A,);
    #[cfg(all(not(feature = "f1"), feature = "f2", not(feature = "f3")))]
    type Elements = (B,);
    #[cfg(all(not(feature = "f1"), not(feature = "f2"), feature = "f3"))]
    type Elements = (C,);
    #[cfg(all(feature = "f1", feature = "f2", not(feature = "f3")))]
    type Elements = (A, B);
    #[cfg(all(feature = "f1", not(feature = "f2"), feature = "f3"))]
    type Elements = (A, C);
    #[cfg(all(not(feature = "f1"), feature = "f2", feature = "f3"))]
    type Elements = (B, C);
    #[cfg(all(feature = "f1", feature = "f2", feature = "f3"))]
    type Elements = (A, B, C);

    fn get_elements(self) -> Self::Elements {
        #[cfg(all(not(feature = "f1"), not(feature = "f2"), not(feature = "f3")))]
        {()}

        #[cfg(all(feature = "f1", not(feature = "f2"), not(feature = "f3")))]
        {(self.t1.powf(num::one()),)}

        #[cfg(all(not(feature = "f1"), feature = "f2", not(feature = "f3")))]
        {(self.t2.powf(B::from_f32(2.0).unwrap()),)}

        #[cfg(all(not(feature = "f1"), not(feature = "f2"), feature = "f3"))]
        {(self.t3.powf(C::from_f32(3.0).unwrap()),)}

        #[cfg(all(feature = "f1", feature = "f2", not(feature = "f3")))]
        {(self.t1.powf(num::one()), self.t2.powf(B::from_f32(2.0).unwrap()))}

        #[cfg(all(feature = "f1", not(feature = "f2"), feature = "f3"))]
        {(self.t1.powf(num::one()), self.t3.powf(C::from_f32(3.0).unwrap()))}

        #[cfg(all(not(feature = "f1"), feature = "f2", feature = "f3"))]
        {(
            self.t2.powf(B::from_f32(2.0).unwrap()),
            self.t3.powf(C::from_f32(3.0).unwrap()),
        )}

        #[cfg(all(feature = "f1", feature = "f2", feature = "f3"))]
        {(
            self.t1.powf(num::one()),
            self.t2.powf(B::from_f32(2.0).unwrap()),
            self.t3.powf(C::from_f32(3.0).unwrap()),
        )}
    }
}

It is still difficult to use at the call sites if we want to access the tuple fields. But that hasn't been addressed at all so far in the thread (AFAICT, maybe I missed it). The code itself is quite difficult to read, but it might help to split the method into separate modules (one for each feature combination) and then it just becomes calling into the inner impl. All of the #[cfg] attributes will still be there, just spread out across multiple modules instead of crammed into a single method.
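
Roughly like this (untested sketch for just one of the combinations, reusing the Container and trait above):

#[cfg(all(feature = "f1", not(feature = "f2"), not(feature = "f3")))]
mod inner {
    use super::*;

    pub(super) fn get_elements<A, B, C>(container: Container<A, B, C>) -> (A,)
    where
        A: num::Float,
    {
        (container.t1.powf(num::one()),)
    }
}

Each feature combination gets its own module like this, and the trait method body for that combination shrinks to a single inner::get_elements(self) call.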

The combinations need to exist somewhere. And that is constrained specifically by the design choice to vary the return type.

This is good info. I think the suggestion to make the caller pass the type is probably the actual right tool. Traits are capable of abstracting over the details (look what GetElements did for the "variadic parametric" type). Macros are useful as general "syntax generators", which is why they are appealing for cases like conditionally adding type parameters. With a single generic parameter defined by the caller, you won't even need a macro.


  1. Is there a simple way to overload functions? - #9 by ExpHP and Justification for Rust not supporting Function Overloading (directly) - #3 by scottmcm ↩︎

  2. Yeah, I'm pretty adamant that #[cfg] attributes on generic parameters are a non-starter. ↩︎