'identity trait'


would it make sense to have some sort of Identity<> trait, perhaps slotting in alongside the operator-overload traits, to define various identity constants?

(as I write this… I realise you can’t use a function as a type parameter, right, but you can have a ‘callable type’ for passing in static-dispatch closures, right)
…or maybe you could just stuff ‘::identity()’ into the operators themselves somehow…

something along the lines of…

Identity<Add<T>>::identity() = 0
Identity<Mul<T>>::identity() = 1
Identity<Min<T>>::identity() = maximum representable value, e.g. FLT_MAX for f32
Identity<Max<T>>::identity() = minimum representable value, e.g. -FLT_MAX for f32
Identity<And<T>>::identity() = a value with all bits set, e.g. 0xffffffff for u32
... <anything else? 'identity for strcat' = empty string?..>

… and then you could go and implement it for your matrix types etc…

possible use case - the default initial accumulator for a ‘fold’ taking a binary operator (select the operator’s identity as the default)
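The idea above can be sketched in Rust. This is a minimal, hypothetical design (the names `Identity`, `AddOp`, `MulOp` and `fold_with` are illustrative, not from any crate): zero-sized marker structs name the operator, and the fold defaults its accumulator to that operator’s identity.

```rust
/// Zero-sized marker types standing in for binary operators.
struct AddOp;
struct MulOp;

/// The identity element of a binary operation `Op` over `Self`.
trait Identity<Op> {
    fn identity() -> Self;
}

impl Identity<AddOp> for f32 {
    fn identity() -> Self { 0.0 }
}
impl Identity<MulOp> for f32 {
    fn identity() -> Self { 1.0 }
}

/// A fold whose starting accumulator defaults to the operator's identity.
fn fold_with<Op, T, F>(items: &[T], f: F) -> T
where
    T: Identity<Op> + Copy,
    F: Fn(T, T) -> T,
{
    items.iter().copied().fold(T::identity(), f)
}

fn main() {
    let xs = [1.0f32, 2.0, 3.0];
    let sum = fold_with::<AddOp, _, _>(&xs, |a, b| a + b);
    let product = fold_with::<MulOp, _, _>(&xs, |a, b| a * b);
    assert_eq!(sum, 6.0);
    assert_eq!(product, 6.0);
}
```

Note the turbofish `::<AddOp, _, _>`: the operator type can’t be inferred from the arguments, so the caller names it explicitly.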


This is sort of Default.


maybe; perhaps one could be implemented in terms of the other… a generalisation of Default?

there are certainly times when a number might default to ‘zero’ (that’s what I’d guess most of the time), but sometimes its use as a scaling factor would make ‘1.0’ more logical.

I should mention the ‘MinMax’ trick as well. In C++ I sometimes use ‘negative maximal extents’ as a sort of ‘None’ for an extents object, e.g. .min = +FLT_MAX, .max = -FLT_MAX, then update those extents with min/max against input values.
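That trick translates directly to Rust; here is a sketch (the `Extents` type and its method names are illustrative). The “inside out” starting value (min = +MAX, max = -MAX) is exactly the identity for the include/merge fold, and an untouched extent is recognisable because min > max:

```rust
/// An axis-aligned extent that starts "inside out": min = +MAX, max = -MAX.
#[derive(Debug, Clone, Copy)]
struct Extents {
    min: f32,
    max: f32,
}

impl Extents {
    /// The "empty" extent: the identity element for `include`.
    fn empty() -> Self {
        Extents { min: f32::MAX, max: -f32::MAX }
    }

    /// Grow the extent to cover one more value.
    fn include(self, v: f32) -> Self {
        Extents { min: self.min.min(v), max: self.max.max(v) }
    }

    /// An extent that has never seen a value has min > max.
    fn is_empty(&self) -> bool {
        self.min > self.max
    }
}

fn main() {
    let e = [3.0f32, -1.0, 7.0]
        .iter()
        .fold(Extents::empty(), |acc, &v| acc.include(v));
    assert_eq!((e.min, e.max), (-1.0, 7.0));
    assert!(Extents::empty().is_empty());
}
```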

… What I’m thinking is that in these cases it might be possible to tie the ‘sane default’ to the usage (a scale factor is associated with multiplies, ‘extents’ are associated with folding with min/max, etc.)… maybe there are some more common patterns waiting to drop out


The num crate has Zero and One traits.
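For reference, the shape of those traits is roughly the following; this is a self-contained replica for illustration (the real definitions live in the `num-traits` crate, which `num` re-exports):

```rust
use std::ops::{Add, Mul};

// Self-contained replica of the shape of num-traits' `Zero` and `One`.
trait Zero: Sized + Add<Self, Output = Self> {
    fn zero() -> Self;
    fn is_zero(&self) -> bool;
}

trait One: Sized + Mul<Self, Output = Self> {
    fn one() -> Self;
}

impl Zero for f64 {
    fn zero() -> Self { 0.0 }
    fn is_zero(&self) -> bool { *self == 0.0 }
}

impl One for f64 {
    fn one() -> Self { 1.0 }
}

/// Generic sum that falls back to the additive identity for an empty slice.
fn sum<T: Zero + Copy>(items: &[T]) -> T {
    items.iter().fold(T::zero(), |a, &b| a + b)
}

fn main() {
    assert_eq!(sum(&[1.0, 2.0, 3.0]), 6.0);
    assert!(sum::<f64>(&[]).is_zero());
}
```

Note the traits bundle the identity with the operation it is neutral for (`Zero` requires `Add`, `One` requires `Mul`), which is close to the ‘identity tied to the operator’ idea above.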


It’s hard to do this properly for floating-point types. Minimum and maximum either do not exist (if you consider NaN among the other values), or are distinct from FLT_MAX (because of +inf). Addition of zero turns -0.0 into 0.0, so it’s not really a neutral element, either.
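These caveats can be checked directly; a quick demonstration of both the signed-zero point and the FLT_MAX-vs-infinity point:

```rust
fn main() {
    let neg_zero = -0.0f64;
    // -0.0 and 0.0 compare equal under IEEE 754...
    assert!(neg_zero == 0.0);
    // ...but their bit patterns differ: the sign bit is set on -0.0.
    assert!(neg_zero.is_sign_negative());
    // Adding the "identity" 0.0 loses the sign: -0.0 + 0.0 is +0.0,
    // so 0.0 is not a true neutral element for addition.
    assert!((neg_zero + 0.0).is_sign_positive());

    // f64::MAX is not a true identity for `min` either: it absorbs +inf
    // (min(MAX, +inf) = MAX) instead of leaving it unchanged.
    assert_eq!(f64::MAX.min(f64::INFINITY), f64::MAX);
    assert!(f64::INFINITY > f64::MAX);
}
```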


FLT_MAX is practically useful though; often when I say ‘float’ I really mean ‘not-NaN float’ (if that needs another type, what to call it is a separate discussion). NaNs are errors that need to be eliminated with empirical testing - a logically sound program won’t generate them, and any values that enter the system need to be verified before they filter onward.


@sebcrozet’s alga library has the Identity trait. There aren’t implementations for And and Or yet, but I’m sure they could be added. I’m not sure about Min and Max though… :thinking:


looks like that could be exactly what I had in mind, so it would be interesting to see which way the details went there


Yes, I think Identity from alga should be what you need. You could for example create zero-sized structs for And, Or, Min, Max, implement the Operator trait for those structs and then implement, e.g., Identity<Max> for f64. Allowing the user to write generic operators like your fold idea is definitely one of the use-cases of alga!
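The pattern described there can be sketched in plain Rust; to be clear, this is an illustrative, self-contained version of the idea, not alga’s actual API (the `Operator`, `Identity`, `Min`, `Max` and `fold_op` names here are stand-ins):

```rust
/// Marker trait for zero-sized operator structs.
trait Operator {}

struct Min;
struct Max;
impl Operator for Min {}
impl Operator for Max {}

/// Identity element of operator `O` over `Self`.
trait Identity<O: Operator> {
    fn identity() -> Self;
}

impl Identity<Min> for f64 {
    fn identity() -> Self { f64::MAX } // see the +inf/NaN caveats above
}
impl Identity<Max> for f64 {
    fn identity() -> Self { -f64::MAX }
}

/// Generic fold using the operator's identity as the default accumulator.
fn fold_op<O, T, F>(items: &[T], f: F) -> T
where
    O: Operator,
    T: Identity<O> + Copy,
    F: Fn(T, T) -> T,
{
    items.iter().copied().fold(T::identity(), f)
}

fn main() {
    let lo = fold_op::<Min, f64, _>(&[3.0, -1.0, 7.0], f64::min);
    let hi = fold_op::<Max, f64, _>(&[3.0, -1.0, 7.0], f64::max);
    assert_eq!((lo, hi), (-1.0, 7.0));
}
```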