Combining traits for a generic type T used throughout a crate

Intro: I'm pretty sure I'm going about this in a way that could eventually work, but it's probably not the most idiomatic approach. So I thought I'd ask the question here (because Stack Overflow loves to tag questions like this as "open-ended").

Question: I'm trying to create a single trait-bound definition that provides a central location for all the traits required of a generic type used throughout a crate. Basically, I'd like T throughout the crate to be a floating-point number of either precision (f32 or f64). So I start with T: Float, but then I find something that needs Debug or Display or various other things, and eventually the list of bounds on T stacks up. Further complicating matters, all of my structs are built around a lifetime attached to T, so now I have to add that in as well.
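To make the problem concrete, here's a minimal sketch of how the bounds pile up (assuming Float comes from the num-traits crate, the usual source of that trait; the function name report is just for illustration):

use num_traits::Float;
use std::fmt::{Debug, Display};

// Every new requirement discovered somewhere in the crate
// gets appended to this ever-growing bound list.
fn report<T: Float + Debug + Display>(x: T) {
    println!("{} (debug: {:?})", x, x);
}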

I've found several examples of this being done here and here, and I think I understand the basics of composing a combined trait, if not all the little nuances.

But then this all starts leading to further problems. Eventually the compiler says that the required lifetime is unused somewhere, which leads to the need for marker::PhantomData to provide a target for the lifetime; that's managed easily enough by adding a const PHANTOM_DATA to the trait. Then functions start asking for a 'static lifetime on input variables of type T. And some IDE warnings are showing up, though the compiler doesn't complain about them (yet).
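For reference, a minimal sketch of the unused-lifetime situation: a struct that declares 'a without using it is rejected (error E0392), and the conventional fix is a PhantomData field tying 'a to T (the struct name Solver here is hypothetical):

use std::marker::PhantomData;

// Without the marker field, `struct Solver<'a, T> { value: T }`
// fails with E0392: parameter `'a` is never used.
struct Solver<'a, T> {
    value: T,
    _marker: PhantomData<&'a T>, // gives the lifetime a target
}

fn main() {
    let s = Solver { value: 1.0f64, _marker: PhantomData };
    println!("{}", s.value);
}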

So... Am I on the right track, and should I just keep working out kinks that pop up? Or is there a better and simpler way to do this?

Why do you need a lifetime if you're just working with f32 and f64?

You could declare a type alias instead of trying to make everything generic, and use Cargo feature(s) to change the entire crate at once:

#[cfg(feature = "float32")]
pub type Float = f32;
#[cfg(feature = "float64")]
pub type Float = f64;

You can specify these in the [features] section of Cargo.toml:
https://doc.bccnsoft.com/docs/rust-1.36.0-docs-html/cargo/reference/manifest.html#the-features-section
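That section might look something like this (the choice of default feature here is my assumption; pick whichever suits):

[features]
default = ["float64"]
float64 = []
float32 = []

Since both cfgs define the same name, enabling both features (or neither) won't compile, so this pattern works best when downstream code selects exactly one.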

Floats can't have a lifetime; lifetimes apply only to references. If some code wants a lifetime for T, don't jump into that rabbit hole: stick 'static in there. Since floats can't contain references, they satisfy any lifetime bound you care to require, and 'static is the easiest to work with.
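A minimal sketch of that point: a bound like T: 'static just means "T holds no non-'static references", which f32 and f64 trivially satisfy (the function name requires_static is made up for illustration):

fn requires_static<T: 'static>(_value: T) {}

fn main() {
    // Both compile: plain floats own no references at all.
    requires_static(1.0f32);
    requires_static(1.0f64);
}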

If you need both kinds of float in your program at the same time, then you will have to suffer generic code with lots of bounds for every occasion.

However, if it's enough to select one of them only, then go for a type alias. It will be soooo much easier.


The lifetime is for references to the float, not the float itself. I tried to avoid the lifetime for as long as possible, but it came up at some point and I thought my only path forward was to tie the lifetime to T. I should double-check the exact reason; perhaps I missed a point where a 'static lifetime would do... Thanks!

That's actually how I got started down this path, but I wanted to push the choice of f64 vs f32 outside the crate. Then again, I could go back to that approach; I wouldn't be sad to just compile separate 32-bit and 64-bit versions and avoid all this complication.

So it sounds like there's no good way to define a single overarching trait that collects all the traits at the center of the Venn diagram between f64 and f32?

Python has typing with Union[type1, type2], though obviously that's just a superficial grouping. Is it just not worth doing in Rust because of the resulting spaghetti of huge trait-bound sets that every function would need to carry and compile against?

I suppose the goal in all of this is to optimize performance and minimize memory use, which does lead to targeting 64-bit and 32-bit floats specifically and separately; from that perspective it's probably the best choice. I could always write something that wraps the two libraries to select f64 vs f32.

Questions about the nuances of things in Rust always seem to lead to interesting results. Thanks for the thoughts, all!

You can define your own combined trait along with a blanket impl, but it starts to get unwieldy pretty fast: (Playground)

Edit: This appears to be more-or-less what you had already found; sorry about the repetition.

use std::ops::*;

// One "umbrella" trait that gathers every bound the crate needs.
trait Arithmetic<T>:
    Add<T, Output = T>
    + Sub<T, Output = T>
    + Mul<T, Output = T>
    + Div<T, Output = T>
    + Neg<Output = T>
    + Rem<T, Output = T>
    + From<i8>
    + Copy
{
}

// Blanket impl: anything satisfying the bounds is automatically Arithmetic.
impl<T> Arithmetic<T> for T where
    T: Add<T, Output = T>
        + Sub<T, Output = T>
        + Mul<T, Output = T>
        + Div<T, Output = T>
        + Neg<Output = T>
        + Rem<T, Output = T>
        + From<i8>
        + Copy
{
}

// `2.into()` resolves through the `From<i8>` bound, so the literal is an i8.
fn avg<T: Arithmetic<T>>(a: T, b: T) -> T {
    a + (b - a) / 2.into()
}

fn main() {
    println!("{:?}", (avg(2f32, 7f32), avg(5f64, 13f64)));
}

Don't limit your sights. LLVM and Zig also support IEEE f16 and f128, so those probably will make it into Rust at some point.

There's also a different 16-bit variant in use for deep learning, also supported by LLVM and Zig. (The Rust proposal is to call the type f16b.) See half, a Rust crate that provides f16 and bf16 (its name for the bfloat16 format).


Ahh, thank you, that's very interesting and somewhat in line with what I'm looking to do. I'm not sure if the simulations I'm looking to run would remain accurate down to 16 bits, but it's definitely worth checking out.

A stretch goal for what I'm working on would be to implement SIMD and/or CUDA support (specifically for ML goals). The other stretch goal is targeting WebAssembly, and halving float memory storage for WASM seems ideal, though WASM also throws a wrench into threading. That part is easy enough to handle with #[cfg(target_arch = "wasm32")]; it's the selection of the float type that feels like it should have some silver-bullet answer. Isn't this exactly the kind of thing the concept of "zero-cost abstractions" is aiming to handle?
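For what it's worth, a minimal sketch of that cfg selection (the function name run_jobs and the single-threaded fallback are my assumptions, not anything from a real crate):

// Native targets: fan work out across OS threads.
#[cfg(not(target_arch = "wasm32"))]
fn run_jobs(jobs: Vec<fn()>) {
    let handles: Vec<_> = jobs
        .into_iter()
        .map(|job| std::thread::spawn(job))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}

// wasm32: no std threads by default, so just run sequentially.
#[cfg(target_arch = "wasm32")]
fn run_jobs(jobs: Vec<fn()>) {
    for job in jobs {
        job();
    }
}

fn main() {
    run_jobs(vec![|| println!("job 1"), || println!("job 2")]);
}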

The "best" option for now seems to be something based on @2e71828's answer. Then I would only need to write separate implementations for each specific float type in the few places where it matters. Leave all the surface entities/functions using the generic selected Float, and write solver functions specific for f16 / f16b / f32 / f64 / f128 as desired.

Nonetheless, learning the best and most "Rust" way to set everything up means I keep circling back to refine my original assumptions as I learn more details of the inner workings of everything.

f16 is a storage format; it is expanded to f32 when used for computation. Thus intermediate results don't lose precision as rapidly as the f16 format seems to imply.

Contrast that with f16b, where the computations are also performed at the reduced significand precision, to enable more hardware parallelism in ML accelerators (for a given gate count).
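A minimal sketch of that storage-vs-compute distinction using the half crate mentioned above (the arithmetic is explicitly round-tripped through f32; this is my illustration, not code from the crate's docs):

use half::f16;

fn main() {
    // f16 is compact in memory...
    let a = f16::from_f32(1.5);
    let b = f16::from_f32(2.25);

    // ...but arithmetic widens to f32 and only narrows the final result,
    // so the intermediate sum keeps full f32 precision.
    let sum = f16::from_f32(a.to_f32() + b.to_f32());
    println!("{}", sum);
}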
