Implementing Zero trait for an enum type

Also, if you just want to accept AnyNumber, why do you need to implement Zero? You can already create a zero for any variant of AnyNumber.
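For illustration, here is a minimal sketch of that point. The actual variants of AnyNumber aren't shown in this thread, so the ones below are assumed:

```rust
// Hypothetical sketch: the thread's actual AnyNumber isn't shown,
// so these variants are assumed for illustration.
#[derive(Debug, PartialEq)]
enum AnyNumber {
    U8(u8),
    I32(i32),
    F64(f64),
}

impl AnyNumber {
    // A zero matching the variant of `self`, with no `Zero` impl on the enum.
    fn zero_like(&self) -> AnyNumber {
        match self {
            AnyNumber::U8(_) => AnyNumber::U8(0),
            AnyNumber::I32(_) => AnyNumber::I32(0),
            AnyNumber::F64(_) => AnyNumber::F64(0.0),
        }
    }
}

fn main() {
    assert_eq!(AnyNumber::I32(42).zero_like(), AnyNumber::I32(0));
    println!("ok");
}
```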

Is there a reason why this can't be:

fn numbers<T: PrimInt>(input: &Series<T>) -> Result<impl Iterator<Item = T>> {
    Ok(input.into_no_null_iter())
}

Yes. The Matrix is of type AnyNumber. I'm definitely open to rethinking how I'm going about this. The overall picture is as follows.

I'm starting with a polars::DataFrame. It has a type called Series, which hosts ChunkedArrays from which I can extract the underlying data. If I know the type is u8:

series.u8()?.into_no_null_iter()

u8 implements all the traits required to run the decile computation, which leverages nalgebra. Those traits include Add, Ord, Zero... among others.

The return type of decile is Vec<u32> - fixed. Once I compute decile, I append it to my polars DataFrame.

I don't know ahead of time the type underlying the Series. I want to be able to provide a service that lets users compute decile for a Series that has a compatible underlying data type (e.g., not String, nor struct etc.).

Does that make sense?

I need to include floats... but let me play with what you are proposing using another trait.

Okay, then "T: Num + FromPrimitive + ToPrimitive + …", for example.


See also this post by me (which also covers NumCast).


@jbe Is Wrapper<T> relevant for what I'm doing?

I don't think Wrapper<T> is what you need in either case (assuming you mean wrapper::Wrapper).

That trait allows you to unwrap a particular type T.

In contrast, Num and the casting traits allow you to convert from/into any primitive numeric types.

use num::{Num, ToPrimitive};

fn foo<T: Num + ToPrimitive>(arg: T) {
    println!("as double: {}", arg.to_f64().unwrap());
    if let Some(fitting_int) = arg.to_i16() {
        println!("as i16: {}", fitting_int);
    }
}

fn main() {
    foo(25);
    println!("---");
    foo(5.8);
    println!("---");
    foo(10000000);
}


Output:

as double: 25
as i16: 25
---
as double: 5.8
as i16: 5
---
as double: 10000000


Another example:

use num::{Num, NumCast, ToPrimitive};

fn bar<T, U>(a: T, b: U) -> T
where
    T: Num + NumCast,
    U: ToPrimitive,
{
    a + <T as NumCast>::from(b).unwrap()
}

fn main() {
    let a: f64 = 6.2;
    let b: i32 = 5;
    println!("{}", bar(a, b));
}


Output:

11.2


So, here is my take.
At the beginning you have

series

This is untyped, in a sense.
Then you apply

series.u8()
series.f32()
...

to get into a typed world.
Then, for each of your numeric types, you apply a function. If possible, write this function generically over your numeric type. Inside this generic function, you can use all sorts of intermediate data structures. In particular, object-safety is never needed

(Object safety is needed if you want to unify your numeric types into a single type, but my proposal is instead to have "parallel code flow lanes".)
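A minimal, std-only sketch of those "parallel lanes" might look like this. The enum below is a hypothetical stand-in for polars' Series, and the names are invented for illustration: one match sits at the untyped/typed boundary, and everything downstream is a single generic, monomorphized function.

```rust
// Sketch of "parallel code flow lanes": an untyped carrier (a stand-in
// for polars' Series) is matched once, then all downstream work happens
// in one generic function, monomorphized per type.
enum UntypedColumn {
    U8(Vec<u8>),
    I32(Vec<i32>),
    F64(Vec<f64>),
}

// Generic "lane": works for any numeric type that converts to f64.
fn mean<T: Copy + Into<f64>>(values: &[T]) -> f64 {
    let sum: f64 = values.iter().map(|&v| v.into()).sum();
    sum / values.len() as f64
}

// One match at the untyped/typed boundary, then delegate.
fn column_mean(col: &UntypedColumn) -> f64 {
    match col {
        UntypedColumn::U8(v) => mean(v),
        UntypedColumn::I32(v) => mean(v),
        UntypedColumn::F64(v) => mean(v),
    }
}

fn main() {
    let col = UntypedColumn::I32(vec![1, 2, 3, 4]);
    assert_eq!(column_mean(&col), 2.5);
    println!("ok");
}
```

No trait object is needed: each arm calls the same generic function, and the compiler emits one specialized copy per type.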


Maybe converting everything to f64 would make life simpler?


@curoli you are correct. However, this will be a heavily used design approach (i.e., not just "decile"), and each computation within it will run many times. I'm not yet ready to surrender to the cost of f64 everywhere (I was okay with the storage size, but not the computational expense). Talk to me on Monday and see if I'm still feeling the same enthusiasm! :))

All in all, I'm starting to get a sense that I should lean on monomorphization more than I do in how I think about these types of problems. Along with this, I'm also getting the idea that the problem is less about the ability to cast between types.

@MichaelV Since yesterday I have been thinking "back to the drawing board", i.e., "take a step back". Your thinking is exactly where I'm at.

As of now, based on what you are saying, the issue is when and how to go from the "untyped" world to the typed world. I can "peek" at the underlying type in the Series, then delegate.

The tasks:

  1. extract the data from Series
  2. enumerate and sort
  3. run the computation T -> u32 (in this case)
  4. restore the original sort order

The computation knows nothing about Series. It just needs Vec<_> as input. Thus the separate tasks.
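Steps 2–4 above can be sketched in plain Rust, assuming step 1 has already produced a Vec<T>. The decile formula here is illustrative, not the thread's actual computation:

```rust
// Sketch of steps 2-4: enumerate, sort by value, compute a u32 per
// sorted position, then restore the original order. The decile
// formula is illustrative only.
fn deciles<T: PartialOrd + Copy>(values: &[T]) -> Vec<u32> {
    let n = values.len();
    // 2. enumerate and sort by value, remembering original positions
    let mut indexed: Vec<(usize, T)> = values.iter().copied().enumerate().collect();
    indexed.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    // 3. compute T -> u32 on the sorted data; 4. restore original order
    let mut out = vec![0u32; n];
    for (rank, (orig_idx, _)) in indexed.into_iter().enumerate() {
        out[orig_idx] = (rank * 10 / n) as u32 + 1; // deciles 1..=10
    }
    out
}

fn main() {
    let v = vec![30.0, 10.0, 20.0, 40.0];
    assert_eq!(deciles(&v), vec![6, 1, 3, 8]);
    println!("ok");
}
```

The function knows nothing about Series; it only sees a slice, which keeps the tasks separate as described.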

I get stuck on how to go from Series -> Vec<N: AllRequiredTraits>. (Note: filtering out the concrete types that don't implement AllRequiredTraits is trivial.)

Here is an implementation:

Is this helpful?

(Sorry, yesterday I was on mobile, and I'm still unable to do examples productively on mobile. Otherwise, I would have added this already yesterday).


@MichaelV Thank you very much. The code outlines the task really well.

If I'm reading all of this correctly, where I went wrong was in trying to implement Number (~Constraint) on my enum AnyNumber. What I need to do instead is implement the trait, as you have, for each supported primitive.
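A minimal sketch of that approach, with a hypothetical Constraint-style trait (the name Number and its methods are stand-ins, since the actual trait isn't shown in this thread):

```rust
// Sketch: implement the trait per supported primitive, not on the enum.
// `Number` and its methods are hypothetical stand-ins.
trait Number: PartialOrd + Copy {
    fn zero() -> Self;
    fn as_f64(self) -> f64;
}

// A macro keeps the per-primitive impls from being repetitive.
macro_rules! impl_number {
    ($($t:ty),*) => {$(
        impl Number for $t {
            fn zero() -> Self { 0 as $t }
            fn as_f64(self) -> f64 { self as f64 }
        }
    )*};
}

impl_number!(u8, u16, u32, i32, i64, f32, f64);

// Any generic computation now works for every supported primitive.
fn max_or_zero<T: Number>(values: &[T]) -> T {
    values
        .iter()
        .copied()
        .fold(T::zero(), |m, v| if v > m { v } else { m })
}

fn main() {
    assert_eq!(max_or_zero(&[1u8, 9, 4]), 9);
    assert_eq!(max_or_zero::<f64>(&[]), 0.0);
    println!("ok");
}
```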

