I am implementing a couple of algorithms that should work for both f32 and f64. Most of them work either with dense ndarray matrices or with sprs sparse matrices.
So most of my code will have signatures like `<W: Float>`, where W can be either f32 or f64. But the trait bound is actually not that simple.
What I found out is that:
- Using `num::Float` is not enough for ndarray, because it is missing the `'static` bound.
- Using `ndarray::NdFloat` works for ndarray, but not for sprs, because sprs also needs the `Default` trait.
- For the reason above, `<W: NdFloat + Float>` does not work for sprs either.
I ended up using `<W: NdFloat + Num + Default>`, and I feel that something is really wrong.
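For context, here is a minimal sketch of the kind of function this combined bound produces (the function name and body are made up for illustration; only the bound itself comes from my code):

```rust
use ndarray::{Array2, NdFloat};
use num::Num;
use sprs::CsMat;

// Illustrative only: scale both a dense and a sparse matrix by a scalar.
// NdFloat covers ndarray's needs ('static, ScalarOperand, ...);
// Num + Default mirror the extra bounds sprs asks for.
fn scale_both<W: NdFloat + Num + Default>(
    dense: &Array2<W>,
    sparse: &CsMat<W>,
    factor: W,
) -> (Array2<W>, CsMat<W>) {
    (dense * factor, sparse.map(|&x| x * factor))
}
```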
Is there some plan to polish this mess?
My approach to this topic was the Floating point traits roundtable thread; what I could see there were just wildly different ideas and approaches, and no agreement on the right path to take. So I think a small working group would be a better approach.
About NdFloat: no method, function, or type in ndarray requires NdFloat. It is admittedly a viral trait if you want to use it; it's a convenience for working with exactly f32 and f64. It is also opinionated, for example in that it requires formatting traits: if you are writing numerical code, no doubt your numbers can be printed, and no doubt you are going to want to print them for debugging purposes.
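If it helps, one common workaround (not an official ndarray or sprs recommendation; the trait name below is made up) is to define your own convenience trait that bundles exactly the bounds your crate needs, with a blanket impl so f32 and f64 pick it up automatically:

```rust
use ndarray::{Array1, ScalarOperand};
use num::Float;

// Hypothetical trait alias: only the bounds this particular codebase needs.
pub trait MyFloat: Float + ScalarOperand + Default + 'static {}
impl<T: Float + ScalarOperand + Default + 'static> MyFloat for T {}

// Generic code then carries a single, non-opinionated bound.
fn scale<W: MyFloat>(v: &Array1<W>, factor: W) -> Array1<W> {
    v * factor
}

fn main() {
    let v = Array1::from(vec![1.0_f32, 2.0, 3.0]);
    println!("{:?}", scale(&v, 2.0));
}
```

This keeps the opinionated formatting bounds out of your signatures unless you actually need them.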