Please excuse my limited English. I have some questions regarding types and generics in Rust.
Currently, my implementation is as follows. Suppose I want to create a fairly general Complex type (I know there is a well-designed num_complex library for this; I'm just using it to frame the question) and its from_polar method. I defined NumOps and FloatOps traits.
However, I feel that my implementation and type abstractions are somewhat lacking. I’m not exactly sure where the shortcomings are at my current level. Could anyone take a look and suggest better ways to implement this, or share some tips on abstraction techniques?
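Since the original code isn't shown here, here is a hypothetical sketch of the kind of setup being described. Only the names Complex, from_polar, NumOps, and FloatOps come from the question; the trait contents and method bodies are my assumptions:

```rust
use std::ops::{Add, Mul, Sub};

// Assumed shape of the NumOps/FloatOps traits from the question.
pub trait NumOps: Add<Output = Self> + Sub<Output = Self> + Mul<Output = Self> + Sized {}

pub trait FloatOps: NumOps + Clone {
    fn sin(&self) -> Self;
    fn cos(&self) -> Self;
}

impl NumOps for f64 {}

impl FloatOps for f64 {
    fn sin(&self) -> Self { f64::sin(*self) }
    fn cos(&self) -> Self { f64::cos(*self) }
}

// Note: no bounds on T in the type definition itself.
pub struct Complex<T> {
    pub re: T,
    pub im: T,
}

impl<T: FloatOps> Complex<T> {
    pub fn from_polar(r: T, theta: T) -> Self {
        Complex {
            re: r.clone() * theta.cos(),
            im: r * theta.sin(),
        }
    }
}

fn main() {
    let c = Complex::from_polar(2.0f64, 0.0);
    assert!((c.re - 2.0).abs() < 1e-12);
    assert!(c.im.abs() < 1e-12);
}
```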
my suggestions won't be exhaustive by any means, but some stuff I notice:
You don't have any restrictions on T in your pub struct Complex<T> declaration, meaning you could instantiate, say, a Complex<String> or Complex<bool> – probably not what you want. Make sure to put your trait bounds on that T too!
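A minimal sketch of that suggestion, with a stand-in NumOps trait (the real trait's contents are whatever you defined):

```rust
// Hypothetical NumOps stand-in for illustration.
pub trait NumOps {}
impl NumOps for f64 {}

// With the bound on the definition, Complex<String> no longer type-checks:
pub struct Complex<T: NumOps> {
    pub re: T,
    pub im: T,
}

fn main() {
    let c = Complex { re: 1.0f64, im: 2.0 };
    // let bad = Complex { re: String::new(), im: String::new() };
    // ^ error: the trait bound `String: NumOps` is not satisfied
    assert_eq!(c.re, 1.0);
    assert_eq!(c.im, 2.0);
}
```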
Is there a reason FloatOps requires Clone? This seems strange to me, since numbers implement Copy. You're also calling .clone() on r and theta, which should be Copy-able numbers, if I'm not mistaken?
You might already be aware of this, but there is the num_traits crate for generic mathematics, which lets you constrain a type to numeric types like i32 or f64.
Otherwise, I assume you're just trying to implement these yourself to get a feel for traits and abstraction, which I think is always a fun exercise. I'd recommend the #[duplicate_item()] macro from the duplicate crate to cut down on code repetition; it lets you implement the same logic for all of your desired number types in one definition =)
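The same deduplication can also be sketched with a plain macro_rules! macro from std, no extra crate needed. This assumes a small two-method FloatOps trait like the one in the question:

```rust
// A std-only sketch: stamp out FloatOps impls for several
// float types from one definition.
pub trait FloatOps {
    fn sin(self) -> Self;
    fn cos(self) -> Self;
}

macro_rules! impl_float_ops {
    ($($t:ty),*) => {$(
        impl FloatOps for $t {
            // `<$t>::sin` resolves to the inherent method, not this one.
            fn sin(self) -> Self { <$t>::sin(self) }
            fn cos(self) -> Self { <$t>::cos(self) }
        }
    )*};
}

impl_float_ops!(f32, f64);

fn main() {
    assert_eq!(<f64 as FloatOps>::sin(0.0), 0.0);
    assert_eq!(<f32 as FloatOps>::cos(0.0), 1.0);
}
```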
Other than that it seems fine. Like @Sup2.0 mentioned, you can use the num_traits crate. Requiring Clone instead of Copy is fine, unless you want to guarantee that the implementing types can be duplicated with a simple bitwise copy of the source.
Note that the output type is not always the same for the different operations. For example if you use types that record the unit of measurement in the type, when you multiply two lengths you get length^2, but when you add them you get another length. Example: 5m * 4m = 20m² whereas 5m + 4m = 9m. This may not matter to your use case.
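That idea can be sketched with hand-rolled unit-carrying types (Length and Area here are illustrative names):

```rust
use std::ops::{Add, Mul};

// Adding lengths gives a length; multiplying them gives an
// area, i.e. a different output type.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Length(f64); // metres

#[derive(Debug, Clone, Copy, PartialEq)]
struct Area(f64); // square metres

impl Add for Length {
    type Output = Length;
    fn add(self, rhs: Length) -> Length { Length(self.0 + rhs.0) }
}

impl Mul for Length {
    type Output = Area; // note: not Length
    fn mul(self, rhs: Length) -> Area { Area(self.0 * rhs.0) }
}

fn main() {
    assert_eq!(Length(5.0) + Length(4.0), Length(9.0)); // 5m + 4m = 9m
    assert_eq!(Length(5.0) * Length(4.0), Area(20.0));  // 5m * 4m = 20m²
}
```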
Oh wow, interesting. Is there a philosophy behind that?
I can imagine maybe that'd allow storing an intermediate HashMap<J,V> that you want to .into_iter().map() to turn into a proper HashMap<K,V>, but I'd just keep it as an Iterator or maybe Vec<(J,V)> for that. Surely having a struct with none of the methods you expected to be implemented would throw you off?
I'm pretty new to Rust too, still learning every day :]
I think it's about flexibility. You may be able to sort a Vec<u32> but not a Vec<SomethingWithNoOrder>. If the bounds were placed on the type definition, then you wouldn't be able to construct a Vec<SomethingWithNoOrder> at all; with the bounds on the impl instead, you can construct it but just can't call .sort.
Generally you only place bounds on the type definition if you need something from that bound. For example if you want to create a struct that uses an associated type on a trait.
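That pattern can be sketched with a small Vec wrapper, mirroring how Vec<T> itself works: the Ord bound lives on one impl block, not on the type definition.

```rust
struct MyVec<T>(Vec<T>);

// No bounds here: any T can be constructed and pushed.
impl<T> MyVec<T> {
    fn new() -> Self { MyVec(Vec::new()) }
    fn push(&mut self, x: T) { self.0.push(x); }
}

// Bound only on this impl: sort() exists only when T: Ord.
impl<T: Ord> MyVec<T> {
    fn sort(&mut self) { self.0.sort(); }
}

struct SomethingWithNoOrder; // no Ord impl

fn main() {
    // Construction works without any bounds:
    let mut v = MyVec::new();
    v.push(SomethingWithNoOrder);
    // v.sort(); // would not compile: SomethingWithNoOrder is not Ord

    let mut nums = MyVec::new();
    nums.push(3u32);
    nums.push(1);
    nums.sort();
    assert_eq!(nums.0, vec![1, 3]);
}
```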
oh, no certainly, I've seen and used this a lot for conditionally implemented specialist methods.
It just seems counterintuitive to me to apply that to a whole struct, since, for instance for a HashMap, isn't the keys being hashable essential for being able to construct the hashmap at all?[1]
And in OP's case since it's representing a complex number, I'd think you want any Complex<T> representation to be one of numbers, not any arbitrary object.
ngl I would not be surprised if you tell me without K: Hash you just can't call ::from(), but then what's even the point of being able to declare a HashMap<J,V>? ↩︎
It's because it simplifies using the type in other places that don't necessarily care.
For example, while a BinaryHeap<T> needs T: Ord for most things, it doesn't actually care about T: Ord for BinaryHeap<T>: Debug -- for that it just needs T: Debug.
So it's nice to not need to write where T: Debug + Ord if you only wanted to be able to debug-format your BinaryHeap<T>. And that's particularly true with derives.
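The same split is easy to check with HashMap from std (NoHash is an illustrative type that implements Debug but not Eq or Hash):

```rust
use std::collections::HashMap;

#[derive(Debug)]
struct NoHash;

fn main() {
    // HashMap::new() places no bounds on K, so this compiles:
    let m: HashMap<NoHash, i32> = HashMap::new();
    // Debug-formatting only needs K: Debug and V: Debug:
    println!("{m:?}");
    assert!(m.is_empty());
    // m.insert(NoHash, 1); // would not compile: NoHash is not Eq + Hash
}
```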
See, for example, how as of recently you no longer need T: Ord to make an empty BinaryHeap<T>.
I see!! yes, this makes so much sense. I've been bitten so much in Haskell before, where using some polymorphic function required me to add yet another constraint to an outer function, which got quite tedious.
Tradeoffs at the end of the day, as is much of software development. Thanks for explaining, much appreciated!
That normalcy has asterisks to it. Is it often preferred? Yes. There are style guides promoting it, the stdlib also does it.
However, the reason it is preferred is that the trait bounds are viral: if you put them on the type definition, you will have to repeat them absolutely everywhere.
The "late bounds" style is basically a user-space bodge for a deficiency in Rust's design.
Even if more implied bounds were available, they wouldn't be a slam-dunk. Today I can have a type like this...
```rust
enum E<T> {
    V1(Vec<T>),
    V2(HashSet<T>),
    // ...
}
```
...and use it with any sized T. But that wouldn't be possible if HashSet<T> had a where T: Eq + Hash bound, implied or not. The bounds are still viral in the sense that they're inflicted on downstream; the viral nature is not just about having to repeat bounds. The type as a whole is considered non-well-formed without the bound.
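A self-contained version of that example, showing the enum being used with a type that implements neither Eq nor Hash (NotHash is an illustrative name):

```rust
use std::collections::HashSet;

// NotHash implements none of Eq/Hash, yet E<NotHash> is a perfectly
// fine type today, because HashSet<T> has no bounds on its definition.
struct NotHash;

enum E<T> {
    V1(Vec<T>),
    V2(HashSet<T>),
}

fn main() {
    // We can construct the Vec variant without T: Eq + Hash:
    let e: E<NotHash> = E::V1(vec![NotHash]);
    match e {
        E::V1(v) => assert_eq!(v.len(), 1),
        E::V2(_) => unreachable!(),
    }
}
```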
And once you have an implied bound, it's a breaking change to remove that bound. Whereas non-implied bounds can be loosened because they have to be restated elsewhere, instead of assumed. Making a bound implied is a commitment, similar to how removing a bound is.
Ideally we would have opt-in implied bounds. But I still feel they'd make the ecosystem worse in some ways, as people will go with them for the ergonomic reasons -- making breaking changes or unneeded bounds more likely, and ruling out downstream uses of the type like my example enum.
I.e. if implied bounds were possible from the start, I'd still hope std was designed how it is today, without the bounds on HashSet and so on.