I believe that most libraries and applications should always prefer checked arithmetic (e.g. a.checked_add(b) instead of a + b) and conversions (a.try_into()? instead of a as u32). Unchecked operations have their purpose, but they shouldn't be the default choice. Unfortunately, there are a lot of inconveniences when using checked alternatives. I created cadd to alleviate most of these inconveniences:
All functions return Result (no more Options that require ok_or_else; easy to integrate with anyhow or with custom error types).
Error messages are useful: they show the failed operation and its inputs, and even a backtrace (if enabled).
Function names are short and predictable: add "c" in front of the unchecked alternative to get the improved version: cadd, cdiv, cilog, and so on.
.into_type::<T>() and .try_into_type::<T>() adapters: you no longer need to rewrite expr as u32 as u32::try_from(expr)? because of type inference issues.
Pair it with some of Clippy's lints (arithmetic_side_effects, cast_possible_wrap, cast_precision_loss, cast_sign_loss) and eliminate unexpected overflows, truncations, and panics from your code.
Many years ago (1985!) we built a library for multiprecision floating point that used a similar syntax. Potential users HATED it. We ended up producing a custom Fortran compiler that turned floating point a+b into add(a,b). On another project where we didn't control the compiler, I wrote an Awk script (500 lines!) that modified the .s file to replace the add instruction with a specialized call to a function that effectively did add(a,b). Is a possible Rust alternative a macro that takes the source of a function with a+b and produces a function with add(a,b)?
It is possible, but it seems to be less flexible and harder to understand and debug. For example, if you want to use different kinds of operations (checked, wrapped, etc.) together, how do you specify the kind of operation in the macro? What happens when an error occurs? Does it return the error as an expression, or does it "throw" it like ? does? What if there is another ? or even await in the macro (like math!(a + b.await? + c))? No matter how you decide, there will be either too many limitations or too many hidden behaviors. With plain functions, you get the familiar syntax, full flexibility, full compatibility with await, ?, closures, and everything else in the language, and full code completion compatibility in IDEs.
It is a valid approach, but I like it less for the following reasons:
You still have to do something with the error. You can't have a + b * c, you can only have (a + (b * c)?)?.
New types will require type conversions when interfacing with external code.
You can only choose one kind of operation (e.g. checked). It won't work when you need to do different kinds of operations on the same data.
Changing type definitions because you need to do a checked operation seems too intrusive and non-local. Other parts of code may have nothing to do with arithmetic operations, but they may need to be changed just because the data type changed.
That said, new types make it easier to make sure you don't accidentally use a non-checked operation.
Not necessarily -- you could have the type wrap not a T but an Option<T>, and thus a + b * c would give you a Checked(None) if any of the operations overflowed.
Huh, cool crate! I do appreciate the more ergonomic nature of this compared to the std checked arithmetic APIs.
One thing I (and I suspect potential users) might find valuable in the docs is a rough order of magnitude number in how big the performance hit is.
This old thread indicates there's a pretty significant impact, and the README already calls that out, but it'd be nice to have a rough idea of how much.