If you used computers to do number-crunching long before getting into Rust, there is one particular Rust design decision that is pretty much guaranteed to feel jarring to you: mathematical functions being postfix methods, i.e. `sin(x) + 3*log(y)` being written as `x.sin() + 3.0 * y.log()`.
Now, some people may have the brain plasticity it takes to rewire their numerical computing habits and get used to writing code in the latter notation and, most importantly, to quickly read code that uses it. But after a couple of years of continuously using Rust alongside other languages that use prefix operators, it has become pretty clear that my brain won't.
So I decided to instead wrap the `num_traits` API in a manner that works the way my brain does! And thus, https://crates.io/crates/prefix_num_ops was born.
If we're ever allowed to implement the `Fn*` traits for items by hand, then you'd be able to get around the namespace collisions quite easily.
I don't think that would fully resolve it, in the sense that if I implemented it for `T: Float` and `T: Real`, the compiler would complain because the two generic impls overlap.
For the particular case of `Real`, that could probably be worked around by implementing the trait for `T: Real` and enforcing `Float: Real` at the `num_traits` level (not sure why that's not the case, actually). Problems would arise, however, if two traits represent intersecting sets where each set has elements that the other set doesn't.
`Float` came before `Real` -- that constraint would be a breaking change for those implementing `Float` manually.
Could you just write `f64::sin(x) + 3.0 * f64::ln(y)`?
I could; however, that would get noisy in a more complex expression.
`num_traits` could provide `impl<T: Float> Real for T`; the methods would then be implemented for all `T: Real`, and so you'd use `Real` instead of `Float`.
I agree that this was not a good example, in the sense that in this case, one trait is a superset of the other. To get a real problem, you need a situation where two traits intersect but neither is a superset of the other, as in the case of `FloatCore` and `Real`, for example.
`FloatCore` is specific to IEEE-style floating-point types, so not all `Real` types will implement `FloatCore`. But implementing `Real` requires access to some kind of libm, so in `no_std` scenarios, some types will implement `FloatCore` but not `Real`.
On an unrelated topic, I would be happy if a macro guru like @dtolnay or @geal could have a quick look at the macro that does the bulk of this crate's work, and tell me if it feels optimal as far as declarative macros go (I do not want to use proc macros for compile-time reasons).
I wish I could somehow factor out the simple substitution rules that I have (`self` -> `self_: T`, `Self` -> `T`, ...) instead of writing a dozen variants of the same thing, but I have the impression that this is not actually possible given current `macro_rules` limitations.
Never mind, I just remembered that declarative macros can expand to types, and used that to resolve 90% of this particular problem.
I would write a DSL like `math!(sin(x) + 3 log(y))`.
It isn't really complicated either.
If you're creating a DSL, you can go all the way with `math!(sin x + 3 log y)`.
I actually did that in my parser, but it creates ambiguity and needs type information to parse correctly.
Is `sin` a function with a single argument, or a variable name? That is, `sin(x)` vs `sin · x`?
In a math-oriented language, it seems reasonable to make `sin` a keyword to resolve that ambiguity.
That falls apart as soon as you allow user-defined functions…
I did end up parsing into an AST that does not resolve that ambiguity, then processing it further once type information is available.
I do not understand the enthusiasm for 100% whitespace-separated expression grammars.
To me, these feel like they were specifically engineered to make programmers who can't remember their operator precedence rules (i.e. most of us) miserable.
For sure, you can put parentheses everywhere until it stops looking ambiguous, but then there really isn't a point in using a whitespace-separated expression grammar anymore... And if it's optional, there's no guarantee that your colleague at work will do it.
Therefore, whitespace as a separator is IMO best reserved for operators with well-known precedence rules from mathematics.
I've been learning a lot about macros lately, so I took up your challenge of evaluating the macro you used. I got rid of 52 lines of code.
I love this little note:
```rust
// TODO: Try to nerd-snipe a macro expert like dtolnay into either
// generalizing/deduplicating this, or proving that it cannot be done
// while operating within macro_rules' limitations.
// I don't want to go for full proc macros because the compile time hit is
// too high while declarative macros do the job, although in a clunky way.
```
Nice work with the macro by the way. Looking at the docs I was expecting it to be thousands of lines of manual copy-pasta, but the macro gives you enough syntactic sugar that the full invocation fits into a single file (and is actually fairly readable!).
You can also use those functions like so: