If you used computers to do number-crunching long before getting into Rust, there is one particular Rust design decision that is pretty much guaranteed to feel jarring to you: mathematical functions being postfix methods, i.e. sin(x) + 3*log(y) being written as x.sin() + 3.0 * y.ln() (note that f64::log takes a base argument, so the zero-argument form is ln()).
Now, some people may have the brain plasticity it takes to rewire their numerical computing habits and get used to writing code in the latter notation and, most importantly, to quickly read code that uses it. But after a couple of years of continuously using Rust along with other languages that use prefix notation, it is becoming pretty clear that my brain won't.
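For what it's worth, the usual workaround is a thin layer of prefix free functions delegating to the postfix methods. A minimal sketch over plain f64 (using ln for the natural log, since f64::log takes a base):

```rust
// Prefix wrappers delegating to the postfix methods, restoring the
// familiar mathematical notation. Written for plain f64 to stay
// dependency-free; a real version would be generic over a float trait.
fn sin(x: f64) -> f64 {
    x.sin()
}

fn log(x: f64) -> f64 {
    x.ln() // natural log; f64::log(self, base) requires a base argument
}

fn main() {
    let (x, y) = (1.0_f64, 2.0_f64);
    // Same operations, so the results are bit-for-bit identical.
    let prefix = sin(x) + 3.0 * log(y);
    let postfix = x.sin() + 3.0 * y.ln();
    assert_eq!(prefix, postfix);
    println!("{prefix}");
}
```

Of course, this only pushes the problem around: now you need such a wrapper for every method, which is where the trait and macro questions below come from.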
I don't think that would fully resolve it, in the sense that if I implemented Fn(T) for T: Float and T: Real, the compiler would complain because the two generic impls overlap.
For the particular case of Float and Real, that could probably be worked around by implementing the trait for T: Real and enforcing Float: Real at the num_traits level (not sure why that's not the case actually). Problems would arise, however, if two traits represent intersecting sets where each set has elements that the other set doesn't.
I agree that this was not a good example, in the sense that in this case, one trait is a superset of the other. To get a real problem, you need two traits that intersect without one being a superset of the other, as with FloatCore and Real.
(FloatCore is specific to IEEE-style floating-point types, so not all Real types will implement FloatCore. But implementing Real requires access to some kind of libm, so in no_std scenarios, some types will implement FloatCore but not Real.)
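To make the earlier workaround concrete, here is a toy sketch (hypothetical traits, not the real num_traits hierarchy): once the "smaller" trait is declared a subtrait of the "bigger" one, a single blanket impl over the superset covers both, whereas a second blanket impl over the subset would be rejected as conflicting.

```rust
// Toy stand-ins for the superset/subset relationship; the real
// num_traits bounds are richer than this.
trait MyReal {
    fn half(self) -> Self;
}
trait MyFloat: MyReal {
    fn nan() -> Self;
}

trait Eval {
    fn eval(self) -> Self;
}

// One blanket impl over the superset trait. Adding a second one for
// `T: MyFloat` would trigger error[E0119] (conflicting implementations),
// because every MyFloat is also a MyReal.
impl<T: MyReal> Eval for T {
    fn eval(self) -> Self {
        self.half()
    }
}

impl MyReal for f64 {
    fn half(self) -> Self {
        self / 2.0
    }
}
impl MyFloat for f64 {
    fn nan() -> Self {
        f64::NAN
    }
}

fn main() {
    // f64 implements both traits but gets Eval through the single
    // MyReal blanket impl, so there is no overlap.
    assert_eq!(4.0_f64.eval(), 2.0);
    println!("ok");
}
```

This is exactly why intersecting-but-incomparable traits like FloatCore and Real are the hard case: neither can be made a subtrait of the other.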
I wish I could somehow factor out the simple substitution rules that I have (self -> self_: T, Self -> T...) instead of writing a dozen variants of the same thing, but I have the impression that this is not actually possible given current macro_rules limitations.
Nevermind, I just remembered that declarative macros can expand to types and used that to resolve 90% of this particular problem.
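The trick looks roughly like this (an illustrative sketch, not the actual macro from the crate): macro invocations are legal in type position, so a small helper can perform the Self-to-concrete-type substitution, letting one template rule replace many near-identical copies.

```rust
// Hypothetical helper: maps the `Self` placeholder token to a concrete
// type, and passes any other type through unchanged. Rules are tried in
// order, so the literal `Self` match wins when it applies.
macro_rules! subst {
    (Self, $concrete:ty) => { $concrete };
    ($other:ty, $concrete:ty) => { $other };
}

// One signature template instead of one handwritten copy per type:
fn sin_of(x: subst!(Self, f64)) -> subst!(Self, f64) {
    x.sin()
}
fn abs_of(x: subst!(i32, f64)) -> subst!(i32, f64) {
    x.abs()
}

fn main() {
    println!("{} {}", sin_of(0.0), abs_of(-3));
}
```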
I actually did that in my parser, but it creates ambiguity and needs type information to parse correctly.
Is "sin" a function with a single argument, or a variable name? [sin(x)] vs [sin · x]?
That falls apart as soon as you allow user-defined functions…
I did end up parsing into an AST that leaves that ambiguity unresolved, then processing it further once type information is available.
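Roughly like this (an illustrative sketch, not the actual parser): juxtaposition gets its own AST node, and a later pass resolves it into either a call or a multiplication once we know what kind of thing the identifier names.

```rust
// AST with a dedicated node for the ambiguous `sin (x)` case.
#[derive(Debug, PartialEq)]
enum Ast {
    Ident(String),
    Num(f64),
    // `sin (x)`: function call or implicit multiplication? Decided later.
    Juxtapose(Box<Ast>, Box<Ast>),
    Call(String, Box<Ast>),
    Mul(Box<Ast>, Box<Ast>),
}

// Resolution pass, run once name/type information is available.
fn resolve(ast: Ast, is_function: &dyn Fn(&str) -> bool) -> Ast {
    match ast {
        Ast::Juxtapose(lhs, rhs) => {
            let rhs = Box::new(resolve(*rhs, is_function));
            match *lhs {
                // `sin` names a function: this is a call...
                Ast::Ident(name) if is_function(&name) => Ast::Call(name, rhs),
                // ...otherwise it was an implicit multiplication.
                lhs => Ast::Mul(Box::new(resolve(lhs, is_function)), rhs),
            }
        }
        other => other,
    }
}

fn main() {
    let ast = Ast::Juxtapose(
        Box::new(Ast::Ident("sin".into())),
        Box::new(Ast::Ident("x".into())),
    );
    let resolved = resolve(ast, &|name| name == "sin");
    assert_eq!(
        resolved,
        Ast::Call("sin".into(), Box::new(Ast::Ident("x".into())))
    );
    println!("{resolved:?}");
}
```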
I do not understand the enthusiasm for 100% whitespace-separated expression grammars.
To me, these feel like they were specifically engineered to make programmers who can't remember their operator precedence rules (i.e. most of us) miserable.
For sure, you can put parentheses everywhere until it stops looking ambiguous, but then there really isn't a point in using a whitespace-separated expression grammar anymore... And if it's optional, there's no guarantee that your colleague at work will do it.
Therefore, whitespace as a separator is IMO best reserved for operators with well-known precedence rules from mathematics.
// TODO: Try to nerd-snipe a macro expert like dtolnay into either
// generalizing/deduplicating this, or proving that it cannot be done
// while operating within macro_rules' limitations.
// I don't want to go for full proc macros because the compile time hit is
// too high while declarative macros do the job, although in a clunky way.
Nice work with the macro by the way. Looking at the docs I was expecting it to be thousands of lines of manual copy-pasta, but the macro gives you enough syntactic sugar that the full invocation fits into a single file (and is actually fairly readable!).