How to make Rust usable in schools

Rust is an awesome language that empowers highly scalable architectures, memory safety, and performance, but it is very different from any other programming language, and that is why it is hard to learn. If Rust is not used by maths teachers, it is not because Rust has ownership, lifetimes, and so on, but because any simple calculation requires type conversions between floating-point, unsigned, and signed types.
Indeed, numbers in Rust are painful to manipulate compared to other languages: you need to add type conversions everywhere. I think implementations should be added for Add<uX, fX>, Add<fX, uX>, and so on, where the operation keeps the more precise of the two types. For instance, Add<f32, u8> should return an f32.

    let b = 1.0;
    let a = b * 2; // error: cannot multiply `{float}` by `{integer}`

This doesn't work currently, and as a maths teacher I can't force my students to use type conversions everywhere. I could create a crate for that, but my students use the playground, so my crate isn't usable there. I also think this could make Rust a little more productive, since we wouldn't have to write pointless type conversions.
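A crate along those lines might wrap a float in a newtype and implement the mixed-type operators on it. A minimal sketch, assuming a made-up `Num` wrapper that always keeps the more precise type (here f64):

```rust
use std::ops::Add;

// Hypothetical newtype: mixed integer/float arithmetic that keeps
// the more precise type. `Num` is an illustrative name, not a real crate.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Num(f64);

impl Add<u8> for Num {
    type Output = Num;
    fn add(self, rhs: u8) -> Num {
        // The conversion still happens, but inside the impl,
        // not at every call site.
        Num(self.0 + rhs as f64)
    }
}

fn main() {
    let x = Num(1.5) + 2u8; // no explicit conversion here
    assert_eq!(x, Num(3.5));
}
```

The orphan rules prevent implementing `Add<u8>` directly for `f32`/`f64` outside the standard library, which is why a crate would have to go through a wrapper type like this.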

Another unfortunate thing about Rust is that x++, ++x, --x, and x-- don't exist. This kind of operator is friendlier than:

    x += 1;

I know Rustaceans are very stubborn and don't want the language to become simpler, but these little modifications could make Rust more usable in schools.

Thanks for reading.
Leave your impressions, recommendations, and so on below!


I don't care that much about the Add and Mul traits, especially when one of the arguments is a floating point number, but I do think Rust made the right call regarding x++ and friends. Rust should optimize for people who use it to build reliable software, and I prefer making it more difficult to introduce bugs over mild inconveniences such as the lack of x++. Using x += 1 is simply much harder to get wrong than x++ because you have to put it on a separate line.
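Concretely, in Rust `x += 1` is an expression of type `()`, so it cannot be smuggled into a larger expression the way `x++` can in C. A small illustration:

```rust
fn main() {
    let mut x = 0;
    x += 1; // a statement; its value is `()`, not the new x
    // let y = (x += 1) * 2; // does not compile: `()` is not a number
    assert_eq!(x, 1);
}
```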

I don't know what specifically you are teaching, but if you need a programmable calculator, why not a Python notebook or even something like Maple?


In mathematics there are all kinds of numbers defined:

Natural numbers, integers, rationals, real numbers, irrationals, imaginary numbers, complex numbers...

These are very different objects from different sets. It is important for students of maths to learn the differences.

In maths there are also functions and notation defined to perform these conversions from one number set to another. For example, floor(3.2) = ⌊3.2⌋ = 3.
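Rust spells out the same conversion explicitly:

```rust
fn main() {
    // floor(3.2) = ⌊3.2⌋ = 3, written out step by step:
    let x: f64 = 3.2;
    let n = x.floor() as i64; // the set change (reals -> integers) is visible
    assert_eq!(n, 3);
}
```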

As such, I cannot agree that a programming language should silently convert from one to another willy-nilly.

Honestly, writing "x += 1;" is hardly more of a burden than writing "x++;". I mean really, that saves typing all of one character out of five on the infrequent occasions one writes such a thing.

Not having "x++", "++x" removes a bunch of ambiguity that has often gotten C programmers into a mess.

As a mathematician I would expect you to complain about things like this:

    x = x + 1;

Maths students should look at that in despair. WTF, x cannot be equal to x + 1.


Adding automatic type conversions to arithmetic probably isn't what you really want; rather, you want the inference of literals extended to cross the integer/float boundary. I'd love that, and it would address at least the examples you give. Type conversions in expressions can be dangerous, but numeric literals already have type inference, and it could be possible to extend that inference to determine that 2 is actually an f64.
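To illustrate: today literal inference picks the concrete type from context within each numeric family, but it refuses to cross the integer/float boundary. A small sketch of the current behaviour:

```rust
fn main() {
    // Inference already picks the concrete type from context:
    let a: f64 = 2.0; // `2.0` is inferred as f64
    let b: u8 = 2;    // `2` is inferred as u8
    // ...but an integer literal will not become a float:
    // let c: f64 = 2; // error: expected `f64`, found integer
    assert_eq!(a, 2.0);
    assert_eq!(b, 2);
}
```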

I'm not sure in what circumstances you think ++ would be beneficial. As an expression, as in C or C++, I think it's an unmitigated disaster, leading to the option of writing simple code that no human can read (e.g. c = c++;). Having the meaning of code depend on the order of evaluation is already an issue with function calls having side effects, but adding an operator with side effects makes it even worse. If you're thinking of an x++ statement, as in Go, then it's mostly harmless, but it only gains you one character of code relative to x += 1.


The world does not need one programming language that is perfect for everything. A programming language is a tool. The best drill is a drill. A drill can even make an excellent screwdriver or wrench, but you wouldn't complain that a drill is hard to use as a hammer. There are languages for math notation. An interpreter for such a language is its own domain-specific concern.

To the point about inferring integers and floating points, I think the compiler's excellent type inference is causing confusion because it works so well. What you're asking for is behind-the-scenes type coercion. Static or strict typing cannot have caveats like that. Inference just means that the compiler can pull type definitions from somewhere else in non-ambiguous contexts. If you want a more general "number" type with strict types, TypeScript offers exactly that. When you have to convert a numeric type, it's not because you "need to add type conversions everywhere". It's because those conversions must happen in order for the code to work as written. Rust requires you to be explicit about what is happening instead of doing work and hiding it from the programmer. If you want a language that has a strategy for hiding that work, then you should use one, instead of trying to make a drill usable as a hammer.
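Concretely, the explicitness looks like this: the conversion is written where it happens, here using the lossless `f64::from`:

```rust
fn main() {
    let b = 1.0_f64;
    let n: u8 = 2;
    // The u8 -> f64 widening is visible at the call site:
    let a = b * f64::from(n);
    assert_eq!(a, 2.0);
}
```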


First, this is completely subjective. I found Rust completely painless to learn. The secret? Years of C++ and a bit of interest in FP before I had heard of Rust.

Once again, when someone complains that "Thing X is hard to do in Rust", it's not the fault of Rust. The problem is hard in general – and indeed, finite-precision numbers are not fun to deal with if our mathematically trained brains expect integers to be countably infinite. (And fallible type conversions in general are also not easy to handle correctly.)

The problem is that most other languages don't expose this problem. Instead, they make it easy to write sloppy code, and hard to write correct code.

This, if anything, is the antithesis of pedagogy. I have a couple of years of experience teaching C and C++ to people online and offline (at a traditional university), and the general experience is that most people never properly learn these languages, because it's just too easy to write bad code and Undefined Behavior in them.

Consequently, I wouldn't want anything that made it easier to skim over perhaps annoying but certainly important details. At least let's please keep one single language that doesn't allow the perpetually-reinforced sloppiness.


I ponder, from time to time, the question of starting programming beginners with Rust. And the topic has been discussed here. Some argue that it is too complex, and that beginners need the supposed ease of Python or JavaScript or whatever.

I would argue one does not need to expose the whole language to beginners from the get-go. Evidence for that is that there are millions of hobby users of the Arduino who are using that most complex of languages, C++. They get by just fine.

Meanwhile one can write Rust as easily as one can write Javascript : Writing Javascript in Rust ... almost

I don't mind that a language like Python or JS will handle 2.1 and 2 in the same way without coercions. After all they make no distinction, they only have one number type, "numbers". In JS everything is a 64 bit float. Not sure what Python does but it handles huge numbers of digits as well.


I have long experience with Python; it has three main numeric types: int, float and complex. int is an integer type that, like Rust's i32, represents integers exactly, but it has no size limit (it is arbitrary-precision). You can express huge numbers like 2**1024 and it will be fine. The only limit is the heap memory available to CPython.

float is just an f64 and nothing more. complex is like a little struct with fields named real and imag, both floats.

Python does the conversion between number types implicitly, and sadly this is not the only type conversion it does implicitly. Anyway, number conversion in Python never seemed like a problem to me, because Python is already duck typed, and in most cases you can use and convert ints and floats just fine. An operation between two ints gives an int, and an operation between a float and an int gives a float. This seems like what @ccgauche wanted.

Also, I don't think Rust is a good language to teach as a first programming language only for use in maths. Python is very good for that in my opinion; it is ideal for small or medium-sized maths problems, with its functional programming tools and ease of use. Especially counting problems – I solved a lot of them for fun with the help of the itertools module.


For me the problem here is that the programmer is ignoring the very real mathematical difference between integers and floating-point numbers. Integers are a subset of the real numbers. Floating-point numbers are not real numbers; they are a machine-representable approximation with different characteristics. [This was never more apparent than with the original IBM 360 hexadecimal floating point, where even known-convergent real-number algorithms like Newton-Raphson might fail to converge.]

    let b = 1.0;
    let a1 = (3 + 10) / 4 * b;       // under integer coercion: 13 / 4 == 3
    let a2 = (3.0 + 10.0) / 4.0 * b; // real-valued arithmetic: 3.25

Computed with real numbers, a1 == a2. However, depending on whatever implicit type coercions the compiler employs, these either fail to compile or may lead to different results.
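Written with explicit types, a compiling version makes the divergence visible:

```rust
fn main() {
    let int_result = (3 + 10) / 4;         // integer division: 13 / 4 == 3
    let float_result = (3.0 + 10.0) / 4.0; // real-valued: 3.25
    assert_eq!(int_result, 3);
    assert_eq!(float_result, 3.25); // 3.25 is exactly representable
}
```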

I personally do not find it onerous to add ".0" at the end of what look like integer literals when I actually mean literals in some floating-point domain. For me at least, leaving that ".0" off the floating-point literal is just evidence of sloppy thinking. Perhaps that's my own idiosyncrasy as a mathematician trained in numerical analysis, but I view ignoring the differences between integer/real arithmetic and floating-point arithmetic as a disservice to the student.


This is the kind of change that sounds like a simplification at first, but actually complicates things more.

  • Problem: It's hard to convert between all these numeric types.
  • Solution: Let the compiler figure it out!
  • Consequence: I still have to understand all the numeric types to understand what my program is doing, and I also have to memorize a table of implicit promotions to understand what the compiler is doing. :frowning:

Implicit numeric promotion is just what C, C++ and any number of other languages do. A more modern interpretation of the same idea is the numeric type hierarchy found in a language like Julia. What do we see with those languages? People still get confused, except now they are confused by both the mismatch between finite types and mathematical abstractions and the specific promotion rules of the language. I didn't even have to go looking for an example because I was reading this already.


(OP: You have managed to find a commonality in the community.)

The maths teacher is (in general) using the computer to solve a problem. A software developer on the other hand creates the program others use. This requires far more rigor* than just one problem. Rust is great in that many faults get found early, (many find this discouraging.)
(* The definition seems fitting to the post.)

Lack of ++ encourages item iteration or ranges over traditional increments.
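For example, where C would use a counter with ++, idiomatic Rust reaches for a range or an iterator:

```rust
fn main() {
    // Instead of `for (i = 0; i < 5; i++) sum += i;`:
    let mut sum = 0;
    for i in 0..5 {
        sum += i;
    }
    assert_eq!(sum, 10);

    // Or with no mutable counter at all:
    let sum2: i32 = (0..5).sum();
    assert_eq!(sum2, 10);
}
```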


I'm sure someone somewhere has done exactly that.

Fun fact: you don't even need to do .0! Just adding . is sufficient to make it a float.
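A quick check on the playground:

```rust
fn main() {
    let x = 2.; // just a trailing dot makes it a float (f64)
    assert_eq!(x, 2.0);
}
```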


I know that, but I never do it. I consider Rust's permitting omission of the ".0" (or ",0", depending on locale) to be a minor bad decision in the language, precisely because it leads to what I consider sloppy thinking. (I guess that including the "." or ",", depending on locale, while omitting the leading or trailing zero could be characterized as an improvement, perhaps only semi-sloppy thinking :wink:.) Just my opinion; I'm certain that many others will disagree.

As for the [pre|post]-[increment|decrement] operators, they are often said to exist because of the PDP-11's auto-increment addressing modes (though Dennis Ritchie noted that ++ predates the PDP-11, originating in B). The problem I have with them is that they don't specify the increment/decrement value in a way that is visible to later maintenance programmers: is it one byte, one 16-bit word, one 32-bit word, one 64-bit word, or one struct-sized entry in an array of structs? Are they place expressions, or not? To me, {++x, --x, x++, x--} are simply archaic examples of obfuscated code.


I can tell that none of the people commenting on this thread ever suffered with Fortran II. The "mixed mode arithmetic" that came with Fortran IV was a blessing. The problem wasn't with literals, where you could just add a ".0" on your punch card. It was with variables.

As for teaching, my son was a math major whose first exposure to programming was C++. He'll never write another line of code. Imperative languages don't match the way a math major's brain is wired. I recommend math majors start with a functional language.


Raku, the language formerly known as Perl 6, is an example of a language that takes the opposite direction from Rust and pushes it as far as it'll go – making numbers work, as much as possible, exactly like a seventh-grade maths student's idea of what a number is. I don't know where it falls on the "functional"/"imperative" scale, if such a thing even exists, but if you're looking for a language with a minimum of number-related noise, well, "no floating point noise" is right there in the examples on the home page.


That is not my experience. I was first introduced to programming, in an imperative language, at age 17 – a time when we were all getting into calculus and the like, pre-university maths. It seemed quite natural.

It had expressions with variables and arithmetic operators that kind of looked like what one sees in algebra (even if that damn "=" sign is used wrongly)

It had sequential statements and "if" conditionals, as one might see in performing the steps of addition or multiplication: "If result greater than 10, subtract 10 and carry a 1 into the next digit".

It had loops, as one sees in all kinds of mathematical procedures, from adding multi-digit numbers to Euclid's algorithm for the GCD to matrix multiplication to Newton-Raphson root finding.

I might guess your son's issue with his first exposure to programming was not that it was in an imperative language but that it was in C++, a horribly complex mess. You see the language I refer to above was BASIC.

Had they started us off with a functional language like Haskell, we would have looked at all that cryptic notation like a herd of non-comprehending dumb cows.

On the other hand, in that year we were also expected to learn assembler and turn in a non-trivial project in assembler, so perhaps we could have picked it up.

When it comes to doing actual maths, though – as in forming proofs rather than just calculating results – I don't think any mainstream programming language helps one bit.


There is the problem that not every i64 can be represented exactly within an f64, nor every f64 within an i64, so you really do need to be aware of which type you're using, and of its limitations for arithmetic.

Lua used to be just f64, which is at least consistent, but 5.3 added a hidden i64 as an optimization. So if you're just using integers, it stays on integers, but it converts otherwise. But... guess what: if you add one to the largest integer, it wraps – yet if the value had been converted to an f64 at some point, it doesn't. So, in Lua 5.3:

    print(math.maxinteger + 1)        -- integer arithmetic: wraps to math.mininteger
    print(math.maxinteger + 0.0 + 1)  -- a float by now, so it does not wrap
So now there is inconsistent behaviour which depends on how that value was produced, i.e. now everyone has to remember this little glitch, at least if there's a chance they might pass this threshold with their calculations. It seems to me that it's better to be in control of the type really.
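Rust puts that choice in the programmer's hands: you opt in to wrapping, detect the overflow, or move to floats, each spelled out. A small sketch:

```rust
fn main() {
    let max = i64::MAX;
    assert_eq!(max.wrapping_add(1), i64::MIN); // opt in to wrapping
    assert_eq!(max.checked_add(1), None);      // or detect the overflow
    let f = max as f64 + 1.0;                  // or convert explicitly
    assert!(f > 0.0);                          // no surprise wrap-around
}
```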

Regarding x++: for a pointer-based language like C it makes sense, as it compiles straight down to machine-code operations (the 68000 especially had auto-increment built in). But Rust is not pointer-based, because pointers are really dangerous unless constrained. The equivalent with a slice would be p = &p[1..], for example; it has to be written another way anyway to be safe. So I don't miss it. You don't need it that often, and even when you do, it's just a little boilerplate in the name of safety – and the safety is worth it.
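The slice version of the idiom, for what it's worth:

```rust
fn main() {
    let data = [10, 20, 30];
    let mut p: &[i32] = &data;
    let mut seen = Vec::new();
    // `p = &p[1..]` plays the role of `p++`, but stays in bounds:
    while let Some(&first) = p.first() {
        seen.push(first);
        p = &p[1..];
    }
    assert_eq!(seen, vec![10, 20, 30]);
}
```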


If you end up adding an integer to a float, you should stop and check how you got there. This is not a normal situation, so it should be painful. Only after the appropriate checks have been performed, and you are really sure that a conversion is justified, should you add the explicit conversion.

P. S. Unary increment/decrement operator is EVIL. It is a target for abusing and a source of logical bugs.


Possibly related: