There is the problem that an i64 can't be represented exactly within an f64 (it only has 53 bits of mantissa), nor an f64 within an i64, so you really do need to be aware of which type you're using, and the limitations of that type for arithmetic.
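To make that concrete, here's a quick Rust sketch of both directions of the mismatch (Rust rather than Lua just so the types are explicit; the specific values are my own illustration):

```rust
fn main() {
    // Above 2^53, f64 can no longer represent every integer,
    // so a round-trip through f64 silently loses precision:
    let n: i64 = (1i64 << 53) + 1; // 9007199254740993
    let back = n as f64 as i64;
    assert_ne!(n, back); // back is 9007199254740992

    // And a fractional f64 obviously can't survive a trip through i64:
    let x = 2.5_f64;
    assert_eq!(x as i64, 2); // truncated toward zero
}
```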
Lua used to be just f64, which is at least consistent, but then 5.3 added a hidden i64 representation as an optimization. So if you're only ever using integers, it stays on integers, but it converts otherwise. And guess what: if you add one to the largest integer, it wraps, but if the value had been converted to f64 at some point, it doesn't. So:
    > 0x7FFFFFFFFFFFFFFF + 1
    -9223372036854775808
    > 0x7FFFFFFFFFFFFFFF + 1.1
    9.2233720368548e+18
So now there is inconsistent behaviour that depends on how a value was produced, i.e. everyone has to remember this little glitch, at least if there's a chance their calculations might cross that threshold. It seems to me that it's better to be in control of the type.
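For contrast, in a language where the integer/float split is explicit, both behaviours are spelled out at the call site rather than chosen silently by the runtime. A Rust sketch of the same two operations (note that plain `+` on i64::MAX would panic in a debug build, so even the wrap is opt-in):

```rust
fn main() {
    let max = i64::MAX; // 0x7FFF_FFFF_FFFF_FFFF

    // Integer path: wrapping must be requested explicitly...
    assert_eq!(max.wrapping_add(1), i64::MIN);
    // ...or the overflow can be detected instead of wrapped:
    assert_eq!(max.checked_add(1), None);

    // Float path: the conversion to f64 is a visible cast, not a hidden one.
    let f = max as f64 + 1.1;
    assert!(f > 9.2e18); // no wrap, just f64 rounding
}
```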
As for x++: in a pointer-based language like C it makes sense, as it compiles straight down to machine-code operations (the 68000 especially had auto-increment addressing built in). But Rust is not pointer-based, because pointers are really dangerous unless constrained. The equivalent with a slice would be p = &p[1..], for example, i.e. it has to be written another way anyway to be safe. So I don't miss it. You don't need it so much anyway, and even when you do, it's just a little bit of boilerplate in the name of safety, and the safety is worth it.
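The slice idiom above looks like this in full (the names are mine, just for illustration); it's the safe analogue of advancing a C pointer through an array:

```rust
fn main() {
    let data = [10, 20, 30];
    let mut p: &[i32] = &data;

    // Re-borrow the slice starting one element later each time:
    // the moral equivalent of `p++`, but bounds-checked, so it
    // can't walk off the end of the array.
    while let Some(first) = p.first() {
        println!("{first}");
        p = &p[1..];
    }
    assert!(p.is_empty());
}
```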