I'm not sure I'm proposing anything, really. That quote from my old boss comes from 1982 or so. It was a very different world: slow processors, no float hardware (at least not on the systems I worked on), small memories, etc.
I have thought about that quote over the decades. It is the case that at that time, on that system, for the problems we had (it was a military, three-dimensional phased-array radar system, by the way), floating point was not actually required. I have found that to be true on many projects since.
I think what it is really about is that one has to understand the problem one is trying to solve. What number ranges are involved, what accuracy do you really need, what accuracy do you actually have, what is an efficient way of storing and processing it all? And so on. Versus a tendency to brush all those problems under the carpet by using floats. Which works fine for most people most of the time nowadays, costs almost nothing in performance, and memory is copious.
Fixed point is still a rational data type rather than an integral one for this purpose. A fixed-point number is just an integer with an odd unit: that unit carries through addition and subtraction, and through multiplication and division by unitless pure-count integers, but not through multiplication or division between two fixed-point numbers (even with the same scale). So it's a rational data format, not just an integer with a weird unit.
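To make that concrete, here is a minimal sketch of Q16.16 fixed point (the constant and function names are illustrative, not from any crate): addition is plain integer addition, but multiplying two fixed-point numbers doubles the scale factor, so the product must be shifted back down.

```rust
// Q16.16 fixed point: value = raw / 2^16.
const FRAC_BITS: u32 = 16;

fn to_fixed(x: f64) -> i32 {
    (x * (1u64 << FRAC_BITS) as f64) as i32
}

fn to_float(x: i32) -> f64 {
    x as f64 / (1u64 << FRAC_BITS) as f64
}

// Addition/subtraction: ordinary integer ops; the unit carries through.
fn fx_add(a: i32, b: i32) -> i32 {
    a + b
}

// Multiplication between two fixed-point numbers: the raw product has
// unit 1/2^32, so it must be rescaled back to 1/2^16.
fn fx_mul(a: i32, b: i32) -> i32 {
    ((a as i64 * b as i64) >> FRAC_BITS) as i32
}

fn main() {
    let a = to_fixed(1.5);
    let b = to_fixed(2.25);
    assert_eq!(to_float(fx_add(a, b)), 3.75);
    assert_eq!(to_float(fx_mul(a, b)), 3.375);
    println!("ok");
}
```

The rescale in `fx_mul` is exactly the step that distinguishes fixed point from a plain integer with a funny unit.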
We agree on this point, actually — fixed point is often a better choice (and perhaps more often than not), but floating point is a very convenient format that works "well enough" for all but specialized cases.
If you're doing anything to the data other than storage, you need to track significant figures / error bounds separately anyway.
It's a convenience thing, honestly quite similar to floating point versus fixed point. The sign can be attached as part of the type (perhaps implicit), or it can be part of the data. I agree that (uNN, bool) is very very rarely what you want.
... two comparisons, just like with signed types? e.g. given n: u32, 273 <= n && n <= 373. If reducing to a single comparison, you want wrapping math and unsigned comparison, not signed math. e.g. n.wrapping_sub(273) <= 100. (The optimizer will certainly do this transformation for you if it's beneficial.)
All sensors I've ever worked directly with output an unsigned integer, where it's an interpolation between whatever the min and max possible readings are. I've used a couple behind an SDK, and every SDK I've used converts the reading to a floating point value in standard units.
I don't doubt some exist where they report in two's complement, but I've not used any. Though I've only used a single-digit number of sensors, to be fair.
The size types are supposed to have enough bits to represent a pointer. Depending on your memory mapping, some pointers might correspond to negative numbers if you use a signed type. The article "Rust's Unsafe Pointer Types Need An Overhaul" (Faultlore) explains conversions between pointers and integers and why Rust's pointer model might need to be overhauled.
Rust constrains allocations to at most isize::MAX bytes, and it doesn't make sense to compare pointers into different objects, so this isn't really an issue: any offset you need fits into either type.
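A small sketch of that guarantee in practice: within a single allocation, the difference between two pointers always fits in an isize, which is why `offset_from` can return one.

```rust
fn main() {
    let buf = [0u8; 1024];
    let start = buf.as_ptr();
    // Pointer one past the end of the same allocation.
    let end = unsafe { start.add(buf.len()) };
    // offset_from returns isize; because allocations are capped at
    // isize::MAX bytes, the difference within one object cannot overflow.
    let diff = unsafe { end.offset_from(start) };
    assert_eq!(diff, 1024);
    assert_eq!(diff as usize, buf.len());
    println!("ok");
}
```

The same value round-trips cleanly between usize and isize here, which is the "fits into either type" point above.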