Is 'unsafe' code a good thing?

What do you mean by "single algorithm"? One could compose any arbitrarily huge and complex single algorithm from every other algorithm one has.

I'm not totally unfamiliar with Turing and Gödel and the like. But no expert by any means.

I don't want to buy into such an argument. It rests on the assumption that "humans aren't algorithms", which is not proven, if it even could be. It flies in the face of physics: computers and humans alike are just particles blindly following the laws of physics. It also drags in the slippery ideas of free will vs. determinism.

I'm not suggesting such a checker is, in general, possible. I am resisting the notion that humans are any better at it than algorithms could be.

As a practical matter we don't have such a checker because:

  1. We have not figured out how to make such a checker algorithm, never mind whether it is actually possible.

  2. Even if we did, it would take an unacceptably long time to do its job.

What is meant here is that although you can make it as huge, complex, and composed from whatever you want, you have to make a choice about which algorithm you are working with. In other words, if you pick a specific algorithm, then there is some Rust file that it will not be able to verify. On the other hand, there is no Rust file that every algorithm fails to verify.

One place where this comes into play is randomness. These theorems assume that the algorithm is fully deterministic, and unless you hard-code some sort of pseudo-randomness seed in the algorithm, you have not picked a "specific algorithm" in the sense used here. (If this seed has a bounded size, you can just try every seed. You are allowed to combine as many algorithms as you want after all, as long as there are finitely many.)
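The "combine finitely many algorithms" move can be made concrete. In the sketch below, `check_with_seed` is a hypothetical stand-in for a deterministic checker parameterized by a seed (its internal rule is made up for illustration); accepting whenever *any* of finitely many seeds accepts is still one single deterministic algorithm.

```rust
// Hypothetical deterministic checker, parameterized by a seed.
// The rule here is a toy stand-in, not a real verifier.
fn check_with_seed(seed: u64, input: &str) -> bool {
    input.len() as u64 % (seed + 2) == 0
}

// Trying every seed up to a bound is itself one deterministic algorithm.
fn combined_check(input: &str, max_seed: u64) -> bool {
    (0..=max_seed).any(|seed| check_with_seed(seed, input))
}

fn main() {
    println!("{}", combined_check("abcd", 1)); // true: 4 is divisible by 2
    println!("{}", combined_check("abcde", 1)); // false: 5 is divisible by neither 2 nor 3
}
```

The point is only that bounded nondeterminism buys nothing: the combined checker is still "a specific algorithm" in the theorem's sense, so the diagonal argument still applies to it.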

As for humans: if you want to apply these theorems to us, you run into the issue of a "single algorithm" again. You can't just say "humans", because that is not a specific, fully deterministic algorithm, and the theorem only applies once you have chosen one. If we assume that the universe is deterministic (and that humans eventually go extinct), we can make some argument where our single algorithm loops through every nanosecond in the history of the universe, and for each instant, loops through every person alive at that moment and sees what they would do. However, this runs into trouble with non-obvious assumptions: about physics, the continuity of time, and the bounded lifespan of humans; the fact that we would somehow have to verify the answers given by humans, since many of them will give the wrong answer; and figuring out what to do if every human gives up trying to solve it.

The above is not an argument that humans are better than algorithms; rather, it is an argument that the halting problem does not guarantee that humans are as bad as computers are.


I find it just as unproven and debatable to say we are nothing but particles without souls as it is to say we "humans aren't algorithms". So it's a draw - two competing unprovable philosophies, leaving us with nothing but our free will to help us decide which to choose. :slight_smile:

A very successful programming language inventor once said "Design and programming are human activities; forget that and all is lost". The bottom line seems to be that there are things very difficult for algorithms (designed by humans) to do that are easy for humans, and vice versa. The beauty of Rust is that I don't have to use my human mind as a blunt instrument trying to track pointer lifetimes, and can instead use Rust as a tool for that and use my mind for something it is better suited for. (In my case, I'm using Rust for particle simulation.)

Circling back to the original post - personally I wouldn't use a library that requires me to use unsafe because that defeats my purpose for using Rust in the first place. I want Rust to prove the memory safety, not me.


Ha! Touché.

I invoke Occam's razor. You are introducing the concept of a "soul": something nobody has any evidence of existing, and which would have to have some as-yet-unobserved effect on physical matter.

Back on topic. Totally with you there. I'm all for the computer doing all that bookkeeping grunt work. That is why we created computers, is it not?

With the caveat that just now, for me, it feels like I spend more time trying to convince the borrow checker to approve of my code than I'm saving. But the payoff of course is that the borrow checker is far more likely to be right than I am :slight_smile:


I am so tempted to point out how Occam's razor results in circular reasoning in this case. And provide the evidence. But if I did that, mods would probably object about "off topic" or something. So I will abstain. :slight_smile:

You and I are in the same boat here, in that satisfying the borrow checker seems to take a long time. Which is concerning to me. I first went through the Rust book a year ago, and have been programming sporadically in Rust in my spare time since then. From your previous posts it seems like 1) you have been doing Rust a little longer than me, 2) you use it for your day job on a regular basis, and 3) you are really smart. And yet, like me, you are still struggling to feel productive.

I'm still hoping for and looking for that breakthrough where I "get" the Rust way of doing things and am comfortable enough with it that I feel more productive. When I started using Python (20 years ago) I felt an immediate productivity boost compared to C++ and Java. But now, using Python on larger projects with larger teams, Python isn't feeling as productive as it used to be. I recently spent a major part of a code review pointing out to a fellow developer that the function he wrote gets called with a parameter of the wrong type. He claimed otherwise, then I had to prove to him that function A calls function B, which calls function C with a parameter of the wrong type that originated in function A. At that point I wondered, where did my Python productivity boost go? I'm ready to have a compiler start checking the code again. Others have noticed this problem and have started bolting on optional static type checking to Python. I'm trying Rust out as an experiment to see if it is a viable option for creating reliable components that have good performance and memory usage - but I can't recommend it to my team until I start feeling productive with it! How long is that generally taking people? I'm interested in hearing people's stories. (Maybe that's a new topic.)
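That A-calls-B-calls-C situation can be sketched in Rust to show what the compiler buys you; all the names here are hypothetical, just illustrating the anecdote:

```rust
// The C end of the chain expects a numeric id.
fn function_c(user_id: u64) -> String {
    format!("user-{user_id}")
}

// B just forwards the parameter along.
fn function_b(user_id: u64) -> String {
    function_c(user_id)
}

// A originates the value. Passing the wrong type anywhere in the
// chain is a compile error, not a code-review argument:
fn function_a() -> String {
    let user_id: u64 = 42;
    // function_b("42"); // error[E0308]: mismatched types
    function_b(user_id)
}

fn main() {
    println!("{}", function_a()); // prints "user-42"
}
```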


Feeling more productive and actually being more productive are two quite distinct things. I still have a lot of back-and-forth with the compiler, but I’ve stopped being shocked when compiling code works correctly as soon as it compiles: I don’t necessarily get the emotional hit of feeling productive, but objectively I seem to be getting more done.


The whole discussion of Rust productivity may call for its own thread.

Well, productivity is not about a single programmer churning out a thousand lines of code in a day and then being proud of his amazing productivity.

To my mind, when you think of productivity you have to think of the whole software life cycle: designing it, writing it, documenting it, deploying it. Then come all the bug reports to field as it keels over in use or spits out random results occasionally over however many years it lives. Then what about future feature enhancements: how easy are they to put into place, and how many more bugs do they introduce?

And likely that is not just one programmer but a team. A team whose membership may change over time.

I would not read too much into my experience. I started looking at Rust a year ago. I have by no means been at it full time, far from it. I have been diverted by hardware design, and then we still have software in Python and JavaScript around here.

However, what I put in place last year has been spinning along with no issues: a Rocket web server with a RESTful API, a web socket server, a decoder of endless streams of horrible proprietary binary protocol, and interfaces to NATS messaging and CockroachDB. No mysterious random outputs, no crashing in the night with memory exhaustion. No wasted weeks trying to find some obscure memory or race condition issue.

That right there is your productivity. It does not cause problems and waste your time after you have written it :slight_smile:


Your experience of "When it finally compiles, it just works without problems and is (comparatively) easy to refactor should the need arise" is common among Rustaceans. Which brings us back to the subject of this thread: "Is unsafe code a good thing?"

For me that answer has multiple parts:

  1. unsafe code is necessary to implement many of the safe foundational abstractions that Rust offers within std and a few other crates. IMO this is a good thing.
  2. unsafe code is necessary to work at the bare-metal level in embedded systems or, for crypto, to avoid timing side-channels. IMO this also is a good thing.
  3. unsafe code is necessary to interface to all those other languages that are inherently unsafe (e.g., C, C++). IMO this is unavoidable; such interfaces are just an extension of the unsafety of those other languages.
  4. Judicious use of unsafe code is sometimes called for to improve critical "hot paths" in high-traffic or time-critical code. IMO this is unfortunate but understandable.
  5. unsafe code is often used in what amounts to premature optimization. IMO this is ill-advised and completely avoidable.
  6. unsafe code is often used in attempts to circumvent the borrow checker, often – though not always – resulting in UB. IMO this is usually [edited from "completely"] avoidable.
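A classic instance of item 6 is wanting two simultaneous mutable views into one slice. Reaching for raw pointers there risks UB, but std already packages the needed `unsafe` correctly behind a safe API, `split_at_mut`. A minimal sketch (the function name is illustrative):

```rust
// Two mutable views into one slice, with no `unsafe` in our code.
fn bump_halves(buf: &mut [i32]) {
    // The naive attempt is rejected by the borrow checker:
    // let a = &mut buf[..3];
    // let b = &mut buf[3..]; // error[E0499]: second mutable borrow

    // `split_at_mut` hands back provably disjoint halves.
    let mid = buf.len() / 2;
    let (a, b) = buf.split_at_mut(mid);
    a[0] += 100;
    b[0] += 100;
}

fn main() {
    let mut buf = [1, 2, 3, 4, 5, 6];
    bump_halves(&mut buf);
    println!("{buf:?}"); // prints "[101, 2, 3, 104, 5, 6]"
}
```

The design point: the one `unsafe` block lives inside std, audited once, and everyone downstream gets the safe interface.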

One way of avoiding unsafe code might be to make use of RwLock, which I've found very helpful when the borrow checker feels too restrictive.
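As a toy sketch of that pattern (my example, not from the thread): `RwLock` lets many readers share the data concurrently while a writer gets exclusive access, all in safe code.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn demo() -> (i32, usize) {
    let data = Arc::new(RwLock::new(vec![1, 2, 3]));

    // Writer takes the exclusive lock, mutates, and releases it.
    data.write().unwrap().push(4);

    // Multiple readers can then hold the read lock concurrently.
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.read().unwrap().iter().sum::<i32>())
        })
        .collect();

    let total: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    let len = data.read().unwrap().len();
    (total, len) // (20, 4): each reader sums 1+2+3+4 = 10
}

fn main() {
    let (total, len) = demo();
    println!("readers saw total {total}, vec len {len}");
}
```

Note that `RwLock` trades compile-time borrow checking for run-time checking (a misuse panics or deadlocks instead of failing to compile), so it is a pragmatic escape hatch rather than a free lunch.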
