On open/closed-world logic analogy to describe unsafe/safe

I feel quite flattered to have been nominated for last week's TWiR quote of the week, with several likes, and even chosen! Thank you all, especially @alice. :slight_smile: I saw several forum regulars like the quote (sorry, I peeped), but there was no discussion of the analogy itself. So I'm opening a new thread here instead of littering the TWiR quote of the week thread because I'm curious if others have more to express than a like. Here it is again for context:

There's a common trope among people unfamiliar with Rust where they assume that if you use unsafe at all, then it's just as unsafe as C and Rust provided no benefit. Comparing C's approach to safety vs Rust's is like comparing an open world assumption to a closed world assumption in formal logic systems [1]. In C, you publish your API if it's possible to use correctly (open world). In Rust, you publish a safe API if it's impossible to use incorrectly (closed world). Rust's key innovation here is that it enables you to build a 'bridge' from open world (unsafe) to a closed world (safe), a seemingly impossible feat that feels like somehow pairwise reducing an uncountable infinity with a countable infinity. Rust's decision to design an analogous closed-world assumption for safe code is extremely powerful, but it seems very hard for old school C programmers to wrap their head around it.
[1]: https://en.wikipedia.org/wiki/Open-world_assumption
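The "bridge" in the quote can be sketched in a few lines. This is a toy of my own (the `NonEmpty` type is hypothetical, not from any real crate): the invariant is checked once at the safe boundary, so the unsafe code inside can never be misused by callers.

```rust
// A safe API over unsafe internals: the constructor validates the invariant
// (closed world), so the unchecked access inside `first` is always sound.
struct NonEmpty<'a>(&'a [i32]);

impl<'a> NonEmpty<'a> {
    fn new(s: &'a [i32]) -> Option<Self> {
        if s.is_empty() { None } else { Some(NonEmpty(s)) }
    }

    fn first(&self) -> i32 {
        // SAFETY: `new` rejected empty slices, so index 0 is in bounds.
        unsafe { *self.0.get_unchecked(0) }
    }
}

fn main() {
    let data = [3, 1, 4];
    let ne = NonEmpty::new(&data).unwrap();
    println!("{}", ne.first()); // prints 3
    assert!(NonEmpty::new(&[]).is_none());
}
```

No caller can reach the unsafe block with an empty slice, which is the whole point: the unsoundness is contained behind an API that is impossible to use incorrectly.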

Is this just a cute quip or is there some potential explanatory power that could still be extracted? Is this analogy congruent on the maths/logic side?

I thought it worked pretty well because aiui logicians notoriously dislike switching between the different assumption modes because it's hard/confusing/annoying (though I don't remember where I got this idea, so feel free to [citation needed] me on this). Notably, that reddit post had a large 48-comment response thread where the commenter was unable to perceive the open-world mindset they got from C, which is exactly what my comment describes. I found this unreasonably amusing (and restrained myself, with great effort, from adding a characterization of reddit commenters, lest I implicate myself), but also a little sad that I still failed to communicate the idea. Is there a better way to put this?

Rereading it a week later, I'm not sure the infinities part hit right. I think it does an ok job of giving the feeling of "impossible-sounding change in degree that becomes a change in kind" if you know about different kinds of infinities, but it doesn't help anyone that's not already familiar, and may be distractingly outlandish even for those that are.

Anyways that's already way too many of my words about my words.

Anyone else?


Yeah, I don't really know what you're talking about with the infinities, but the observation that Rust is all about building interfaces that are impossible to misuse, including for code that uses unsafe, is exactly on the mark.


I understood your analogy to mean that Rust provides a way to reduce the cognitive complexity of writing highly-efficient memory-safe code, especially concurrently-executing code. Without Rust that complexity is asymptotically exponential in some measure of the number of "moving parts"; with Rust it becomes more linear in a much-reduced set of "moving parts". In essence, Rust reduces such tasks from something beyond human capability to something that is feasible for a moderately skilled / experienced programmer.


That was the key summary line for me, and the open world / closed world analogy seems like a useful one. I know some skeptics, so finding ways to perhaps explain how Rust is safer even with unsafe is nice.

Incidentally, it's not just some subset of C veterans. I've encountered some articles pretty skeptical / critical of the tech industry generally calling out Rust under this premise. I've also encountered a number of people who feel betrayed(?) once they find out about unsafe, or that their specific bug wasn't prevented by safe Rust, or that Rust isn't a panacea for all bugs.


We like to think there are an infinite number of integers: 0, 1, 2, 3, 4....

Also that there are an infinite number of real numbers, you know, the ones with fractional parts, including favourites like π, φ, e, etc.

One might think that's the end of the story, but it turns out the set of real numbers has more members than the set of integers. That infinity of reals is bigger than the infinity of integers.

How so? We can imagine creating a list of all the reals. And we could label each of the reals in the list with an integer. Imagine an array of reals with the index as the label.

What we are doing is pairwise associating every member of the set of reals with a member from the set of integers.

Turns out this is not possible: no matter how many integers we have, say an infinite number, one can always create a new real that is not in the list. Ergo the infinities are of different sizes. Arguably it's not just a difference in quantity but a difference in quality.

See Cantor's diagonal argument: Cantor's diagonal argument - Wikipedia
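The diagonal construction above can be sketched on finite lists of binary digits (a toy of my own, purely to illustrate the mechanism):

```rust
// Cantor's trick in miniature: given any list of binary expansions, build a
// new one that differs from the i-th entry at digit i, so it can't be in
// the list, no matter how the list was labelled.
fn diagonal(list: &[Vec<u8>]) -> Vec<u8> {
    list.iter()
        .enumerate()
        .map(|(i, row)| 1 - row[i]) // flip the i-th digit of the i-th row
        .collect()
}

fn main() {
    let list = vec![
        vec![0, 1, 0],
        vec![1, 1, 1],
        vec![0, 0, 0],
    ];
    let d = diagonal(&list);
    // The new sequence differs from every row at some position.
    assert!(list.iter().all(|row| *row != d));
    println!("{:?}", d); // [1, 0, 1]
}
```

The real argument applies this to an infinite list of infinite expansions, but the flip-the-diagonal step is exactly the same.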

So I guess infogulch was just likening the possibility of doing something in Rust to the impossibility of counting all the reals.

Yeah, I don't understand it either.


I know Cantor's diagonal argument — it was more of a way to say that I don't think it has anything to do with the open-world vs closed-world stuff.


Yeah, I don't get it either.


I love how you wrote all that up just to conclude, nope, don't get it.


Which part? I think I have a handle on Cantor. Still struggling with the open/closed world stuff though.

Isn't Gödel's incompleteness more relevant? What we want is correct programs. It seems to be impossible to prove that our programs are correct in general. Rust helps a bit. Other languages, not so much.

Gödel doesn't stop us from proving that things are impossible to misuse. What it says instead is that if all programs that the Rust compiler accepts are memory safe, then there must also be some memory safe program that it rejects.


Hmm, what?

I take that to mean that of the set of all possible memory safe Rust programs, a Rust compiler can never be created that is capable of declaring all of them as memory safe. No matter how sophisticated we try to make the borrow checker.

I presume there must be more than one such program Rust cannot prove safe.

That sounds terrible: we can never make a Rust compiler that works properly! No matter how we tweak the borrow checker or tinker with the syntax and semantics.

I'm pretty sure we can easily construct programs with multiple mutable borrows of some data and prove to ourselves that it is memory safe even if the Rust compiler cannot. In fact a lot of questions here from those new to Rust are about that, "I know my program is OK, why doesn't the compiler?".
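A concrete instance of this (my own sketch, not from the thread): two disjoint mutable borrows of one array are fine at runtime, but the borrow checker can't see the disjointness through plain indexing. `split_at_mut`, which uses unsafe internally, restores the proof.

```rust
fn main() {
    let mut v = [1, 2, 3, 4];

    // Rejected by the borrow checker, even though the halves are disjoint:
    // let (a, b) = (&mut v[..2], &mut v[2..]); // error: second mutable borrow

    // The accepted rewrite: `split_at_mut` wraps the same pattern in a safe
    // API that the compiler trusts.
    let (a, b) = v.split_at_mut(2);
    a[0] += b[0];
    println!("{:?}", v); // [4, 2, 3, 4]
}
```

"I know my program is OK, why doesn't the compiler?" is exactly the gap between those two versions.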

That makes sense just from practical considerations of the difficulty of creating a compiler that can fully analyse a program, all its possible behaviours, and the endless amount of time it might take.

I'm not sure how Gödel proves this. But never mind.

It's a bit more math-heavy than I would normally use when talking to programmers or software engineers, but I think your quote really resonates with Rust's core values.

Instead of using the infinities analogy, I would probably say that Rust's unsafe takes the square peg that is open-world and the round hole that is closed-world, and bridges this seemingly impossible situation using an adapter like this:

[Image: 3D part containing a loft from a square cross-section to a round one]

Don't feel too bad!

The way you communicate a concept is a function of your experience and the way your mind works, and sometimes that just doesn't match up with another person's experiences and mind. Think of all the times when you've tried to learn something and been confused, and it only clicks when you approach it from a different angle or read a different resource.

This gets even harder when the topic challenges something the person believes because you now need to actively change someone's mind, rather than just filling a knowledge gap.


What Gödel says (via the Curry–Howard correspondence) is that (at least) one of these propositions must be true:

  1. There are sound programs that the Rust compiler nevertheless rejects
  2. There are unsound programs that the Rust compiler nevertheless accepts.

(1) is obviously true; that's why we need the unsafe escape hatch. (2) is also true because there are bugs in the compiler (plus the escape hatch). But Gödel says that a theoretical perfect compiler that accepts no unsound programs must also reject some valid programs. Or, if we imagine a compiler that accepts all valid programs (one that could prove the soundness of vastly more complex lifetime scenarios than the current borrow checker), we must resign ourselves to the fact that such a compiler would also accept some programs that are actually unsound (even if no escape hatches are used).


For Rust, clearly the preferred alternative is your first proposition. That's why RustBelt, MIRI, and RUDRA are so important; RustBelt to provide the theoretical underpinnings, and MIRI and RUDRA to find aliasing and lifetime violations in unsafe code.


I don't think the fact that Rust has and/or needs an 'unsafe' hatch because of its limited borrow checker makes (1) obviously true. At least not if we allow for some super duper 3500 edition of Rust with the theoretical best possible borrow checker.

I think we should think about these things without bugs in the compiler or the "unsafe" hatch. After all, with those in the picture anything can happen. Surely mathematicians don't (knowingly) rely on bugs in their proofs to prove things.

Here is the part that disturbs me.

If we reject building or using a compiler that accepts unsound programs and go with one that will reject some sound programs, then we are forever reliant on an "unsafe" hatch to get some things done and dependent on humans to look at it and say "it's good".

BUT if whatever goes on in a human brain is logically equivalent to whatever a computer can do, then we have just accepted that the human will accept unsound programs as well.

Or we are off into some world of mysticism where the human mind has some magic powers to determine soundness that our mechanical logic does not.

In this case, Gödel tells us that there are some programs that are

  • syntactically valid, and
  • do not invoke undefined behavior if run, but
  • are rejected by the compiler.

This does not mean that those programs uniquely fill gaps in the space of useful programs. Gödel does not say that we can't rewrite those programs into programs which produce the same output, or at least fulfill the same purpose, and are accepted by the compiler.

It might be easier to see this by looking at other languages. In Rust, we have the borrow checker to statically prove that certain programs can't dereference invalid pointers. In Java (here an example of a GC-based language I'm familiar with), instead, the garbage collector ensures at run time that no invalid pointers are ever dereferenced (outside of FFI) by keeping their referents valid while the pointers exist. This comes at the price of not being able to write many programs that do interesting things with pointers (e.g. there is no equivalent of a slice reference in Java), but that doesn't mean that you can't write programs to satisfy arbitrary (non-performance) requirements in Java.

Coming back to Rust, the borrow checker isn't a fan of doubly-linked lists, and that means you can't write a node with bare forward and backward references using safe code, but that doesn't mean you can't represent a data structure that lets you have a handle to a node and walk forwards and backwards — you just need a different internal representation.
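One common "different internal representation" is an arena of nodes where the links are indices rather than references. This is a minimal sketch of that idea (the `Node`/`List` names are mine, and it omits removal to stay short):

```rust
// A safe doubly-linked list: nodes live in a Vec and the "pointers" are
// indices, so there is no aliasing for the borrow checker to object to.
struct Node<T> {
    value: T,
    prev: Option<usize>,
    next: Option<usize>,
}

struct List<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None, tail: None }
    }

    // Append a value and return its index (the "handle" to the node).
    fn push_back(&mut self, value: T) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        if let Some(t) = self.tail {
            self.nodes[t].next = Some(idx);
        } else {
            self.head = Some(idx);
        }
        self.tail = Some(idx);
        idx
    }
}

fn main() {
    let mut list = List::new();
    list.push_back("a");
    let b = list.push_back("b");
    list.push_back("c");
    // Walk backwards from the middle node via its index handle.
    let prev = list.nodes[b].prev.unwrap();
    println!("{}", list.nodes[prev].value); // a
}
```

Same interface to the user — a handle you can walk in both directions — but a representation the compiler can accept without unsafe.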

The choice is not between unsafe and not being able to get things done (except for FFI); the choice is between unsafe and having to redesign your program. We see this every day on the forum: “solve your borrow checker problem by rewriting this so it's more natural Rust”.

(The reason for the FFI caveat is because talking to C is fundamentally equivalent to using unsafe, since C has no memory-safety enforcement. It still doesn't constrain “write a program that serves this purpose” very much — for example, if you talk to a C program running in a separate process over a unix-domain-socket or other IPC channel, its unsafety can't affect your program and you can still cooperate with it — unless your purpose very specifically constrains the implementation, like “write a kernel driver”.)

There is an infinite space of programs that are correct (for some requirement) or interesting (by some property) but which the entire past and future timeline of humanity will never think of, just as there is an infinite space of mathematical theorems for which we will never find proofs, or even think of. But that doesn't mean we can't write other programs.

(It's also the case that we're frequently wrong about a program being sound, or a theorem being true. This doesn't affect the previous paragraph but it seems worth pointing out — Gödel puts no strong constraints on the abilities of a program analyzer that's wrong sometimes.)


Might I suggest the Veritasium video Math Has a Fatal Flaw? It explains Gödel's incompleteness theorem on a much more gradual and thorough learning curve than we can here.

At the very least, it drives home that nontrivial questions about anything Turing Complete must by definition be undecidable in the general case. (And that the halting problem is one expression of that fundamental undecidability.)

The concept of Turing Completeness itself arose from Turing's investigations into Gödel's claims.


Similarly, it's possible for an analyzer to have a three-state output:

  • Definitely correct
  • Definitely incorrect
  • Inconclusive

All Gödel says is that this third category cannot be empty. It might, however, consist solely of programs that no reasonable programmer would write.
