How would you define correctness and its importance in software design?

I come from a lax land of garbage-collected, interpreted freedom. The term correctness comes up frequently in the Rust community. It is a new term to me, and I wanted to survey some definitions.

How would you define this term, especially with regard to working in Rust? I'm curious about your individual understanding of the concept and its significance to designing and building software.

I've thought about correctness as akin to proofs and this Hacker News answer by jchw substantiates that.


Correctness is just that: the software behaves correctly (i.e. does what it's supposed to do). One aspect of "doing what you're supposed to do" is not doing what's forbidden, and that's where Rust provides a big benefit: namely, under most circumstances, it prevents you from doing a class of forbidden things. This class is nowhere near covering all or even most possible software bugs, but it includes a very big set of bugs that are very common in other programming languages and often lead to catastrophic results.

Rust won't write correct software for you and won't solve all your bugs, but it can prevent you from falling into pits that you wouldn't even notice with C or C++.




I'm going to ignore the correctness of one's algorithm/implementation. If you get that wrong, there is not much a language, its compiler, or its runtime can do, as far as I know. If you used a "+" where you meant to use a "-", how is the compiler to know that is wrong? Yes, there are ways to do formal proofs of correctness and such, but I'm going to put those on another, higher level of abstraction.

When it comes to correctness in a programming language, let's start with the simple and obvious notion that instructions/expressions/commands etc. in the language should do what they are supposed to do or, if they cannot, produce a meaningful error/failure message. What a language should not do is crash outright without any indication of the problem. Worse still, it should not silently produce an incorrect result and continue running.


When I write "x = x + 1", then after execution x had better be 1 bigger than it was before. Not some incorrect random other number, not some overflow/wraparound result. If it overflows, execution should halt and say so.
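Rust gets reasonably close to this ideal by default: in debug builds, integer overflow panics rather than silently wrapping, and the checked arithmetic methods report overflow explicitly in every build. A minimal sketch:

```rust
fn main() {
    // In a debug build, `x + 1` here would panic with
    // "attempt to add with overflow" instead of silently wrapping to 0.
    let x: u8 = 255;

    // Checked arithmetic makes the failure mode explicit in all builds:
    assert_eq!(x.checked_add(1), None);           // overflow reported, not hidden
    assert_eq!(200u8.checked_add(55), Some(255)); // in-range addition succeeds
}
```

Wrapping, saturating, and overflowing variants also exist, so when wraparound really is the intended behavior you have to say so explicitly.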

Similarly for all the other arithmetic and logical operators.

When I write "x = a[i]", then after execution x had better be the value of the i'th element of a. If there is no i'th element, x should not be some other random number, and the program should not just crash without giving a meaningful reason why.
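This is exactly how Rust's slice indexing behaves: an out-of-bounds index panics with a message naming the length and the offending index, and the `.get()` method offers a non-panicking alternative. A small sketch:

```rust
fn main() {
    let a = [10, 20, 30];

    // `a[5]` would panic with a meaningful message:
    // "index out of bounds: the len is 3 but the index is 5"

    // `.get()` returns an Option instead of a random value:
    assert_eq!(a.get(1), Some(&20));
    assert_eq!(a.get(5), None);
}
```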

When I dereference some pointer-like thing, x = *p, then after execution x had better be the value of whatever p references, AND that thing had better have been properly defined prior to execution. Again, not some other random number, and no crashing blindly.

Similarly dereferencing NULL pointers and the like had better fail in a meaningful way.

Conversely, when writing through a dereferenced pointer, some defined object in the program had better be written to, not some random location in memory that has no meaning. That is an action that may crash my program or, worse still, silently corrupt my data.
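Rust's references give exactly this guarantee in safe code: a reference is statically known to be non-null, aligned, and pointing at a live object, so a write through it cannot land in a meaningless location. A minimal sketch:

```rust
fn main() {
    let mut x = 0u32;
    let r = &mut x;  // guaranteed valid, non-null, and exclusive by the compiler
    *r = 42;         // writes to a defined object; no wild stores possible here
    assert_eq!(x, 42);

    // Raw pointers still exist, but dereferencing one requires an `unsafe`
    // block, which clearly marks the spot where these guarantees end.
}
```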

I could continue, but you probably get the idea by now. Then of course there are all the issues around memory leaks, resource leaks and sharing data between concurrent processes.

Now, as simple and obvious as this idea of correctness is, most programming languages in use today do not implement it. Which if you think about it for a second is an appalling, intolerable situation.

Languages like Java, C#, and such go some way toward achieving this correctness, at a huge cost in performance, timing determinism, memory requirements, and other problems.

The first programming language I learned was ALGOL, a compiled language that enforced such correctness. Since then, the only languages I know of that do so are Ada and now Rust.

The biggest joke of all is C++. A hugely complex and sophisticated language system, with its classes, inheritance, generics, and now lambdas, closures, type deduction, etc., etc., etc. Developed over decades. Yet at its core it is built on C, with all the same possibilities of incorrectness through and through. It's like having built a gigantic modern tower block on the rickety foundations of my grandfather's shed. Well, it would be a funny joke if it were not so tragically sad and disastrous.

In short, correctness in a programming language is simply: do the correct thing or halt with a meaningful reason why not.

No, I do not accept that documenting all such undefined behaviors in a language specification makes the resulting incorrectness in execution the programmer's fault. We do our best to check for user errors in the operation of application software; why not view a compiler as an application in the same way and check for user (programmer) errors as best we can?

And this is why I'm here. Having had to deal with many languages over many years Rust is the first one since Ada that tackles the correctness problems whilst still being able to produce small and fast executables in the way C/C++ do. I can see this taking off in the embedded systems and IoT world where correctness, security, performance and size are all important.


Correctness: Never giving an incorrect response. (Classic inversion :upside_down_face: )

Write software that always succeeds, never fails. Which means you don't tell the user "Hmm. We’re having trouble finding that site." --Firefox (Also indecisiveness is bad.)

What you want, ideally by default, is a safety net, so that the (Jr. Dev. :wink:) broken code gets found quickly, or at least does not propagate. Rust is reasonably good at this in comparison, with compile-time checks and various panics.

p.s. Life safety critical is a whole nother level.


Correctness means just that: it does the correct thing. I would put the focus on “enforcing correctness” rather than just the word “correctness”, though. Rust is all about enforcing that things are correct.

In some cases Rust requires a human to look closely at a module to ensure it is correct. However, once the human has verified that this module is correct, the compiler can suddenly make guarantees everywhere else that module is used. This ability to avoid having to do global reasoning is also an important part of the correctness concept.

Enforcing correct usage of user libraries

A good example of a place the compiler enforces correctness is the typestate pattern, which is where you use separate types to encode the current state of a value and self-consuming methods to move between these states. The classic example is the builder pattern, but it can be more complex.

When using this pattern, you can make it impossible to incorrectly use the value, e.g. by using the value after converting it to a new state, or by calling methods you shouldn't call in the current state. These mistakes will be caught at compile time.
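A minimal sketch of the typestate pattern described above, using a hypothetical connection type (the names here are illustrative, not from any real library). The state lives in a type parameter, and the state transition consumes `self`, so misuse is a compile error rather than a runtime bug:

```rust
use std::marker::PhantomData;

// Hypothetical connection with its state encoded in the type system.
struct Disconnected;
struct Connected;

struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Disconnected> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }
    // Consumes the disconnected value; the old state cannot be reused.
    fn connect(self) -> Conn<Connected> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Connected> {
    // Only callable once we are in the Connected state.
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for real I/O
    }
}

fn main() {
    let conn = Conn::new().connect();
    assert_eq!(conn.send("hello"), 5);

    // Both of these are compile-time errors, not runtime surprises:
    // Conn::new().send("hi");                 // no `send` in Disconnected state
    // let c = Conn::new(); c.connect(); c.connect(); // use of moved value
}
```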

Restricting valid values of a type

Another nice example is how libraries can make guarantees that others can rely on. For example, you can restrict the allowed values of some type; see e.g. String and NonZeroU32. The NonZeroU32 type is defined (via a macro) essentially like this:

struct NonZeroU32(u32);

In principle you could put a zero in there. However, the thing is that the u32 field is not public, so you can only do that if the library-provided methods allow you to. In this case there are three methods:

/// The value must not be zero.
pub unsafe fn new_unchecked(n: u32) -> NonZeroU32;
pub fn new(n: u32) -> Option<NonZeroU32>;
pub fn get(self) -> u32;

This is a very small amount of code, and we can easily manually confirm that it is impossible to put a zero inside it from outside the module (without unsafe code).

This is important. When I write some code, it is guaranteed that the value I got in the NonZeroU32 is not a zero. I can write code that relies on this and have a correct program. I can write unsafe code that relies on this and have a correct program. In this case we need a human to look at the NonZeroU32 module code to confirm it is correct, but once we have done this, the compiler guarantees that the invariant is upheld everywhere else.
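A quick sketch of what relying on the invariant looks like in practice. The compiler itself exploits it too: since 0 can never be a valid value, it is free to represent `None`, so `Option<NonZeroU32>` costs no extra space:

```rust
use std::num::NonZeroU32;

fn main() {
    // The invariant pays off at the type level: 0 is free to encode `None`,
    // so Option<NonZeroU32> is the same size as a plain u32.
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU32>>(),
        std::mem::size_of::<u32>()
    );

    // Constructing via the checked `new`:
    let n = NonZeroU32::new(7).expect("7 is non-zero");
    assert_eq!(100 / n.get(), 14); // this division can never hit zero
    assert!(NonZeroU32::new(0).is_none());
}
```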

Unsafe code

When we write unsafe code in Rust, we tell the compiler to turn off some correctness checks, and promise to confirm them ourselves. For example, using unsafe code you could use the new_unchecked method (or transmute) to create a NonZeroU32 containing a zero.

However the thing is: When you write the word unsafe, you introduce an area that needs human checking. You are still not allowed to do wrong things, and the unsafe word is used to clearly mark that this is a spot that needs extra scrutiny.

You might think it looks like the compiler isn't enforcing correctness properly, and in some sense it isn't. But note that you don't need unsafe code to break this. You could also just make a mistake in the new function above and put a zero inside in some cases. That would involve no unsafe code, but still result in incorrectness.
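To illustrate, here is a hypothetical toy module (not from the standard library) with the same shape as NonZeroU32. A plain logic bug in safe code inside the module silently breaks the invariant that everything outside the module relies on:

```rust
mod even {
    // Intended invariant: the inner value is always even.
    pub struct Even(u32);

    impl Even {
        pub fn new(n: u32) -> Option<Even> {
            if n % 2 == 0 { Some(Even(n)) } else { None }
        }
        // Bug in safe code inside the module: should add 2, not 1.
        pub fn bump(self) -> Even {
            Even(self.0 + 1)
        }
        pub fn get(&self) -> u32 {
            self.0
        }
    }
}

fn main() {
    let e = even::Even::new(4).unwrap().bump();
    assert_eq!(e.get(), 5); // invariant broken, and no `unsafe` in sight
}
```

This is why the human check is scoped to the module: only code inside it can break the invariant, so only code inside it needs that scrutiny.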

What's going on here is that in Rust we accept that correctness cannot always be enforced. In some cases a human needs to check. The important part is that because of the structure of Rust, these manual checks are local, and the guarantees confirmed in these local instances can be relied on everywhere else.

Other languages

Of course, other languages also allow you to have stuff guaranteed by the compiler. The important part is how easy it is to make these guarantees, and how many of them you can make.


Correctness, afaik, boils down to not allowing an accidental step into UB land. That means the components of the language (type system, compiler) are UB-free. Rust is pretty good at providing guarantees, though the semantics of the Rust abstract machine have not yet been fully defined and formally proven; that is a work in progress. The other complication is when FFI comes into play, and polishing that set of guarantees is one of the things the UCG (Unsafe Code Guidelines) working group is focusing on.


I have no idea what you mean by that. I defined what I meant by the term in my post. Sometimes one has to start by stating the obvious.

Quite the opposite to my mind. Far better to say "Sorry Dave, I can't do that" than to silently return the wrong result or just crash and burn. Where, in this case, "Dave" is the programmer, not the user of the finished program.

Yes, call it a "safety net" if you want.

Not just for junior devs though, for everyone. In my experience even skilled "senior" devs make mistakes that end up being very costly.

Indeed it is. Which is why we use Ada on flight control systems. But that kind of correctness in behavior of the language is only the start of trying to ensure correctness of the finished program.

Rust is a far better start in that regard than most other languages I have used.