What counts as undefined behavior

How about we keep the 257th byte value internally for Rust to mark dead code, but do not assume that uninitialized data has value 256? Instead, let's assume its value is in the range 0-255 but is unspecified.

As for your code example, here's its spec:

// precondition: 0 <= x < MAX_INT
// postcondition: true
int set(int x) {
    int a;
    a = x;
}

// precondition: 0 <= y < MAX_INT
// postcondition: 0 <= \result <= MAX_INT
int add(int y) {
    int a;
    // assert 0 <= a < MAX_INT
    return a + y;
}

int main() {
    int sum;
    // assert 0 <= sum < MAX_INT
    sum = add(3);
    // assert 0 <= sum < MAX_INT
    printf("%d\n", sum); // the spec says nothing about this being 5
}

The compiler is free to put whatever value it likes in any of the a's. The axioms of Hoare logic do not allow me to deduce anything interesting about sum.

I don't see why. The uninitialized variable is not shared between threads. The compiler should not modify the value of an uninitialized variable after it has been read, because otherwise that would violate the empty-statement axiom of Hoare logic. This value is not shared in any way.

int u; // uninitialized data
int x = 3; // here the compiler could optimize by reusing memory location u to write the value of x
int y = x + 5; 
f(u); // here u is read, and from now on the compiler cannot reuse u to save memory or overwrite it in any way

Moreover, any code you could possibly imagine that has uninitialized data could just as well be replaced with a call to a rand() function or something like that, and then compiled "as usual". If you can compile code with rand(), then you can remove rand() and leave the data uninitialized. The postcondition of rand() is that the value can be anything (let's also assume we do not call rand() on things like bool or fn or enums, but only use it on datatypes for which all binary representations are valid values, like u32). Then any proof of correctness that worked with rand() will also work with uninitialized data.

And there are two reasons why this is important. First is that we are actually using it all the time.

let buffer:*mut u8 = allocate();
let length = unix_read(buffer, "some/file/path"); // system call
print(buffer, length); // oops, formally (according to the formalism that uses a 257th byte value) this is uninitialized, isn't it? How can the compiler know what unix_read did?

The second reason is for the sake of consistency with formal reasoning. After all, if you care about optimizations more than logic, then why don't we compile all programs to just

int main() {
    return 0;
}
This code is blazingly fast. If we don't care about having decent formal foundations, then we might as well throw out all rules and ignore what the programmer writes.

By the way, here

I was broadly referring to code that can be proved correct according to Hoare-like formal system of reasoning. To prove things you need to deduce postconditions based on axioms and preconditions of each statement. The predicate like "was x written to" is meaningless in such proofs.

Assuming it has any particular value is painful, though, since it requires that said value be consistent, and thus that the compiler preserve it. Given a = undef;, that doesn't mean that a - a is zero. See https://llvm.org/docs/LangRef.html#undefined-values for more.


The compiler is free to modify an uninitialized value and do any optimization that it could with undef, as long as the value is not read. After it has been read, the value has to be preserved just like for any other variable. It's not painful. It just becomes a variable like any other. Reading an uninitialized variable is like initializing it with a random value.

So what? Why should Hoare logic hold for undefined variables, but nothing else should?

Yes, Hoare logic is not enough to reason about that crazycode… but hey, I can change your preconditions and postconditions:

// precondition: 0 <= x < MAX_INT
// postcondition: x is stashed on the stack
int set(int x) {
    int a;
    a = x;
}

// precondition: 0 <= y < MAX_INT; x is stashed on the stack.
// postcondition: x + y returned
int add(int y) {
    int a;
    return a + y;
}

int main() {
    int sum;
    sum = add(3);
    printf("%d\n", sum); // the spec says this would be 5
}

Compiler should preserve that semantic.

I have shown why.

Because the compiler doesn't know anything about that syscall, it would assume everything is initialized. Or it may know that unix_read initializes the first length elements.

In both cases there is no UB and everything works.

Because this code wouldn't be correct for the programs which don't trigger UB. Such programs exist.

We do care about these. Any program which doesn't trigger UB should work according to the specification, program which does trigger UB can do anything.

Why? Just do what the compiler does and you can easily reason about results. Just ensure that undefined value is never read.

Yes, it's not impossible. But this complicates logic and you still haven't explained what you want to achieve.

“Let's replace these rules which I don't like with other rules that I like” is not a good enough justification by itself.


I really don't understand how you can say stuff like this, and then go on to say "if we ignore the formal foundations that Rust actually uses, then it usually still works fine", as you are doing here:


Again, this is actually a thing which you can do --- you can freeze an undefined value into an arbitrary but consistent value.

As much as it might not be immediately obvious, inserting an implicit freeze on each memory load is not free. This is something that much thought has been put into, and despite the fact that it causes "miscompilations" of "obviously correct" programs, no optimizing compiler treats pointer loads as freezing the loaded bytes.

The paper I just linked is the one which first introduced the concept of freeze as a way to tame undef into an arbitrary-but-consistent value. I suggest reading it, or at least looking up other discussions of the freeze operation.


Please show me a formal spec of the Rust language. This is what C's formal spec looks like

Rust's formal spec is rustc. That's why I'm asking you guys. You know the internals of rustc much better than me. I'm just trying to imagine what a formal spec of rust would look like if there was one and I think I found a hole to poke in it.

Rust definitely supports Hoare logic for all programs that don't trigger UB.

The question is why should it do anything for some programs which trigger UB while simultaneously it's Ok for it to ignore some other programs (let's call my example crazycode).

Why should it be allowed to break crazycode but not your programs?

Rust does not have a spec that is complete, but it isn't incomplete in ways that matter for this discussion. MiniRust is one such incomplete spec, and you can find it here. It defines abstract bytes here, and it talks about whether values are valid here.

From what I've seen so far, I do not agree at all that you're poking at an actual hole.


In a sense, Miri could be considered a partial spec of the language, in the form of an interpreter that either executes the program if it doesn't have UB, or eventually throws an error if it does have UB. Of course, it's far from perfect, since it disallows many syscalls and doesn't detect certain cases of UB that the compiler exploits. But its validity model (similar to MiniRust's) is entirely sufficient for reasoning about initialized vs. uninitialized values.


I think we are overconcentrating on one small example which is basically “compiler vs Hoare logic”.

Maybe it would be better to discuss “hardware vs Hoare logic”?

Note this:

One platform which exhibits UB in hardware, believe it or not, is the venerable IBM 5150.

Specifically the original model, which used an 8086 and, optionally, an 8087.

It had a very interesting property: the 8087 was a coprocessor, but it was attached directly to RAM and not to the CPU itself.

Which means that if you do type punning and write something like:

int foo(float x, float y) {
  float res = x / y;
  // Some-other-code
  res = x * y;
  // Some-other-code
  return *((int*)&res);
}
Then the result would be truly unpredictable: the calculation of x / y takes some time, and // Some-other-code also takes some time, but that depends on the timings of DRAM. Thus it may easily happen that on one system you would get x / y while on other systems it would be x * y (and you may even end up with a mix of these).

If you do what the spec asks you to do and rewrite it like this:

int foo(float x, float y) {
  float res = x / y;
  int intres;
  // Some-other-code
  res = x * y;
  // Some-other-code
  memcpy(&intres, &res, sizeof(int));
  return intres;
}
Then the compiler would insert a wait before the memcpy and the result would become deterministic.

Do you want to say that the fact that the result of the original code contains UB (and is truly unpredictable at the hardware level) means we should fill the code with waits, and that if we didn't, we would lose the ability to reason about our programs?

Note that if you read res more than once, you may end up with the crazy result that res <= 120 || res == 120 || res >= 120 is false, without any “help” from the compiler. Simply because that's how the hardware behaves.

Should we declare such hardware platforms illegal or what should we do?


You seem to be under the impression that volatile reads and writes form a meaningful synchronization primitive. That is false. Any attempt to use volatile operations to provide thread safety guarantees should be considered undefined behavior. You can't do it in C, and you definitely can't do it in Rust. Volatile provides no guarantees as to how non-volatile instructions are ordered, nor does it provide guarantees as to the atomicity of the operations themselves.

Pre-C11, you could get away with it, because the C virtual machine was single-threaded and compilers were kind enough to pretend volatile was atomic; but it was undefined behavior then, and since C11, which created the threading model we all know and love, it is explicitly undefined behavior. If you need synchronization, use atomic operations, not volatile ones.


Your mistake is assuming a trivial, direct translation from Rust source code to machine code.

It's not the case that reading memory in Rust always corresponds to reading bytes from RAM in the hardware. It's more complicated than that.

You cannot so easily separate what happens "at runtime" from what happens "at compile time". The compiler does a lot of things at compile time that in the naive translation would happen at runtime, including "reading memory".

The fact that there are no invalid bytes in the hardware RAM doesn't help because reading memory in Rust doesn't always involve the hardware RAM.


Are you saying Rust does not follow Hoare logic? What does it follow then?

You would have to formally define what "stashed on the stack" means. The formal spec of C, for example, does not give any guarantees about the stack, hence you can't use it for reasoning like that.

I'm sorry, I think I missed it. Could you elaborate?

So I can write a foreign function call trust_me_bro(), call it every time after declaring an uninitialized variable, and this way use uninitialized memory? Then the Rust compiler actually already supports handling of uninitialized memory. So why not make it part of the spec explicitly, in that case?

Correct according to what formal system? Hoare logic? With what axioms? Miri and MiniRust seem interesting, though. I see it's written in Rust (not as great as Coq, which explicitly states all definitions), but I suppose it's better than nothing.

I care too, so let's care together about exactly what should be treated as UB and what should not.

Suppose I have a Hoare triple

{x was written to; x is int} y = x+3 {P}

what can you tell me about postcondition P?
Or look at this

{x was written to; x=3 ; x is int} y = x+3 {P}

clearly then P must be y=6. But the precondition "x was written to" is useless, because I already know that x=3, so obviously it was initialized. It's just like the Any type, which was not added to Haskell because it's similarly useless.

I want Rust to have less undefined behavior. Isn't that exactly why you guys like Rust over C? Because it supposedly has less undefined behavior than C?

That's what I'm arguing about. I'm telling you that this should not be UB and there is no reason for making it UB. Though @CAD97's point is the most convincing argument I've heard so far.

As for hardware UB

I'm not a hardware expert and I do not know what the solution to this problem should be, but I'd say that the job of high-level languages is to find some formalism that abstracts away the technical details and allows the programmer to reason logically about program behavior. If you want to say that reading uninitialized memory would somehow make it impossible to control hardware, then fine. You win. But as far as I can see, there is the freeze mentioned by @CAD97, so it seems to be doable, though maybe at some performance cost. I suppose that's a fair justification.

I was saying that the uninitialized memory is not shared between threads. Why would it be? It's just a variable like any other, and the borrow checker makes sure there are no race conditions. You say that allowing reading uninitialized memory would imply that all variables are volatile. I don't see why. Especially since there is freeze, which works just as I would like all uninitialized reads to work by default.

If we go by "formal spec, and that's enough justification", then your whole jihad immediately becomes pointless: the formal specs for C/C++ and the informal one for Rust all say that an attempt to read uninitialized memory is UB, end of story, there is nothing to discuss.

For the purposes of our discussion it can be defined. We know how C compilers worked 40 years ago; we can invent a way to define it. Whether that's a good idea or not is another question.

I will repeat once more, it's not hard:

  • If you say “the formal spec says it's an invalid program and that's enough”, then we don't have anything to discuss: reading an undefined value is UB, which means the program is no longer valid, which means we cannot reason about it, case closed.
  • If you say “the formal spec doesn't say anything about what happens if you read uninitialized memory, but we know how things actually behave”, then you have to explain why this exact reasoning doesn't make my crazycode valid.

Indeed. The rule is: any program which doesn't exhibit UB should work. And obviously, if you call a function or syscall which the compiler can't see into, the compiler should reason conservatively.

It would work until someone enabled WPO or LTO, of course.

Because you don't need any justification to do nothing. Have you ever seen anyone discuss questions like “why are we not building this bridge” or “why are we not sending $1 to everyone on Earth once a year (just $8 billion, not that much)”?

Status quo doesn't need any justifications. Changes to status quo need justifications. I'm yet to see any.

Hoare logic, sure. You just have to ensure that your code doesn't ever try to read an “undefined” value. That's, essentially, what Miri does.

In Rust that's defined as either two's-complement wrapping addition, or y is x + 3, or a panic! if the result is larger than MAX_INT. The final choice depends on compilation options.

Rust really tries hard to ensure that there are no UBs in safe Rust.

In unsafe Rust, UB is permitted, but attempts are made to ensure it can be reasoned about, so that you can actually write code which avoids UB.

It may be useless for Haskell, but it allows the Rust compiler to optimize code better; that's why it was added to Rust. Well, it was added to C, and then to C++, and from there it arrived in Rust, but the idea is still valid.

Haskell doesn't care enough about performance to do something similar, I guess.

No. The idea wasn't to reduce UB simply because we can. If that was the goal then it's possible to eliminate UB completely (like Java and many other languages did). But that makes certain things impossible and that's why Rust picked different road.

Rather than try to eliminate as many UBs as possible, the idea was to control the amount of unsafe code where UB is possible, and to ensure that this doesn't make it impossible to write correct programs (as happens with C/C++, where practically every existing program includes tons of UB and works simply by accident).

You haven't demonstrated why your proposal would make it easier to write code while simultaneously avoiding UB.

Tracking reads from undefined memory is relatively easy in Rust. Easier than in C/C++.

It's already UB. There is no need to justify that (although there were some explanations, those are for past decisions).

To change the status quo, a simple desire to do so is not enough. More or less by definition.

Rust found it. It's called unsafe. If you write normal code, then you are guaranteed that there will be no UB. But if you need something more peculiar, yes, you have to read the rules.

No. I'm saying that if we allowed the reasoning (which you employed) of “the spec doesn't allow that, but we know how things work in reality” (which would have justified removal of the 257th possibility, because hardware only allows a byte to have 256 values), then all variables would become volatile (out of the necessity to allow things like crazycode).

And if the reasoning is not “spec doesn't allow that, but we know how things work in reality”, then I want to see it.

I've asked about a half-dozen times why you would need to change the spec, and the answer was “why the heck not”. That's not a good answer.

And if we reject the “but we know how things work in reality” reasoning, then there is literally nothing left!


To elaborate on this more: every local variable read in Rust is emitted as a read from memory. LLVM then translates that to registers where possible.

So "only optimize when it's not read from memory" is "don't optimize", in practice.


Sorry, you brought up threads when someone mentioned volatile, which I took to mean that you believed volatile had any meaningful constraints in the face of multiple threads.

Anyway, it's worth noting that there currently isn't a freeze operation in Rust. The only way to freeze uninitialized memory at the moment is to write a known value to it, at which point it becomes frozen to that value, unless it later gets deinitialized again.

The big purpose of letting uninitialized memory be anything is optimization. The compiler can and does use uninitialized memory as free space it can do anything to. Losing that ability would cut down the compiler's ability to optimize your code.


By the way, this is not undefined behavior. This is nondeterministic behavior. The function can return either the result of x/y or x*y, but it won't crash or burn or anything like that, right? Undefined behavior is a matter of the function's domain of definition.

If you want to see what truly undefined behavior in "hardware" would look like, consider a Turing machine with a tape that has a beginning but no end. There are two ways to define such a machine. If you are at the beginning of the tape and go back, (1) you stay in place, or (2) the result is undefined. The second approach is an example of UB in "hardware", because the operation of "going back" is a transition, and transitions are defined by the transition function. This function is defined on the domain of tape "addresses" being natural numbers. So going back is like asking for a natural number less than 0.

In other words, UB is a matter of definitions and preconditions. And I'm arguing that (1) reading uninitialized data could be made into a defined operation, and (2) the current definition of the invariant "uninitialized data cannot be read" is a tautology in this formalism of 256-valued bytes that Rust's abstract machine proposes.

It's also easy to show how those affect optimizations. To use BF syntax: under (2) you can always optimize <> to nothing. But under (1), <> might be nothing or might be the same as >, so you can't optimize it.

(In either case you can optimize >< to nothing, since the right end is infinite.)


If you used res in a few different expressions, it may crash and burn. Especially if you do something like

    *((int*)&res) &= 0xF;

Yes. It's possible. But I'm yet to hear why it's desirable.

Rust doesn't mandate 256-valued bytes. Neither in the abstract Rust machine nor in the actual implementation.

Miri's bytes, for example, are not 256-valued.

CHERI bytes are not 256-valued either (they don't help us deal with uninitialized values, but with pointers; still, they are not 256-valued, and that's important).

I have no idea where you have pulled that idea about 256-valued bytes.