Soundness of transmuting immutable references to mutable when the underlying data is truly immutable

I have no idea what you mean by this. The Rust examples are safe code.

1 Like

This has a reason. Wikipedia has both sides in one sentence:

North American traffic engineers first introduced this rule as a fuel savings measure in the 1970s, despite detrimental effects to the safety of pedestrians.

This is the kind of thing people want to see when they ask why a UB rule exists.

This is my guess. It could also be that Rust didn't want to commit to exactly noalias, in case LLVM changed things or a more restrictive instruction was introduced.


The unsafe operation is (for example) transmuting an & to &mut. The rules say that is UB all by itself, even if nothing is ever done with the result. A more complicated set of rules could (for example, I think) state that the conversion itself is not UB, but using the result in various ways is UB, without restricting optimisations.

Of course this change does restrict optimisations. Imagine something like EMS, where the first access to a region of memory is slow but subsequent accesses are fast for some time. In that case it would make sense to access the value periodically to keep it “hot”. In fact, it's easy to imagine something similar with a strange eager-swapout Linux module, too.

In that case “creation of an invalid &mut is not UB by itself” makes these optimizations impossible.

Almost any change to the UB list makes certain optimizations impossible. The question is always “do we care about such strange optimizations or not”.

And here I would lift a page from Bjarne Stroustrup's book. He justified a change to the rules in C++ compared to C (you cannot use case to bypass initialization of a variable, even if said variable is just a simple int) as “adding more complicated rules to the language just to ensure that developers would continue to be able to write incorrect programs doesn't look like a sensible thing”.

Here the situation is the same: “creation of an invalid &mut reference is instant UB” is a simple rule, easy to understand and verify. You are proposing to replace it with something complicated and convoluted… why? What programs do you plan to enable with that change in the rules?

I am proposing no such thing! However, I don't know the reasoning behind the current rules, and to some extent find them a little puzzling (they don't entirely fit my personal mental model of how computers and memory work; this mental model may of course be a bad one!).

@scottmcm's first example shows how guaranteeing that a mut reference is unique enables optimizations even if you never use that reference:
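The quoted example itself is not reproduced in this excerpt; below is a minimal sketch in its spirit (the function shape and body are my reconstruction; only the names `r` and `b` come from the surrounding commentary):

```rust
// Hedged sketch: `r` is created on every path but written through only
// when `b` is true. Its guaranteed uniqueness means nothing else may
// touch `*x` while `r` is live, so the compiler can reason about `*x`
// even on the `b == false` path, where `r` goes entirely unused.
pub fn demo(x: &mut i32, b: bool) -> i32 {
    let before = *x;
    let r = &mut *x; // unique reborrow, created unconditionally
    if b {
        *r += 1;
    }
    *x // may be folded to `before` (or `before + 1`) without a reload
}

fn main() {
    let mut a = 5;
    assert_eq!(demo(&mut a, false), 5);
    assert_eq!(demo(&mut a, true), 6);
}
```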

This optimization relies on r being unique even if you never use it (because b is false).

1 Like

I don't know what the original reasoning was, but the "mutable XOR shared" rule makes it easy to ensure that references don't accidentally become invalidated. For example changing a Some(x) value to None would invalidate any references to the inner x value. There can't be any references around to invalidate if the one you have is guaranteed to be unique, after all.
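As a concrete sketch of that invalidation scenario (the `Option<String>` pairing is my choice for illustration):

```rust
fn main() {
    let mut opt = Some(String::from("hello"));
    let inner: &String = opt.as_ref().unwrap(); // shared reference into opt
    // opt = None; // rejected by the borrow checker: would invalidate `inner`
    assert_eq!(inner, "hello");
    opt = None; // fine once `inner` is no longer used
    assert!(opt.is_none());
}
```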

The logical step from that to forbidding even creating a unique reference from a shared one is quite short, as I see it. Especially if simplicity is taken into account. Besides, it's a better starting point than a more fine-grained rule that turns out not to cover some edge cases.

1 Like

It's hard to say anything without you saying where in your model the compiler resides.

The critical part of any model where UB may be discussed is the compiler. For some reason lots of otherwise knowledgeable “we code to the hardware” people like to pretend that only the language and the hardware matter. But that's wrong!

The compiler is a completely mindless (it actually has no mind), crazy (there is no sanity module), and patently insane (no “sanity” module is linked in) agent.

Imagine a foaming-at-the-mouth, bloodshot-eyed, crazy and stupid bull running around your arena. That's the only type of compiler we may ever produce.

Your only recourse is to map behaviors that shouldn't happen in your program after compilation to hardware (double frees, data races, etc.) to behaviors that shouldn't happen in the source code. And then these thin yet sturdy limitations keep the rage of that mighty yet crazy animal from destroying your program.

And if you look at what everyone else says, it's all about the compiler.

UB is not defined by the hardware or by your ideas about how your program works. UB is defined by the need to make sure that the crazy, insane, loony and yet very energetic compiler acts as an agent of good, not an agent of destruction!

And you don't even mention that critical component in the description of your “mental model”… that sounds wrong to me.

1 Like

Back to the start of our discussion. Note that the answer in that language we all love to hate, C++, is different: if you have a pointer to something immutable and you convert it to a pointer to a mutable entity and then pass it to a function that doesn't actually mutate it… then it's all good! It's allowed by the C++ standard!

How well does that work in practice, with a real compiler, though? Well, look for yourself:

#include <cstddef>
#include <cstdlib>
#include <cstring>

void replace(char *str, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (str[i] == '/') {
            str[i] = '_';
        }
    }
}

const char *global_str = "the quick brown fox jumps over the lazy dog";

int main(int argc, char **argv) {
  const char *str = argc > 1 ? argv[1] : global_str;
  replace(const_cast<char *>(str), std::strlen(str));
  return EXIT_SUCCESS;
}
The compiler (not clang this time, but icc) is all too happy to destroy such a program, even though it's actually within what the standard allows, because that arena is too complicated and convoluted for this particular “bull”, and thus it smashes right through its walls.

P.S. And yes, it's awfully close to the optimizations discussed here for Rust with &mut, only such optimizations are not, actually, allowed by the C++ standard… yet compilers do them anyway… they are just too tempting.

1 Like

Ok, in my mental model, it should be ok to "upgrade" an immutable reference to a mutable one, provided it is unique (that is, there are no existing references to the same object, and there will never be any such references). But (as I understand it) the current rule says this is not ok, it is "instant UB", and I am not sure why.

I have no idea if this helps make this feel more satisfying for you, but UB finally clicked for me when I figured out it was related to the distinction between a language and a language implementation, and in Rust particularly, what unsafe actually means. (I'm going to be a bit wordy about basic things, so bear with me for a bit ...)

A language is an abstract mapping from source text to a program, that is, a description of how to behave independent of any specific hardware. Notably, the mapping is partial: not all text is a valid program!

A language implementation takes a source text and (attempts to!) create a binary that implements the corresponding program for a specific platform.

The job of the language is to find a balance between easy to express what behavior you want for the programmer, and easy to implement efficiently for the implementation.

When we're talking about implementations, there's a lot of "what does this source text do on the platform" because it's easier to think about concrete things, but really, that's skipping a step: source text describes a behavior, and the compiler maps that behavior to the platform.

In (presumably correctly implemented) safe Rust, this distinction is pretty straightforward and can be ignored: any source text is either mapped to some predictable output, or the implementation rejects it as not being a valid program.

The difficulty comes with unsafe: this doesn't mean that "you might do something the language doesn't like", so much as "you might write something that literally isn't defined by the language", in other words, that there simply is no mapping from that source text to a behavior, and therefore that an implementation can't produce a binary that implements that behavior.

In this view, asking why something is Undefined Behavior is always trivially answerable with "because the behavior wasn't defined", and while the obvious follow up to that pedantic answer is "why not", there's not really much to say in most cases other than "why should it?". Unless it makes the job of the programmer or implementation easier, making the language bigger by defining more input text as valid doesn't actually improve anything.

In this specific case, converting a shared to a unique reference without buy-in from the referenced type (via UnsafeCell) makes the implementation's job unreasonably hard. Allowing it only when it's "obviously safe" would make it almost entirely useless for the programmer (e.g. because that means it's entirely local, and therefore you could just get rid of the shared reference), or else the actual rules would be far too complicated for a human to get right... and we're talking about unsafe code, which is already pretty unreasonably hard to get right even with very simple rules like "no duplicate unique refs".
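The "buy in" mentioned here is UnsafeCell. As a sketch of what that opt-in permits, Cell (a safe wrapper over UnsafeCell) makes mutation through a shared reference fully defined:

```rust
use std::cell::Cell;

// Because Cell is built on UnsafeCell, the language permits writes
// through a shared &Cell<i32>; no &mut is ever created.
fn bump(counter: &Cell<i32>) {
    counter.set(counter.get() + 1);
}

fn main() {
    let c = Cell::new(0);
    bump(&c);
    bump(&c);
    assert_eq!(c.get(), 2);
}
```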


I found an interesting discussion here, this is part of it:

RalfJung commented on Sep 3, 2022:

The mental model we teach is an overapproximation of the actual rules, which are still open. The rules proposed by Stacked Borrows already rule out some unsafe code patterns that are frequently found in the wild, meaning Rust programmers expect both less strict rules and even more strict rules. There is a fundamental tension here and we will end up meeting somewhere in the middle.

1 Like

Sure, check out tree borrows for what I suspect is at least somewhat a follow-up to that thread, and certainly these rules are more intuitive and nicer than stacked borrows: but they're also talking about local analysis, where it makes sense to exchange more complicated rules for more ergonomic code.

Specifically, shared-to-unique conversion only makes sense as a global analysis (or at least an escape analysis), which, when you're talking about correctness, is not desirable for anyone involved, honestly.


Which is obviously impossible. What happens if you call transmute to convert a shared reference into a unique one?

  1. A shared reference is Copy, which means that as a first step you are creating another shared reference, and it's that copy of the original reference that is passed into the transmute function.
  2. And transmute produces a new unique reference, which exists simultaneously with the original shared reference.
  3. You couldn't say that “one reference has died” and “another is born”, because lifetime elision rules don't work like that: the reference that was pushed into transmute and the reference that came back have the exact same lifetime, and yet they are unrelated (the whole point of transmute is to convert between two unrelated types; if they were related you could just assign).

That's an obvious, extra-blatant and egregious violation of the rules about how shared and mutable references work.

And yet you claim that there wouldn't be any problems… why? Because you know that transmute is not a regular function, and you know that it doesn't work like a regular function, and that in the generated code there wouldn't be any issues?

But that's exactly what I'm talking about: you are ignoring “the elephant in the room” (or that proverbial “crazy bull in the arena” I talked about before) and go straight from the meaning in your head to the machine code you expect to see.

Compilers don't work with ideas in your head, and compiler writers don't have a crystal ball which allows them to glance at the code you may (or may not) write in the future.

Yeah, and the “we code for the hardware” guys usually say that it's nonsense: if we would just map source code to machine code, then it would become “immediately obvious” how that “presumably undefined” construct behaves… but that puts the cart before the horse: we need a definition of behavior because we have no idea whether our mapping is correct or not, and you want us to use a mapping that doesn't yet exist to define behavior? How would that work? That would make any and all optimizations impossible! After all, one may transmute a pointer to a function into a pointer to u8 and then examine the generated code.

At this point they usually accept the idea that the compiler may miscompile “stupid code”… and this sends us back to square one, because it's not clear how “stupid code” and “code with UB” differ. The craziest solution that I ever heard in such a discussion was to give the programmer the means to define what “stupid code” means, which, to me, sounds beyond crazy: we couldn't even make people accept one definition of “well-behaving code”, and now every piece of code would have to define its own, and then somehow convince others to accept that definition when they deal with that code? To me that sounds like a sick joke, but maybe if you do some heavy drugs this may even sound sane to you…

Depending on the situation it can be encapsulated so that uniqueness is guaranteed.

How would that work?

Consider the following program:

fn non_transmute(x: &i32) -> &mut i32 {
    let y = unsafe { core::mem::transmute(x) };
    return y;
}

Which is short form for this:

fn non_transmute<'a>(x: &'a i32) -> &'a mut i32 {
    let y: &'a mut i32 = unsafe { core::mem::transmute(x) };
    return y;
}

Is it sound? No: x and y have types &'a i32 and &'a mut i32, which means that we have incompatible references coexisting in our program.

Now, let's rename non_transmute to transmute and now we have the exact same problem in the original call to transmute, too!

You may object that such a reading of the rules would mean that we couldn't even pass a &mut i8 and get back a &mut u8 reference, which would be very strange, but here's the thing: transmute is semantically equivalent to a bitwise move of one type into another; it copies the bits from the source value into the destination value, then forgets the original. That means it follows the usual semantics of a move between two values: reborrow rules, variance rules and other such rules are in effect, and they relax the strict rule that unique references couldn't coexist.

But there are no such rules for unrelated types… and &'a i32 and &'a mut i32 are unrelated!

You don't need any special rules to make transmute from shared reference to mutable reference (or back) UB, on the contrary, you need special rules for transmute from &'a mut i32 to &'a mut u32 not to be UB!
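As a sketch of the sanctioned direction (using the &'a mut i32 to &'a mut u32 pair from above; the helper name is mine):

```rust
// Sound under the bitwise-move reading of transmute: i32 and u32 have
// identical size and alignment, and the &mut is moved, not duplicated.
fn as_unsigned(x: &mut i32) -> &mut u32 {
    unsafe { core::mem::transmute(x) }
}

fn main() {
    let mut v: i32 = -1;
    assert_eq!(*as_unsigned(&mut v), u32::MAX); // same bits, reinterpreted
}
```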

You may express astonishment as to why such “obviously correct” rules were not added to the language to solve the problem that started the whole discussion, but here the answer is obvious, too: this would make the language more complicated and wouldn't enable anything interesting anyway. If someone takes &mut self but doesn't then modify it, then according to semantic versioning that means said someone reserves the right to start modifying said object at any patch-level update of that crate.

That means the only way to use such a crate is to either vendor it into your code or fork it… and if you are doing that, then you may as well just change the signature of the offending method, which makes the whole exercise moot.

This is clear, thanks for spelling it out!

One thing that could be adding to the confusion is the term "instant UB". It gives the impression (at least to me) that, if the unsafe construct were allowed, something bad would happen instantly in the program itself. While in reality (in my simplistic interpretation) it means the language implementation cannot define/guarantee the behavior from that point onward, because there is no defined mapping to a valid program.


(Member of T-opsem, but speaking solely for myself, explicitly not for the team)

FWIW, while this is generally the case, an explicit goal of T-opsem is actually the opposite, and to improve things a bit here. Quoting myself,

Obviously this is just an ideal, and we can't really strictly enumerate all possible language UB and justify each one individually, and sometimes the justification is just "we want the flexibility."

In this specific case, the reason for immediate UB operationally comes from the abstract memory/borrow model (e.g. Stacked Borrows or Tree Borrows), and more specifically, from the "protector" operations done in the function prolog/epilog. These operations exist to protect code motion optimizations.

Or to rephrase, unlike with pointers, you can never create a mut reference and "only use it like a shared reference" (even if said "use" is merely to immediately discard it) because the mere act of moving the &mut T is a different operation not supported by &T.

actively misleading pedantry

Unfortunately, Gankra's "you cannot transmute &T to &mut, ever, no you are not special" is not strictly true anymore. With the advent of pinned self-referential futures, it's almost necessary that there exists a type where &T and &mut T are identical. The "almost" is that you can still make an argument distinguishing between "outer" and "inner" references. Both SB and TB treat "outer &mut T" and "outer &T" identically, but this isn't strictly necessary.

If you're dialed into the progress/discussion here, you might point to the difference between the "wide" and "tall" models. This isn't actually sufficient (I can produce an async where all witness table bytes are potentially a self reference) but is illustrative of what such an argument would look like (e.g. by adding a shadow byte to tag).

However, while it might, strictly speaking, not be immediate UB in some highly constrained edge cases, you still aren't special, and aren't allowed to do it, nor am I. Any attempt to use such an illegally produced &mut beyond the trivial non-usage would certainly be UB.

I believe this is mostly distinguishing between what we currently typically call "language UB" and "library UB." It (currently) isn't "immediate UB" to call Vec::set_len with a too-large length (the method body is entirely safe, after all), but we still call it UB, even though it's only "actually UB" once the length is used for something.
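For contrast, a sketch of the sound use of the same API, where the documented contract (elements must be initialized before the length is raised) is upheld; the initialization pattern via spare_capacity_mut is my choice of illustration:

```rust
fn main() {
    let mut v: Vec<i32> = Vec::with_capacity(3);
    // Initialize the spare capacity first…
    for (i, slot) in v.spare_capacity_mut().iter_mut().take(3).enumerate() {
        slot.write(i as i32);
    }
    // …and only then raise the length: set_len's safety contract is met.
    unsafe { v.set_len(3) };
    assert_eq!(v, [0, 1, 2]);
}
```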

"Immediate UB" is, at least in the context following an explanation of UB as nasal demons, clear without need for further explanation, like the distinction between "language UB" and "library UB" needs. Plus, the lines between "library UB" in std and "language UB" are fairly fuzzy anyway, due to std's privileged position.


The problem is that the name UB is just very misleading. It immediately suggests that the behavior is still there, it exists, it's real, it's just undocumented.

Which leads newbies down the erroneous path of trying to experiment with the compiler to glean what that mystic undocumented behavior may be. And when they discover how their particular version of the compiler behaves on their examples, they assume that they have now mastered that UB and can use it for fun and profit… but that's not how UB should be treated!

UB = something that correct program shouldn't do, end of story.

Because a precise definition of Rust is not yet done (and the same is true for C/C++, surprisingly enough), there are many constructs which fall into “we are not yet 100% sure if they are permitted or not”.

“Instant UB” just means “this particular operation shouldn't be done in a valid program”. Not “you may use it if you follow these three to five rules and then perhaps it would be allowed”, but simply: this is forbidden, end of story, that's already decided.


Ok! Then I would think of it as "unconditional UB".

1 Like