Aliasing rules and mutable pointer dereference

I also found this post very interesting with regard to better understanding the difference between pointers and integers:

Pointers Are Complicated III, or: Pointer-integer casts exposed

So if I understand that post right, it proposes that a cast from *mut T to usize (as opposed to pointer::addr) will have a side effect (in the abstract/formal model) in the future. (But I'm still in the process of trying to understand it all.)

So it is pointer provenance (of p) then, which makes *r2 += 1 go through r here, right?

It looks like a pretty unmistakable direct pointer lineage, to me.

"x begets r begets p begets r2."

You can think of these as all raw pointers or all exclusive references, if that helps clear up any confusion. Mixing pointer types with that example isn't really doing any favors.

That's not quite correct. All mutations of the pointed-to region of memory must go through pointers which are derived from that &mut. Basically, you are allowed to do reborrows and reference-to-pointer casts. You cannot just conjure a new pointer to this memory, nor can you have a raw pointer which existed before the creation of &mut, but is used for accesses afterwards.

The simplest model works like this: each pointer (both references and raw pointers) has an associated contiguous region of memory that it is allowed to access. This region can shrink, but can never be extended. So for example this code is UB:

let mut arr = [0u8; 10];
let slice = &mut arr[0..4];
// UB: you cannot extend the pointer's provenance beyond the 4 elements
unsafe { *slice.as_mut_ptr().add(6) = 1 };

Thus your pointers form something like a tree, where the pointers with smaller provenance are children of their parent pointers.

The aliasing rules for &mut basically say that if you make a memory access through a &mut, then all its child pointers are invalidated and can never be used again. What exactly counts as an access is a bit subtle and in flux. Writing is definitely an access (although there is a question whether non-overlapping accesses should be allowed). Reading is likely an access, but there is a possibility that your &mut gets downgraded to &, instead of just invalidating all child pointers (so that writing through child pointers is now forbidden, but reading is allowed). Consult a specific model and Miri for details.

Reborrowing also likely counts as an access, including implicit reborrows in function calls. Casting a &mut to a *mut is also an access, Stacked Borrows basically inserts an implicit reborrow in each reference-to-pointer cast. I believe that's not the case with Tree Borrows, but you should check the article for details.
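To make the parent/child invalidation concrete, here is a minimal sketch (names illustrative, not from the discussion above); the commented-out line is the kind of access that Miri's Stacked Borrows mode flags:

```rust
// Parent `&mut` and a child raw pointer derived from it: writing
// through the parent invalidates the child.
fn demo() -> i32 {
    let mut x = 0_i32;
    let r = &mut x; // parent: exclusive reference to `x`
    let p = r as *mut i32; // child: raw pointer derived from `r`
    unsafe { *p += 1 }; // OK: access through the child
    *r += 1; // access through the parent...
    // unsafe { *p += 1 }; // ...so using `p` again here would be UB
    //                     // under Stacked Borrows (Miri reports it)
    x
}

fn main() {
    assert_eq!(demo(), 2);
}
```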


It's undocumented. The sad truth is that Rust doesn't have a memory model, but the even sadder truth is that C and C++ don't really have a memory model either.

The C and C++ standards do include certain rules which are supposed to constitute a memory model, but it has been known for two decades that these rules are incomplete. Questions about these soundness holes were raised more than two decades ago, yet they still are not answered, and compilers still are not consistent and differ in which standard-compliant programs they break.

The solution that Rust presently employs is similar to what Rust does with safe and unsafe Rust: divide possible programs not into two, but three separate groups: the "this is safe" group, the "this is unsafe" group, and the "we'll decide later" group.

I just wish that were actually added to the standard. Having an explicitly outlined grey area is still better than having text which describes something that doesn't exist and a compiler which doesn't conform to any written specification.


Nit: it's "this is unsound" group rather than "this is unsafe". "Unsafe" in Rust terminology just means "not checked for soundness by the compiler", while "unsound" actually means that the program violates memory safety (perhaps conditionally).

Rust has something better than a standard. Rust has Miri, which lets you actually run your code and check that it doesn't violate the memory model. Also, there are many clear-cut cases where programs are definitely sound or unsound. Unfortunately, those groups are currently too small.

Regardless, having the memory model just documented isn't very useful in practice, where you may have many thousands (or even millions) of lines of complex unsafe code. Validating even the simple rules manually at that scale is infeasible.


Does Tree Borrows allow a strict subset of what Stacked Borrows allows, or vice versa? Or does neither allow a subset of what the other allows?

What's currently the "standard"? Which of those (if any) can I "rely on" when writing unsafe code?


Okay, so the problem goes beyond Rust, but at least…

…Rust has a reference implementation (Miri with Stacked Borrows, right?) that I can use to check some code with.

I would really appreciate it if any grey (or "undecided") areas were documented as such (in something that's normative, and even better somewhere that's easy to find). But I also see how this is a lot of work, and I understand if that work is better put into other things for the time being. Focusing too much on documentation might slow down the development process and research.

Neither. This isn't a hard read and points out various places where one is stricter than the other.


I don't think either is strictly a subset of the other. They treat various edge cases differently. Quote:

compared to Stacked Borrows we gain heredity information (lossless parent-child relationship) but lose chronological information (between two pointers both derived from a same parent, Tree Borrows does not keep track of which one was created first). In fact, this loss of chronological information enables optimizations that Stacked Borrows does not, since it allows reordering reborrows.

However, there is a certain "naive safe Rust" model which I have basically described above, i.e. "working with raw pointer the same way as with references". This one is very restrictive, but will certainly be supported by whatever model is finalized. If your usage of unsafe code is basically C FFI here, unchecked array access there, and a few pointer casts, then you will be fine. If you want to write high-performance data structures or write self-referential structures, then the current models are insufficient.
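As an illustration of the kind of "boring" unsafe that stays within this naive model, consider unchecked array access with the bounds established up front (`sum_first_two` is a made-up helper, not from the discussion):

```rust
// Hypothetical example: the bounds are checked once, and the unsafe
// accesses are then used exactly the way references would be.
fn sum_first_two(v: &[i32]) -> i32 {
    assert!(v.len() >= 2, "caller must pass at least two elements");
    // Sound because of the assert above; no provenance trickery,
    // no pointers outliving their source reference.
    unsafe { *v.get_unchecked(0) + *v.get_unchecked(1) }
}

fn main() {
    assert_eq!(sum_first_two(&[1, 2, 3]), 3);
}
```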

(Currently a member of T-opsem but speaking only as myself, not for the team. Exact numbers pulled out of the aether.)

If you want to be 100% guaranteed that your code is sound, your best bet is to stick to safe code and unsafe APIs in std which clearly document their unsafe preconditions. At a 99.9999999% confidence you can include calling externally linked functions with their correct signature.

The 99.9999% confidence "simple" model for pointers is that of references, but with an erased lifetime. If you can identify when the "lifetime" of a pointer is derived from its parent reference (this is always when it's created) and when that "lifetime" ends (this is always before the parent reference lifetime is invalidated, but may be sooner, depending), and all uses of that pointer fall within that "lifetime" and themselves follow a "well nested" reference access pattern, then the use of the pointer is valid.
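A sketch of this "erased lifetime" reading (illustrative only; the key assumption is that the pointer is retired before its parent reference is used again):

```rust
// `p`'s conceptual "lifetime" starts when it is derived from `r`
// and must end before `r` is used again.
fn demo() -> Vec<i32> {
    let mut v = vec![1, 2, 3];
    let r = &mut v;
    let p = r.as_mut_ptr(); // "lifetime" of `p` begins (derived from `r`)
    unsafe { *p = 10 };     // use falls within that "lifetime"
    r.push(4);              // `r` is used again: `p`'s "lifetime" is over,
                            // and `p` may even dangle now (reallocation)
    v
}

fn main() {
    assert_eq!(demo(), vec![10, 2, 3, 4]);
}
```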

There's a lot of edge cases in both SB and TB (and in their overlap) I'm not confident in saying are probably sound until the project commits to the details. But you're probably not going to run into any of them without being aware of them ahead of time[1]. If you're not running into those, I'd give any execution which passes Miri in strict provenance mode about a 95% confidence of being sound in perpetuity, and about 80% if it does run into them, before further argument of matching known desirable sound patterns. (Ignoring the synchronization of threads, since I don't know how thorough Miri's model is there.)

As developers we're often quickly pulled to invest time and effort in making an ideal/optimal solution. With how Rust exposes costs, it lends itself to a sort of "golfing" of those costs (allocations, etc). But the magic of Rust, to me, is that while you have the opportunity to write "perfect" code, you don't have to; you can always stick to safe code, accomplish your task, and be sure that there's no chance of UB[2], just perhaps some extra runtime checks to ensure that.

And that's the best model for unsafe code to be written under 90% of the time. Pick some safe runtime-checked abstraction, and stick to its rules. To satisfy borrowing, think like an unchecked RefCell or RwLock; if your access pattern would work without panicking with runtime checks, it's (probably) acceptable without. (I'm actually kinda surprised that there doesn't seem to be a "split" lock/cell crate available... though I suppose it's essentially just RwLock<&mut T>. A debug-checked option which only locks when #[cfg(debug_assertions)] would be quite interesting, though...)

  1. The big two are how exactly UnsafeCell and !Unpin behave, especially w.r.t. being "infectious" and how strictly they're tracked. ↩︎

  2. No chance which is caused by your code, anyway. There's always the chance of an upstream bug which could cause UB in any language which isn't fully sandboxed. ↩︎


One extra safety measure, if you're writing some dangerous unsafe code, is to add thorough tests to your crate and put it on crates.io. The Rust Project takes backwards compatibility very seriously, so it is likely that, should Rust's memory model and/or optimizations change, a breakage in your crate would be discovered during Crater runs. This gives you a chance that at least people would open an issue for that bug in your repo, possibly even providing a fix. If you're doing something which looks reasonable and/or your crate is popular, you may even affect the development of the memory model.


I got curious. Essentially untested, may contain unsoundness, for exploration only: [playground]

use core::marker::PhantomData;
use std::panic::{RefUnwindSafe, UnwindSafe};

use cfg_if::cfg_if; // external `cfg_if` crate

#[cfg(debug_assertions)]
mod imp {
    pub type RwLock<T> = std::sync::RwLock<T>;
    pub type RwLockReadGuard<'a, T> = std::sync::RwLockReadGuard<'a, T>;
    pub type RwLockWriteGuard<'a, T> = std::sync::RwLockWriteGuard<'a, T>;
}

#[cfg(not(debug_assertions))]
mod imp {
    pub type RwLock<T> = core::cell::UnsafeCell<T>;
    pub type RwLockReadGuard<'a, T> = &'a T;
    pub type RwLockWriteGuard<'a, T> = &'a mut T;
}

type PhantomLocal<T> = PhantomData<core::ptr::NonNull<T>>;

pub struct Lock<T: ?Sized>(PhantomLocal<T>, imp::RwLock<T>);
pub struct Read<'a, T: ?Sized>(PhantomLocal<T>, imp::RwLockReadGuard<'a, T>);
pub struct Write<'a, T: ?Sized>(PhantomLocal<T>, imp::RwLockWriteGuard<'a, T>);

impl<T: ?Sized> Drop for Lock<T> {
    fn drop(&mut self) {}
}

impl<T: ?Sized> Drop for Read<'_, T> {
    fn drop(&mut self) {}
}

impl<T: ?Sized> Drop for Write<'_, T> {
    fn drop(&mut self) {}
}

impl<T: ?Sized> UnwindSafe for Lock<T> {}
impl<T: ?Sized> RefUnwindSafe for Lock<T> {}
unsafe impl<T: ?Sized + Send> Send for Lock<T> {}
unsafe impl<T: ?Sized + Send + Sync> Sync for Lock<T> {}

impl<T: ?Sized> UnwindSafe for Write<'_, T> {}
impl<T: ?Sized> RefUnwindSafe for Write<'_, T> {}
unsafe impl<T: ?Sized + Sync> Sync for Read<'_, T> {}
unsafe impl<T: ?Sized + Sync> Sync for Write<'_, T> {}

impl<T> Lock<T> {
    pub fn new(t: T) -> Self {
        Self(PhantomData, imp::RwLock::new(t))
    }
}

impl<T: ?Sized> Lock<T> {
    #[cfg_attr(debug_assertions, track_caller)]
    pub unsafe fn read_unchecked(&self) -> Read<'_, T> {
        cfg_if! {
            if #[cfg(debug_assertions)] {
                let guard = unwrap_lock_result(self.1.try_read());
                Read(self.0, guard)
            } else {
                Read(self.0, unsafe { &*self.1.get() })
            }
        }
    }

    #[cfg_attr(debug_assertions, track_caller)]
    pub unsafe fn write_unchecked(&self) -> Write<'_, T> {
        cfg_if! {
            if #[cfg(debug_assertions)] {
                let guard = unwrap_lock_result(self.1.try_write());
                Write(self.0, guard)
            } else {
                Write(self.0, unsafe { &mut *self.1.get() })
            }
        }
    }
}

#[cfg(debug_assertions)]
fn unwrap_lock_result<T>(result: std::sync::TryLockResult<T>) -> T {
    use std::sync::TryLockError::*;
    match result {
        Ok(ok) => ok,
        Err(Poisoned(ok)) => ok.into_inner(),
        Err(WouldBlock) => panic!("💥UNSOUND💥 unchecked lock access"),
    }
}

For some reason, I found the Tree Borrows paper much easier to read than the Stacked Borrows paper; the notation in the latter confused me quite a lot:

While I found the Tree Borrows model easier to understand when reading it step by step, it's still a lot of rules, as also visible in the diagram on this page (need to scroll down). But thanks for the link – even if I couldn't fully follow and/or remember everything – it made me understand better why we need these models and what they try to achieve (in general, and in regard to some particular cases).

I guess if I'm just doing a bit of FFI and things which "feel" reasonable to assume (and not like an edge/tricky case), I won't run into any problems.

If I ever feel the need to do some tricky stuff with unsafe for purposes of optimizing, I guess I should add proper tests. Maybe also test it with Stacked Borrows and Tree Borrows and/or add SAFETY comments in regard to why it's tricky and why it should be sound.

Question: How do std or big crates handle these issues? I have read that under Stacked Borrows some popular crates caused UB. Have these crates been updated, or is Stacked Borrows considered too strict in that matter and expected to be updated (e.g. replaced with Tree Borrows)? Just out of curiosity, I'd like to understand what the process here is in general for affected crates. So far, I have always seen UB as a "no go". But if the rules for UB are more blurry/volatile/unclear, then it doesn't seem to always be clearly defined what's okay to do and what's "bad"? (Not that I expect to run into these cases soon, but I'm curious anyway.)

Another question: Wouldn't it be nice to have a normative, preliminary, conservative model (with respective documentation), i.e. one that is at least as strict as both Stacked Borrows and Tree Borrows, as long as these issues are still under debate, such that non-edge cases can rely on being sound now as well as in the future?


Std is in a special position, because as a first approximation, the people who make the language make std, so it kind of does whatever it wants, as long as it fits the compiler's current understanding of the language. This is in practice true even in the absence of a formal model.

Also, don't forget that std can contain bugs and unresolved issues, too! If you head over to the rust repo's issue tracker, you'll probably find tens of potential soundness issues, or at least questions.

With respect to 3rd party crates, I'm not sure how relevant their size is. Big crates are not any more entitled to causing UB than small ones (no crate should ever contain UB).

Which leads to…

Definitely. However, it's a lot of work, and I'm not sure if it is even possible to be completely unambiguous. If the sets of rules (SB & TB) are themselves blurry, then so is their intersection.

However, we as Rust programmers can help. If we restrict our usage of unsafe to the absolute bare minimum needed (e.g. no unsafe outside FFI; or, on no_std platforms, no unsafe outside FFI and implementing platform primitives), then we can avoid 99.999% of soundness bugs.

Yes, I understand std can do things that third-party crates can't. Still, it would be interesting to know whether there's code that's UB (and everyone is aware of it, but it's no problem because in practice there currently aren't any optimizations that cause crashes).


Well, as pointed out in this thread (I guess), it's not clearly defined which code is UB and which code is not, because in order to reason about what's UB and what's not, we need a memory model. And the memory model of Rust isn't well-defined yet.

If Tree Borrows is stricter in some regards than Stacked Borrows is, and if Miri currently uses Stacked Borrows, then switching to Tree Borrows in the future would make some currently sound code UB. So when we talk about "a crate should never contain UB", which version of UB are we talking about? There seem to be several definitions and ongoing negotiations about which code is UB and which is not.

Following @CAD97's advice above, the only thing I can (in theory) truly rely on is using unsafe APIs in std. Maybe I can also do some very basic FFI. But anything else is neither sound nor unsound by definition (because the memory model isn't defined) but rather "a matter of ongoing negotiations" (or in the best case be "believed to be sound under reasonable assumptions").

I was curious about how these "negotiations" work out in practice, and how are things documented in that matter (in the source code) when it comes to using unsafe for certain optimizations. Can I imagine something like:

// SAFETY: This is sound under Stacked Borrows, because blah blah.
// TODO: re-evaluate when switching to Tree Borrows!


Oh, SB and TB aren't well-defined (yet) either? I thought they are full/complete models?

Hmmm, this sounds a bit like unsafe Rust isn't really meant to be used. I understand that it's generally frowned upon to use unsafe Rust where it can be avoided, but I felt like "unsafe Rust" could be seen as a language that you can use when you want to.

Now I feel more like unsafe Rust is actually unstable (due to lack of clear/normative definition of what's sound and what's not).

P.S.: And if unsafe Rust has to be considered unstable, maybe it should be documented as being unstable (edit: in some regards, e.g. in corner-cases)?

I think this takeaway is too strong. Sure, the safety situation of some of these things are unclear, but there are also many things that are unambiguously okay. Just stick to those and you'll be fine.

I think it works out in practice in most cases. But I think from a formal p.o.v. a lot of these things (which are "unambiguously okay") aren't formally documented as being unambiguously okay.

Anyway, I understand that documenting these might not be high priority. It merely confused me because I had the wrong idea that everything is well-defined. And as I learned above, even in languages like C and C++, there seem to be open issues. So I guess I'll just be "careful" and try not to do "unreasonable" things (and if I do, I'd first follow up on all those blog posts and proposed models, make sure I'm up to date with the current debate, and add corresponding comments and TODOs to my code).

Since this topic is about aliasing, let me give an example of a rule you can follow that definitely won't trigger any aliasing problems.

Given an allocation, you can turn it into a raw pointer with Box::into_raw, use it as a raw pointer, then convert it back with Box::from_raw later when you are done using the raw pointers. Between the call to Box::into_raw and Box::from_raw, the raw pointer you got from Box::into_raw will remain valid for the entire allocation. The same applies to any raw pointers directly derived from it (that is, without going through a reference).

Now, you still have aliasing rules when you turn the pointers into references, but since all such references must be derived from the Box::into_raw raw pointer, they can never invalidate the original raw pointer.

Following the aliasing rules for the references can be done with this rule:

Any time you turn the raw pointer into a reference, you must consider the allocation borrowed (mutably or immutably depending on the reference type) for the interval of time starting at the creation of the reference, and stopping when the reference (including anything derived from the reference) goes out of scope. Such intervals may never overlap with each other unless both are immutable.

Operations like ptr::write and ptr::read that access the allocation without a reference can be considered for the above rule by thinking of them as a super short immutable or mutable borrow, that must not overlap with other borrows in exactly the same way.
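A minimal sketch of the whole pattern described above (illustrative; `demo` is a made-up function):

```rust
fn demo() -> i32 {
    // Box::into_raw: `p` stays valid for the whole allocation until
    // Box::from_raw reclaims it.
    let p: *mut i32 = Box::into_raw(Box::new(41));
    unsafe {
        // `read`/`write` access the allocation without a reference:
        // treat each as a very short borrow that must not overlap
        // any reference-based borrow.
        p.write(p.read() + 1);
        // A real reference: the allocation counts as immutably
        // borrowed for exactly the span in which `r` (or anything
        // derived from it) is used.
        let r: &i32 = &*p;
        let result = *r;
        // The borrow has ended; reconstitute the Box so the
        // allocation is freed (and `p` must not be used afterwards).
        drop(Box::from_raw(p));
        result
    }
}

fn main() {
    assert_eq!(demo(), 42);
}
```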

To give an example, the classic example of something that seems dangerous aliasing-wise is a linked list. However, if you follow the above rules, like in this example, then I claim that there is no danger that this will violate any aliasing rules in any potential model we might get.


Have you read the nomicon? I know it isn't 100% up to date in some places but it does a pretty good job of pointing out some of the danger zones.

I don't remember if it was the nomicon or somewhere else, but the advice I've seen given is to avoid mixing raw pointers and references as much as possible. If you're always maintaining a single raw pointer that has "permission" to the memory the differences in SB & TB's models don't matter as much because they're concerned with how permission moves between related pointers and references when accesses are interleaved.

This strategy is essentially what Alice is suggesting as well.

This just is not true wholesale. There is always a subset of unsafe usage that is unambiguous wrt soundness. And it's probably a lot bigger than you imagine.

Given all the 9's being thrown around in this thread, it gives the impression that the subset is nearly the whole set. But you're arguing that "anything other than basic FFI" is in the 0.0000001% or whatever. And that doesn't sound right at all.

The blurry parts are on the extreme fringe, not "absolutely everything outside of basic FFI." Integer-pointer casts are well within the extreme fringe!
