The Confessional Thread: Parts of Rust that I still don't get after all this time

Thank you for that.

For sure I need to read more of the Rust book.

Thing is, I live in a world of "sequence, selection and iteration": statements, conditionals and loops. You know, the stuff that computers actually do. With a sprinkling of high-level concepts like variables, structures and functions.

Basically, the world of C, Pascal, Coral, PL/M, Ada, even BASIC.

As such I find much of the conversation here about programming in Rust totally incomprehensible. Never mind the actual Rust language itself.

Ah well. Guess I just have some catching up to do.


I have to say, I'm surprised! You're quite the prolific poster, and while I know you've been working to learn more functional style idioms with iterators, it's interesting to see that one can make it this far while not yet feeling confident about how traits work. Though they aren't up and in the reader's face, traits certainly are still a crucial part of how iterators in Rust are designed; perhaps one thing you can try is to manually implement some iterators of your own design?
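For instance, something as small as this counts as that exercise (Countdown is an arbitrary made-up type; the trait only requires next, and every combinator like map or filter comes for free on top of it):

```rust
// A hand-rolled iterator: counts down from `remaining` to 1.
struct Countdown {
    remaining: u32,
}

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.remaining == 0 {
            None // iteration is over
        } else {
            let current = self.remaining;
            self.remaining -= 1;
            Some(current)
        }
    }
}

fn main() {
    let v: Vec<u32> = Countdown { remaining: 3 }.collect();
    assert_eq!(v, vec![3, 2, 1]);
}
```

Once `next` exists, `collect`, `sum`, `zip`, and the rest all work on it unchanged, which is a nice way to see the trait machinery without it being "in your face".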

Seeing as we've already derailed, and that this will probably be split out to a new thread, I have a confession to make too: I am completely and utterly mystified by async.

I've been carefully avoiding all of the threads that have anything to do with async, because any time I open one I see all of this terminology about streams and futures and executors and I no longer have any idea what I'm looking at. (Good thing @Alice is always there to answer these topics!)

I mean, I think I understand what async does; it takes imperative-looking code and transforms it into a state machine so that it can pause execution at await points and be resumed later. But I still feel like I don't understand the most fundamental use cases for this stuff. I know that a frequently cited use case is to avoid blocking on file IO... but how exactly does a state machine help with non-blocking IO? :man_shrugging:

And it's funny, because I sort of understand some of the less-frequently-cited use cases. A couple of days ago, I was trying to fix a bug in some event-loop/event-handler pair, and got frustrated with all of the state I had to remember between calls to the event handler. As I thought more about the problem, it reminded me of how annoying it can be to implement an Iterator in Rust; especially compared to generators in Python, which undergo code transformations into a state machine so that they can pause execution and... wait a second

So I did a quick search, found the genawaiter crate, and before long I had converted my event handler from a FnMut into a generator... my very first async fn. At this point I still have no idea what async itself is good for beyond it potentially being an implementation detail of generators! (And Python had generators before it had async, so it must be good for something!)


Just to continue the part of Rust that I still don't get after all this time: stacked borrows. I've avoided unsafe code as much as possible because I don't quite get it — I used to think that I did, but this false understanding has been invalidated as I've watched various executable versions and subtle variations of the model go by in the news feed.

I've basically avoided writing unsafe code like the plague ever since.


For me, it would have to be lifetimes. I mean, I understand the basic principle, it's not that complicated: you can't use a variable / refer to a particular location in memory after it has logically become unavailable, which generally means after it has gone out of scope (and thus probably been cleaned up) or been moved.

The problem is, as soon as the compiler starts complaining about lifetimes not being declared quite right, I have very little idea how to fix it :confounded:
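For what it's worth, most of those complaints reduce to one pattern, sketched below (Excerpt and first_word are made-up names for illustration):

```rust
// `'a` just records that an `Excerpt` may not outlive the text it points into.
struct Excerpt<'a> {
    first_word: &'a str,
}

fn first_word<'a>(text: &'a str) -> Excerpt<'a> {
    Excerpt {
        first_word: text.split_whitespace().next().unwrap_or(""),
    }
}

fn main() {
    let text = String::from("hello world");
    let excerpt = first_word(&text);
    assert_eq!(excerpt.first_word, "hello");
    // drop(text); // uncommenting this is exactly the kind of error the
    //             // compiler complains about: `text` would be destroyed
    //             // while `excerpt` still borrows from it
}
```

When the compiler complains about a declaration, it is usually asking you to write down which input a returned or stored reference came from, as `first_word` does here.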


Procedural Macros. I use macro_rules a fair amount to remove repetition and condense code, but Procedural Macros seem like a steep learning curve for a limited payoff. Sometimes they are extremely useful (e.g. Serde), but I'm surprised how often they show up in third-party crates.


Now that you have summoned me, I'll try to at least give some sort of answer. If you had one thousand state machines, and each state machine had a function called poll that does a little bit of work and then returns, then you could put them in a vector, iterate through the vector, poll all of them, and suddenly you're doing one thousand things on a single thread concurrently (but not in parallel).

So the way non-blocking IO helps is that when your state machine needs to do some IO, it can start the operation and return from the poll function; while the IO operation is pending, the loop can spend the time polling other futures.
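That picture can be made concrete with a toy model, using hand-written state machines in place of real futures (Machine and its countdown "work" are invented for illustration; std::task::Poll is the actual type real executors use):

```rust
use std::task::Poll;

// Each machine does "a little bit of work" (one decrement) per poll call.
struct Machine {
    steps_left: u32,
}

impl Machine {
    fn poll(&mut self) -> Poll<()> {
        if self.steps_left == 0 {
            Poll::Ready(())
        } else {
            self.steps_left -= 1; // do a little bit of work, then yield
            Poll::Pending
        }
    }
}

fn main() {
    // One thousand state machines, all driven by a single thread.
    let mut machines: Vec<Machine> =
        (0..1000).map(|i| Machine { steps_left: i % 10 }).collect();

    // The "executor": keep polling whatever isn't finished yet.
    let mut rounds = 0;
    while !machines.is_empty() {
        machines.retain_mut(|m| m.poll().is_pending());
        rounds += 1;
    }
    assert_eq!(rounds, 10); // the longest machine needed 10 polls
}
```

No machine blocks the loop: each one returns quickly from `poll`, so all thousand make progress concurrently (but not in parallel), exactly as described above.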


@ExpHP does "When we say async IO, what part is really 'async'?" help? I tried to give the high-level "intuition" behind this (potentially parallelism-free) concurrency.

  • Also, shameless plug: you can try using next-gen; it's very similar to genawaiter. While it does not support resume args (yet; I'll need a motivating use case to go for it), it features more sugar and more fine-grained control over allocations.

Basically, the idea is that there is no behavioral difference between raw pointers and Rust references: the former need to follow the rules of the latter, even though those rules are no longer checked at compile time. In other words, Stacked Borrows is "just" a model where Rust's static borrowck rules also apply to raw pointers, even though these cannot be statically checked.

  • Hence the usefulness of the Miri interpreter, that goes and checks these properties at runtime.

So, basically, when a raw pointer (i.e., *const T, *mut T, or ptr::NonNull<T>, which I will write as *T since there is no difference between them except for variance (and NonNull not being, well, NULL)) is created, it has a "provenance":

  • fn shared_read_only<T> (it: &'_ T) -> *T { it as *const T as _ }
    creates a pointer that, up until the last time it is used, asserts the immutability of the pointee, in the same fashion as if the raw pointer were the shared reference it originates1 from.

1 Hence the term provenance.

  • fn exclusive_read_write<T> (it: &'_ mut T) -> *T { it as *mut T as _ }
    creates a pointer that, up until the last time it is used, asserts the absence of other "valid"/usable pointers to the pointee. Hence the pointee is read-only between writes through this pointer, and the writes are data-race-free.

  • fn aliased_read_write<T> (it: &'_ UnsafeCell<T>) -> *T { it.get() as _ }
    creates a pointer that asserts neither; it only asserts that whenever it is used, no other pointer is being used simultaneously / in parallel, so both the reads and the writes are data-race-free.

  • pointer "copies" are mainly (unchecked) reborrows, so the copied pointer inherits from its parent properties / assumptions, and in the case of the exclusive pointer, temporarily invalidates its parent pointer.
    There must be a way for an exclusive_read_write pointer to "decay" to an aliased_read_write (in other words, an exclusive_read_write pointer could have copies aliasing each other), but that part is still a little fuzzy in my head. That's why in general I prefer to use explicit-ly UnsafeCell-ed pointees.

The whole idea of reborrowing is what leads to a stacked model.

So, for instance, the following program is UB, according to Stacked Borrows:

let mut x = 42;
let at_x: *const i32 = &x; // shared_read_only
let _ = &mut x; // asserts/requires unique access to `x`, hence invalidates at_x
println!("{}", *at_x); // UB! use of invalid pointer

which can also be seen as:

let mut x = 42;
let at_x: *const i32 = &x; // 1. shared_read_only --------+
let _ = &mut x; // unique access? Not possible ---------->|
println!("{}", *at_x); // 2. used up until here <---------+

Which is the typical example of incompatible borrows in classic Rust. In other words, nobody is surprised that the following program does not compile:

let mut x = 42;
let at_x: &'_ i32 = &x; // 1. shared_read_only -----------+
let _ = &mut x; // unique access? Not possible ---------->|
println!("{}", *at_x); // 2. used up until here <---------+

This whole model was designed to deduce / obtain stronger non-aliasing guarantees for more aggressive compiler optimisations (in the examples above, that const-propagation could replace *at_x with 42, which is not always possible to do soundly if somebody gets an exclusive and thus mutable reference to that 42).

For instance, here is another example of UB:

let at_x_mut: &'static mut i32 = Box::leak(Box::new(42));
let at_x_raw: *mut i32 = &mut *at_x_mut; // exclusive_read_write
let do_stuff = unsafe {
    // Safety: x is never freed, so at_x_raw never dangles (it's a `&'static mut i32`)
    move || *at_x_raw = 0
};
let at_x: &'_ i32 = &*at_x_mut; // reborrow (and thus usage) of the reference `at_x_raw` originates from; `at_x_raw` is thereby invalidated
let result = delta(
    at_x,
    do_stuff, // if called, the invalidated `at_x_raw` gets used: UB
);

Indeed, here is the function delta:

fn delta (at_x: &'_ i32, do_stuff: impl FnOnce()) -> i32
{
    let prev_x = *at_x;
    do_stuff();
    *at_x - prev_x
}

Given that at_x is a shared reference to a type without shared mutability (no UnsafeCell), at_x is a reference to immutable memory; thus the compiler is free to assume that *at_x never changes, and can optimize delta into:

fn delta (at_x: &'_ i32, do_stuff: impl FnOnce()) -> i32
{
    do_stuff();
    0 // `*at_x` assumed not to change: `*at_x - prev_x` folds to `0`
}

thus getting result = -42 (do_stuff's write observed) or result = 0 (the optimized version) depending on whether this optimization happened: UB.

So Stacked Borrows is just saying that this program is UB for the same reasons that if at_x_raw had been a &'_ mut i32 reference, borrowck would not let that program compile.

The natural reaction at this point is:

what's the point of using raw pointers if we cannot escape the rules of Rust references?

To which there are two answers (not counting the motivation of this "stricter model" enabling more aggressive optimizations):

1 - Favor shared mutability (&UnsafeCell<_>) over exclusive mutability (&mut _)

  • to get aliased_read_writes rather than exclusive_read_writes

If you do not like that these optimizations can kick in and make your code UB, and you wish to be able to use raw pointers C-style (and honestly, this cautious approach is the one everybody should start with), just avoid using &mut (and thus deriving raw pointers from it) by using UnsafeCell instead.

Indeed, the previous example can be made sound with the following pattern:

use ::core::cell::Cell;

fn main ()
{ unsafe {
    let at_x_mut: &'static mut i32 = Box::leak(Box::new(42));
    let at_x_cell: &'static Cell<i32> = Cell::from_mut(at_x_mut);
    let at_x_raw: *mut i32 = &*at_x_cell as *const _ as _; // aliased_read_write
    let do_stuff = unsafe {
        // Safety: x is never freed, so at_x_raw never dangles (it's derived from a `&'static Cell<i32>`)
        move || *at_x_raw = 0 // i.e., `at_x_cell.set(0);`
    };
    let at_x: &'_ Cell<i32> = &*at_x_cell; // `at_x_raw` is not invalidated, since no uniqueness is asserted
    let result = delta(
        at_x,
        do_stuff, // sound: even if called, `at_x_raw` is still valid
    );
}}

fn delta (at_x: &'_ Cell<i32>, do_stuff: impl FnOnce()) -> i32
{
    let prev_x = at_x.get();
    do_stuff();
    at_x.get() - prev_x // cannot be optimized away: the pointee is not immutable!
}

2 - Raw pointers can be useful to circumvent borrowck's overly conservative choices

  • And when dealing with uninitialized memory, they avoid asserting the validity of the pointee.

Indeed, there are other coding patterns that Rust refuses to compile despite them being valid (mainly about a move of a pointer conservatively assuming that the pointee has been moved / dropped; this is thus related to unsafe code relying on Pin):

For instance, the following program fails to compile:

let boxed_x = Box::new(42);
let at_x: &'_ i32 = &*boxed_x;
let new_boxed_x = boxed_x; // "move" pointer from one place of the stack to another, may even be a no-op.
assert_eq!(*at_x, 42); // Error

This is where raw pointers are useful:

let boxed_x = Box::new(42);
let at_x: *const i32 = &*boxed_x;
let new_boxed_x = boxed_x;
assert_eq!(*at_x, 42); // Should be fine?

Now, this is not yet officially sound, because Box itself asserts that it is not aliased, much like a &mut, and the move of the Box counts as a usage point that invalidates the borrow, so we are back to UB for the same reasons that the previous program did not compile. Note that this is a WIP and may yet change. It is, for instance, the reason behind owning_ref being theoretically unsound, and also behind Miri going crazy with self-referential structs, such as those the compiler generates for an async fn's state / locals-that-survive-an-await-point (Miri and .await aren't best pals yet).

One way to make the previous code sound (and owning_ref too, for that matter), is to define one's own AliasedBox that does not assert uniqueness:

extern crate alloc;

use ::alloc::boxed::Box;
use ::core::{mem::ManuallyDrop, ops::{Deref, DerefMut}, ptr};

struct AliasedBox<T : ?Sized> (
    /// Does not assert non-aliasing for the **inner** life of this AliasedBox.
    ptr::NonNull<T>, // covariance is fine because ownership
);

impl<T : ?Sized> From<Box<T>> for AliasedBox<T> {
    fn from (boxed: Box<T>) -> Self
    {
        Self( ptr::NonNull::from(Box::leak(boxed)) )
    }
}

impl<T : ?Sized> Drop for AliasedBox<T> {
    // pointer must no longer be aliased at this point
    fn drop (self: &'_ mut AliasedBox<T>)
    {
        unsafe { drop(Box::from_raw(self.0.as_ptr())) }
    }
}

impl<T : ?Sized> AliasedBox<T> {
    // Not a From impl because of `Box` being fundamental.
    // pointer must no longer be aliased at this point
    fn into (self: AliasedBox<T>) -> Box<T>
    {
        let this = ManuallyDrop::new(self); // skip AliasedBox::drop
        unsafe { Box::from_raw(this.0.as_ptr()) }
    }
}

// Aliasing pointers can only be used for reads for the lifetime of the deref
impl<T : ?Sized> Deref for AliasedBox<T> {
    type Target = T;
    fn deref (self: &'_ Self) -> &'_ T { unsafe { self.0.as_ref() } }
}
// Aliasing pointers cannot be used for the lifetime of the deref_mut
impl<T : ?Sized> DerefMut for AliasedBox<T> {
    fn deref_mut (self: &'_ mut Self) -> &'_ mut T { unsafe { self.0.as_mut() } }
}

The mental model justifying this (which, by the way, is what both Rc and Arc do, but with runtime checks) is that the exclusive_read_write pointer of the Box is "mentally" downgraded to an aliased_read_write one for AliasedBox, from which multiple aliasing pointers can exist; at the end of the AliasedBox's life (be it to upgrade it back to a Box, or to drop it, which does the same), the pointer asserts non-aliasing again to recover the exclusive_read_write that Box needs.

And then:

let boxed_x: AliasedBox<i32> = Box::new(42).into();
let at_x: *const i32 = &*boxed_x;
let new_boxed_x = boxed_x; // does not drop, so no need for uniqueness
assert_eq!(*at_x, 42); // Is fine!!

Can we please not try to actually teach people stuff in the Confessional thread? Posting links to useful resources seems like a decent compromise, but I'd really rather just have the thread focus on getting stuff off your chest, both because it gives people a place to vent without being expected to put in a bunch of work to learn, and because trying to teach everyone everything they don't understand would produce an unnavigable mess of a thread (I'm fine with splitting off separate topics, but really, if someone wants to ask for help, they should probably just open a new thread instead of using this one).


I don't touch Higher-Ranked Trait Bounds, otherwise known as for<'a> syntax. I'm not sure I'll ever quite get that.
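For the record, the most common place for<'a> shows up is in closure bounds; a small sketch (apply_to_local is a made-up name), noting that the elided form F: Fn(&str) -> &str desugars to exactly this:

```rust
// `for<'a>` says the bound must hold for *every* lifetime, not one
// particular caller-chosen lifetime; it is needed whenever the closure
// must work with a borrow the function itself creates.
fn apply_to_local<F>(f: F) -> usize
where
    F: for<'a> Fn(&'a str) -> &'a str,
{
    let owned = String::from("hello world");
    f(owned.as_str()).len() // `f` must accept *this* short-lived borrow
}

fn main() {
    let len = apply_to_local(|s| s.split(' ').next().unwrap());
    assert_eq!(len, 5); // "hello"
}
```

A plain `F: Fn(&'b str) -> &'b str` with a caller-chosen `'b` could not be called on `owned`, which lives only inside the function; that is the whole reason the "for all lifetimes" quantifier exists.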


I have been using Rust for a couple of years. I have managed to build a simulator for our future product in somewhat less than 10K LOC without learning macros and almost never needing explicit lifetimes.

Macros look like they are written in an entirely different language. I like learning new languages, but people around here expect me to get actual work done. Fortunately, I haven't found a place where writing a macro would be a big improvement.

It seems to me that putting a reference into a struct is more contagious than COVID-19. Everything that struct touches needs explicit lifetimes until I've got more ' marks in my code than any other character. Not putting references in structs requires more clone operations than I would like, but that hasn't been a problem yet.
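The contagion in question, in miniature (Inner, Outer, and Outermost are made-up names):

```rust
// Once `Inner` borrows, everything that stores an `Inner` must name that
// lifetime too, all the way up the struct hierarchy.
struct Inner<'a> { name: &'a str }
struct Outer<'a> { inner: Inner<'a> }      // forced to repeat 'a
struct Outermost<'a> { outer: Outer<'a> }  // ...and again

fn main() {
    let name = String::from("widget");
    let o = Outermost { outer: Outer { inner: Inner { name: &name } } };
    assert_eq!(o.outer.inner.name, "widget");
}
```

Switching `name` to an owned `String` (and cloning at the boundary) is the usual way to stop the spread, which is exactly the trade-off described above.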


I am still incredibly confused by the following topics:

Undefined behavior

This is really the meat of my rant. Undefined behavior defies explanation, I think, by its very nature. At least the concept defies any attempt I make to internalize it. Being human, I want to believe that UB is fine as long as some-arbitrary-condition is met. I know this is untrue, but I still cannot come to terms with it.

Along with UB is the concept of unsoundness, which is (as far as I know) mostly unrelated but just as serious. Here's a cool cheatsheet!

Which segues into ...


Because unsafe allows innumerable ways to invoke undefined behavior, I too have learned to avoid it like the plague (very much the same as mentioned earlier by notriddle). I have even gone as far as marking all of my crates #![forbid(unsafe_code)].

When I read articles like you can't turn off the borrow checker, the arguments made for unsafe do give me that warm-and-fuzzy feeling for a fleeting moment. Then I remind myself just how awful it is to write anything at all in C, and it scares me back to reality.

I don't think this is necessarily a problem, but I'm sure it will cause me to miss out on something important some day...


Reading about lifetimes makes my head hurt.


This is another tricky subject that is deceptively simple! The devil is in the details. I can't even follow along with most conversations involving Pin beyond the very basics. Topics like Pin projection and unsoundness when combined with DerefMut blew my mind. There's just no other way to put it.

All the parts of Rust that I haven't found a need for in my day-to-day


I'm lucky enough that I understand most of the tricky parts of Rust other people stumble on. I guess it comes from an academic background and a decent intuition.

But the type-level magic that frunk accomplishes, I don't think I'll ever fully understand. There are some great blog posts about type-level recursion and the black arts powering frunk. I love reading these; it's a fun walk through (ab)using the type system to do awesome things.

But I don't think I'll ever be writing something as involved as frunk, diesel, or libp2p. I've written some absurdly generic code on a smaller scale, but the scale at which the above crates operate is an order of magnitude beyond where I can struggle through compiler errors on my own power.

Sure, I can probably wrap my head around it after the fact, but writing it is another story. (Let alone extending it in useful ways for writing generic APIs.)

I'll be sticking to my needlessly involved (proc) macros over needlessly involved trait impls, for the time being.

I thought up a clever QOTW-bait one-liner to stick in here (it's what prompted me to actually write this post), then forgot it while writing in favor of being genuine... whoops


Most of the stuff cited so far I "get" at least on a basic intuition level, but a lot of that is because I don't use Rust in my day job and encounter all of these concepts only via posts/articles designed to explain them in detail, rather than from the "this is why your code won't compile" direction that's typical on this forum. Plus, quite a few of them are familiar from other languages (e.g. I know how "variadic templates" work in C++, so "variadic generics" explains itself).

But there's one exception, which nobody else mentioned yet:


Macro hygiene

For those who've never even heard the term: I know it has something to do with whether, for example, in let x = 2; magic!(x); the macro is allowed to generate code messing with the same x variable, or whether it'll end up operating on some other name. I suppose it's like the macro-expansion-time equivalent of variable/name scope? Ish.
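That intuition is about right; here is a small demo of hygiene in macro_rules (magic! is the hypothetical macro from the example above, given a body for illustration):

```rust
// Hygiene in action: the `x` declared inside the expansion lives in the
// macro's own "syntax context", so it cannot capture or shadow the
// caller's `x`.
macro_rules! magic {
    ($e:expr) => {{
        let x = 0; // the macro's private `x`
        let _ = x;
        $e // the caller's `$e` still sees the *caller's* `x`
    }};
}

fn main() {
    let x = 2;
    let y = magic!(x + 1); // the macro's `x = 0` can't interfere
    assert_eq!(y, 3);
}
```

In an unhygienic macro system (C's preprocessor, say), the expansion's `let x = 0;` would shadow the caller's variable and `y` would be 1; hygiene is what prevents that.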


Implementing Future Primitives

I've been watching the whole async-await thing for a while now, but still don't know how you'd go about creating the fundamental building blocks.

For example, something that'd be really cool is to write embedded code which can await until a certain input reaches a desired state. That'd massively improve the readability for code on microcontrollers, but I have no idea where I'd even begin with implementing it...
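For what it's worth, the building blocks are smaller than they look. Below is a hedged sketch of both halves: a future that completes when a made-up "input" (here just a closure standing in for reading a GPIO register) reaches the desired state, plus the tiniest possible busy-polling executor. A real embedded executor would register a wake-up from a pin-change interrupt instead of spinning; every name here (WaitUntil, block_on) is invented for illustration:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Completes once `ready()` reports the desired state.
struct WaitUntil<R: FnMut() -> bool> {
    ready: R,
}

impl<R: FnMut() -> bool + Unpin> Future for WaitUntil<R> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        let this = self.get_mut(); // ok: R: Unpin makes Self: Unpin
        // A real implementation would arrange a wake-up (via cx.waker())
        // before returning Pending; this sketch relies on being re-polled.
        if (this.ready)() { Poll::Ready(()) } else { Poll::Pending }
    }
}

// The smallest possible executor: poll in a loop with a no-op waker.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn noop_raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local we never move again after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // busy-wait; an embedded executor would sleep until an interrupt
    }
}

fn main() {
    let mut polls = 0;
    block_on(WaitUntil { ready: || { polls += 1; polls >= 3 } });
    assert_eq!(polls, 3);
}
```

On a microcontroller, `ready` would read the input pin, and the executor's idle branch would `wfi` until an interrupt fires; the `Future` itself stays exactly this simple.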


Couldn't have put it better myself.

I was just bitten by this when I thought Pin<Box<str>> would magically ensure my Box<str> doesn't change. However because of impl<T> Unpin for Box<T>, you can swap out the Box<str> and leave some unsafe code with dangling pointers.

Advanced Type-Level Shenanigans

I know Rust's type system is Turing-complete, but seeing crates like typenum or the previously mentioned frunk still blows my mind.

I remember reading through an article where the author implemented Brainfuck at compile time, and things like using traits to iterate over type-level zipper lists blew my mind.

Concurrent Data Structures

Writing concurrent data structures feels like black magic. How can you possibly reason about something when another bit of code might change pointers out from under you at any time?


Stuff I completely don't understand despite multiple tries:

  • Higher ranked trait bounds (and kinds in general), GATs
  • Executors, tasks, and how they relate to futures
  • Object safety: when can a trait not be used for trait objects?
  • Pin (specifically how it actually works)
  • Default match bindings: I still have no idea why the compiler chooses to borrow sometimes and move other times.
  • The structure of Cargo.toml (apart from dependencies)
  • Drop order: when do things actually get dropped? So many deadlocks caused by messing this up. Usually I end up adding extra curly braces everywhere to make sure things are dropped when I expect.
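On the drop-order point: locals drop in reverse declaration order, and the extra-curly-braces trick works by ending a scope early. A small demonstration (Noisy is a made-up type that logs its own drop):

```rust
// Records the order in which values are dropped.
struct Noisy(&'static str, std::rc::Rc<std::cell::RefCell<Vec<&'static str>>>);

impl Drop for Noisy {
    fn drop(&mut self) {
        self.1.borrow_mut().push(self.0);
    }
}

fn main() {
    use std::{cell::RefCell, rc::Rc};
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _first = Noisy("first", log.clone());
        let _second = Noisy("second", log.clone());
        // block ends: `_second` drops before `_first` (reverse declaration order)
    }
    assert_eq!(*log.borrow(), ["second", "first"]);
}
```

This is also why a `MutexGuard` declared before a value it should outlive can deadlock: the guard, declared first, is released last.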

Stuff I understand conceptually (I think), but never seem to work as expected when I use them:

  • Trait objects
  • Many of the iterator combinators (I can't seem to combine them properly except for map and filter)

Stuff I've never really tried:

  • Proc macros
  • async/await
  • Cow
  • Pin and its mystical-magical macro wrappers (all 8 of those crates).

Each time I see someone ask about async and state machines, I'm always all like "omg!, omg!, omg!, I can explain this so clearly, because I've been working so much with libevent. See, you know those complicated state-machines you always have to implement in read and write callbacks? Well, with async you don't need to do that any more!".

... and that's when I realize most people haven't actually used libevent, so then I realize I have to explain that as well and then I realize I'm about to hold a seminar about libevent just to explain why async is so fantastic.


A few weeks ago I started writing a blog post titled something akin to "non-blocking I/O with libevent, state-machines and the async future". The whole point is to explain why one ends up writing (annoying) state machines with non-blocking I/O, and how async helps immensely. When time permits I'll finish that, but until then...

Say you have a reader callback function that is called each time you've got new data from the network. Your protocol is initially line-based, and the first line is a special command, and the following lines are parameters related to that command. And the command is terminated by an empty line (sort-of like HTTP). Your reader will need to read lines as it gets new data, return to the dispatcher if it needs more data and when it enters next time keep track of which line it is on (is it the first command line, or a parameter line?). Also, another fun thing one sometimes forgets is that the reader can be fed multiple lines at once, so one can't simply process a line and then return to the dispatcher (unless it's a level-triggered callback), so one needs to keep iterating over complete lines within the read callback. I've written plenty of these handlers over the years, and I still managed to make such a mistake just the other day.

And now let's say that after the line-based protocol you can get binary chunks, and each chunk can determine what type of binary chunk comes next -- or if it should return to the line based protocol again. You may even end up needing to keep a stack of states (for nested protocols).

State machines are beautiful tools for these kinds of things, but once you've done a few of them you realize how error-prone they can be when you make them manually and new people fiddle around in them without properly understanding all the state transitions. (Somewhat helps to implement states as functions and function pointers rather than enums.. But still not perfect).
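A stripped-down sketch of the kind of machine described above, with all the state that async would otherwise keep for you held in one enum (State and on_line are invented names; real code would also need to buffer partial lines from the socket):

```rust
// Line protocol: first line is a command, following lines are parameters,
// and an empty line terminates the command (sort of like HTTP headers).
#[derive(Debug, PartialEq)]
enum State {
    AwaitingCommand,
    ReadingParams { command: String, params: Vec<String> },
}

// One step of the machine: consume a line, maybe emit a finished command.
fn on_line(state: State, line: &str) -> (State, Option<(String, Vec<String>)>) {
    match state {
        State::AwaitingCommand => (
            State::ReadingParams { command: line.to_string(), params: Vec::new() },
            None,
        ),
        State::ReadingParams { command, mut params } => {
            if line.is_empty() {
                // empty line terminates the command: emit it, reset the machine
                (State::AwaitingCommand, Some((command, params)))
            } else {
                params.push(line.to_string());
                (State::ReadingParams { command, params }, None)
            }
        }
    }
}

fn main() {
    let mut state = State::AwaitingCommand;
    let mut done = Vec::new();
    // Note the loop: one network read may hand you several lines at once,
    // which is exactly the "returned too soon from the callback" trap.
    for line in ["GET", "host: a", "", "PUT", ""] {
        let (next, emitted) = on_line(state, line);
        state = next;
        if let Some(cmd) = emitted {
            done.push(cmd);
        }
    }
    assert_eq!(done.len(), 2);
    assert_eq!(done[0].0, "GET");
    assert_eq!(done[0].1, vec!["host: a"]);
    assert_eq!(done[1].0, "PUT");
}
```

The async version of this is just two nested loops with `read_line().await` in the middle; the enum, the explicit transitions, and the reset-on-empty-line bookkeeping all disappear into the compiler-generated state machine.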

Aaaaanyway, async helps in the sense that it builds those state machines you'll inevitably end up implementing anyways, but they do it implicitly. But while that's a big part of it, I think what people don't realize is that they also help protect against common traps like the one I mentioned above where people return too soon from a callback. That, to me at least, is almost as important -- because those types of problems can be hidden for a while.

Welp, sans examples that's essentially what I wanted to say in that blog post.


Never before have I read so much about something and literally not understood a single word of it. At this point I'm convinced that there's a protocol incompatibility between the learning center of my brain and the fundamental concept of GATs.


Since it seems multiple people have a hard time understanding GATs, I'll post a link to what helped me get them to click.

The RFC goes into the specifics about how it'd be designed, but the linked section explains what it is.

I personally find them fascinating though.


That is all far too "meta meta" for me. Hope I never have to read any code that uses such things.