Is it safe to `mem::transmute` from `Option<NonZeroType>` to `ZeroType`?

I've been pondering lately about using Option in conjunction with non-nullable pointer types instead of raw pointers. Enums like Option benefit from the null pointer optimization: an Option of a non-nullable type (such as &T or NonNull<T>) is guaranteed to be the same size as a plain pointer.
I figure this means its in-memory representation necessarily has to use the value 0 to represent the None discriminant.
So it should be safe to just mem::transmute between the Option and the raw/full-range type. Is that correct, or am I missing something and this is somehow UB?

For instance, in the stdlib there is the following method definition for NonNull<T>:

    /// Creates a new `NonNull` if `ptr` is non-null.
    #[stable(feature = "nonnull", since = "1.25.0")]
    pub fn new(ptr: *mut T) -> Option<Self> {
        if !ptr.is_null() {
            Some(NonNull { pointer: NonZero(ptr as _) })
        } else {
            None
        }
    }
If I am correct, it should be completely safe (and potentially more efficient) to just use transmute:

    /// Creates a new `NonNull` if `ptr` is non-null.
    #[stable(feature = "nonnull", since = "1.25.0")]
    pub fn new(ptr: *mut T) -> Option<Self> {
        unsafe { mem::transmute(ptr) }
    }

It's probably correct, but transmute is a very dangerous tool to use, so if you can write your code without it, why would you use it?


NonNull<T> is marked #[repr(transparent)], so it has the exact same in-memory representation and calling convention as *const T.

This isn't documented, so I'm not 100% certain how safe it is to rely on it, but I'm about 99% certain because I don't think there's any reason this property would ever be removed.

EDIT: Sorry, I misread the question. The information above is correct but doesn't really answer the actual question in this thread.


If you wish to rely on it, I encourage you to submit a PR to the documentation to add a note about it being guaranteed (see the ones about Option<&T>, for example), and then the team(s) will make a decision about whether they're willing to commit to it forever (instead of it just being a subject-to-change optimization detail).


I suppose it could potentially lead to more efficient code. In simple cases the compiler can apparently omit redundant checks for null such as in this case:

    let ptr: Option<NonNull<_>> = NonNull::new(get_raw_pointer()); // checks if the pointer is null
    // ...
    match ptr { // check may be omitted
        Some(ptr) => { /* ... */ },
        None => { /* ... */ },
    }

However, when writing highly performant code I would probably not want to rely on that and just use raw pointers instead.
One example that does not get optimized (at least from what I've found) is the equality comparison (PartialEq) for Option<NonNull<T>>. It does two separate checks: one for the Option discriminant and one for the inner pointer. If we had specialization of generics, this could be optimized into a single comparison of the in-memory representation of the type.

If you are referring to NonNull, then this particular property is already documented.

Perhaps I've misread you, but that does not seem correct to me.
Unless you meant that Option<NonNull<T>> has the same in-memory representation as *const T (which the documentation does state, by the way).

Sorry, I'm the one who misread. I was answering the wrong question, which is why my reply must have seemed confusing.