In wgpu tutorials, I often see:

`Option<NonZeroU32>`
`Option<NonZeroU64>`

What is the advantage of this compared to plain `u32` / `u64`?
Is there a specific tutorial you have in mind?

I would use the `Option<NonZero<_>>` type only in some very limited circumstances, e.g. to specify through the type system that a value is optional but, when present, has to be nonzero. Even then it seems fishy, because if zero isn't allowed, other values probably aren't allowed either.
For WGPU specifically I couldn't say, but the `NonZero*` types have a niche (an unused value, 0) that `Option` can use for the enum discriminant tag; this allows `Option<NonZeroU32>` to be the same size as `u32` instead of doubling it.
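The niche optimization above can be checked directly with `size_of`. The 4-byte result for `Option<NonZeroU32>` is guaranteed by Rust's layout optimization for `NonZero` types; the 8-byte result for `Option<u32>` is what you get on typical targets, where the discriminant needs an extra byte plus alignment padding:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // A plain u32 and a NonZeroU32 are both 4 bytes.
    assert_eq!(size_of::<u32>(), 4);
    assert_eq!(size_of::<NonZeroU32>(), 4);

    // Option<NonZeroU32> reuses the forbidden value 0 to encode None,
    // so no separate discriminant is needed: still 4 bytes.
    assert_eq!(size_of::<Option<NonZeroU32>>(), 4);

    // Option<u32> has no niche to exploit, so the discriminant
    // plus padding doubles the size on typical targets.
    assert_eq!(size_of::<Option<u32>>(), 8);
}
```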
My original question was not clear. Let me rephrase it: why does wgpu not just use a plain `u32`?
It explicitly signals (and enforces) that 0 is an invalid value, and that the whole value can be absent (`None`), rather than relying on the user to read the docs to know that 0 is treated as an "empty" value for this API.
Self-documentation, essentially. Saying the number of mip levels (or whatever) is 0 is quite different from saying it's `None`. It also means you don't need to think about whether the "default" sentinel is 0 or -1.
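A minimal sketch of how such a field reads at the use site. The names `TextureDesc`, `mip_level_count`, and `effective_mip_levels` are illustrative, not wgpu's actual API; the point is that `None` and "a count of at least 1" are distinct states in the type, with no magic sentinel:

```rust
use std::num::NonZeroU32;

// Hypothetical descriptor: None means "let the library pick a default",
// while Some(n) is guaranteed by the type to be >= 1. With a plain u32,
// the API would have to document that 0 means "use the default".
struct TextureDesc {
    mip_level_count: Option<NonZeroU32>,
}

fn effective_mip_levels(desc: &TextureDesc) -> u32 {
    match desc.mip_level_count {
        Some(n) => n.get(), // nonzero by construction
        None => 1,          // fall back to a single mip level
    }
}

fn main() {
    // NonZeroU32::new returns Option<NonZeroU32>, so it slots in directly.
    let explicit = TextureDesc { mip_level_count: NonZeroU32::new(4) };
    let default = TextureDesc { mip_level_count: None };
    assert_eq!(effective_mip_levels(&explicit), 4);
    assert_eq!(effective_mip_levels(&default), 1);
}
```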
Theoretically it should be something like `enum Parameter { Default, Present(NonZeroU32) }`, but then you have to reimplement all of `Option`'s methods, and it's less immediately obvious how to use it.
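A sketch of that alternative enum, showing one of the many helpers (here a hypothetical `value_or`, analogous to `Option::map_or`) that `Option` would otherwise provide for free:

```rust
use std::num::NonZeroU32;

// More descriptive than Option<NonZeroU32>, but it loses Option's
// combinators (map, unwrap_or, the ? operator, ...), which would
// each have to be reimplemented by hand.
enum Parameter {
    Default,
    Present(NonZeroU32),
}

impl Parameter {
    // One helper Option users get for free via map_or / unwrap_or.
    fn value_or(&self, default: u32) -> u32 {
        match self {
            Parameter::Default => default,
            Parameter::Present(n) => n.get(),
        }
    }
}

fn main() {
    assert_eq!(Parameter::Default.value_or(1), 1);
    let p = Parameter::Present(NonZeroU32::new(8).unwrap());
    assert_eq!(p.value_or(1), 8);
}
```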