Choosing the 'best' representation for unsigned integers at compile time

I am a library author; sosecrets-rs is my crate.

I am writing a type `pub struct RTSecret<T, const MEC: usize>(T, UnsafeCell<usize>);` such that if `self.1 > MEC`, the secret won't be exposed (it will either panic or return `Err`); otherwise, `self.1` is incremented (see the caveat below) and the exposed secret is returned.

As a library author, I do not know what my user will choose for `MEC`. For example, they might write something like `let secret = RTSecret::<i32, 69>::new(70);`. Since `MEC = 69` in this case, I 'know' `self.1` will never be larger than 69, so I could optimize `RTSecret` by making its `1` field `UnsafeCell<u8>` instead of `UnsafeCell<usize>`.

Of course `self.1 += 1` won't compile as written, since the field is `UnsafeCell<usize>`; it is just shorthand to explain the context of my question.
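For context, here is a minimal sketch of the check-and-increment described above, assuming a single-threaded caller and an `Err` on overexposure; `expose_secret` and the error message are illustrative names, not necessarily the crate's actual API:

```rust
use std::cell::UnsafeCell;

// Sketch only: counter is a plain usize here; the question is about
// shrinking this field to u8/u16/u32 based on MEC.
pub struct RTSecret<T, const MEC: usize>(T, UnsafeCell<usize>);

impl<T, const MEC: usize> RTSecret<T, MEC> {
    pub fn new(value: T) -> Self {
        RTSecret(value, UnsafeCell::new(0))
    }

    // Returns the secret at most MEC times, then errors.
    pub fn expose_secret(&self) -> Result<&T, &'static str> {
        // SAFETY: UnsafeCell is !Sync, and this sketch assumes no
        // reentrant calls, so no aliasing mutable access occurs.
        let count = unsafe { &mut *self.1.get() };
        if *count >= MEC {
            Err("maximum exposure count exceeded")
        } else {
            *count += 1;
            Ok(&self.0)
        }
    }
}

fn main() {
    let secret = RTSecret::<i32, 2>::new(70);
    assert!(secret.expose_secret().is_ok());
    assert!(secret.expose_secret().is_ok());
    assert!(secret.expose_secret().is_err()); // third exposure refused
}
```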

I tried to accomplish this, but to no avail. I wonder if it is even possible before const generic expressions are stabilized.

```rust
fn main() {
    use std::cell::UnsafeCell;

    // Counts how many bits are needed to represent N.
    const fn bits_needed<const N: usize>() -> usize {
        let mut n = N;
        let mut count = 0;

        while n > 0 {
            n >>= 1;
            count += 1;
        }
        count
    }

    // Maps the bit count to the width of the smallest fitting unsigned type.
    const fn closest_unsigned_integer_type<const N: usize>() -> u8 {
        let bits = bits_needed::<N>();

        if bits <= 8 {
            8
        } else if bits <= 16 {
            16
        } else if bits <= 32 {
            32
        } else {
            64
        }
    }

    trait WhichRepresentation<const N: u8> {
        type Representation;
    }

    impl WhichRepresentation<8> for () {
        type Representation = u8;
    }

    impl WhichRepresentation<16> for () {
        type Representation = u16;
    }

    impl WhichRepresentation<32> for () {
        type Representation = u32;
    }

    impl WhichRepresentation<64> for () {
        type Representation = u64;
    }

    trait ChooseRepresentation {
        type Representation;
    }

    struct Choose<const N: usize> {}

    impl<const N: usize> Choose<N> {
        const N: usize = N;
    }

    // This is where it fails on stable: using an expression involving the
    // generic parameter `N` as a const generic argument requires the
    // unstable `generic_const_exprs` feature.
    impl<const N: usize> ChooseRepresentation for Choose<N> {
        type Representation = <() as WhichRepresentation<
            { closest_unsigned_integer_type::<N>() },
        >>::Representation;
    }

    // <() as ChooseRepresentation::<256>>::WhatSize
    const Q: usize = 69;
    // println!("{:?}",
    // <() as WhichRepresentation::<{<() as ChooseRepresentation::<{Q}>>::WhatSize}>>::Representation::MAX);
    // pub struct RTSecret<T, const MEC: usize>(T, UnsafeCell::<<() as WhichRepresentation::<{<() as ChooseRepresentation::<{MEC}>>::WhatSize}>>::Representation>);
}
```

I'll be interested to see if there are other answers but I think you would have to use a macro. My reasoning is that you are actually changing the type based on the logic so, as you say, you need the code to change at compile time.

Personally, I think what I am hoping to achieve is impossible right now, until the `generic_const_exprs` feature lands on stable.
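For what it's worth, the helper `const fn`s themselves evaluate fine on stable; it is only feeding their result back in as a const generic argument that needs the unstable feature. A quick standalone check of the same logic, rewritten with ordinary parameters for brevity:

```rust
// Same logic as bits_needed::<N>() above, as a plain const fn.
const fn bits_needed(mut n: usize) -> usize {
    let mut count = 0;
    while n > 0 {
        n >>= 1;
        count += 1;
    }
    count
}

// Same logic as closest_unsigned_integer_type::<N>() above.
const fn closest_unsigned_integer_type(n: usize) -> u8 {
    let bits = bits_needed(n);
    if bits <= 8 {
        8
    } else if bits <= 16 {
        16
    } else if bits <= 32 {
        32
    } else {
        64
    }
}

fn main() {
    assert_eq!(bits_needed(69), 7); // 69 = 0b100_0101
    assert_eq!(closest_unsigned_integer_type(69), 8); // fits in a u8
    assert_eq!(closest_unsigned_integer_type(300), 16); // needs a u16
    println!("all checks passed");
}
```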


Would require `generic_const_exprs`, I think. I'm curious why you decided to move away from typenum?

Personally, I wouldn't bother with this optimization, as I expect applications to have a handful of secrets, not millions.


It seems that without `generic_const_exprs`, what I want is impossible.

I redesigned my tuple struct so that it now takes a new type parameter, `SIZE: typenum::Unsigned = typenum::consts::U0`. Users of the struct can then specify whether they want a smaller-sized type to represent the counter field: instead of the default `usize`, they can opt into `u8` in their program, at compile time.
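A minimal sketch of that redesign, with hand-rolled marker types standing in for typenum's type-level integers (`U0`, `U8`, the `CounterSize` trait, and the field names here are illustrative, not sosecrets-rs's actual API):

```rust
use std::cell::UnsafeCell;

// Marker types playing the role of typenum's unsigned type-level integers.
struct U0;
struct U8;

// Maps the marker the user picks to a concrete counter type.
trait CounterSize {
    type Repr;
}
impl CounterSize for U0 {
    type Repr = usize; // default representation
}
impl CounterSize for U8 {
    type Repr = u8; // opt-in smaller representation
}

// The counter field's type now follows the SIZE parameter,
// which defaults to U0 (i.e. usize) when the user writes nothing.
struct RTSecret<T, const MEC: usize, SIZE: CounterSize = U0> {
    value: T,
    counter: UnsafeCell<SIZE::Repr>,
}

fn main() {
    // Default: the counter is a usize.
    let _default = RTSecret::<i32, 69> {
        value: 70,
        counter: UnsafeCell::new(0),
    };
    // Opt-in: the counter is a u8, saving space.
    let _small = RTSecret::<i32, 69, U8> {
        value: 70,
        counter: UnsafeCell::new(0),
    };
    assert!(std::mem::size_of::<UnsafeCell<u8>>() < std::mem::size_of::<UnsafeCell<usize>>());
}
```

The design choice here is that the size decision is pushed to the user's call site, where the concrete type is known at compile time, which sidesteps the need for `generic_const_exprs` entirely.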


Updated Playground