Know stack size at compile/run time

Hi all.

I started a toy project whose goal is to provide a no_std-compatible Vec-like structure that lives entirely on the stack, similar to what crates like arrayvec or const-arrayvec do.

The only thing I wanted to abstract away from the library consumer was the re-allocation that happens when the length of the "vec" is about to exceed the capacity of the inner array. So basically, the idea was:

When push() is called and len == capacity, allocate a new array with capacity = prev_capacity << 1 and copy_from_slice the previous contents into the new array instance.
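Very roughly, the grow step would look something like the sketch below (not a real implementation: stable Rust can't write StackVec<T, { 2 * N }> without the unstable generic_const_exprs feature, so the doubled capacity M is passed explicitly by the caller, and the Copy + Default bounds are only there to initialise the unused tail of the new array):

pub struct StackVec<T, const N: usize> {
    len: usize,
    arr: [T; N],
}

impl<T: Copy + Default, const N: usize> StackVec<T, N> {
    /// Copy the current contents into a new StackVec with a larger capacity `M`.
    pub fn grow_into<const M: usize>(&self) -> StackVec<T, M> {
        assert!(M >= N, "the new capacity must not shrink");
        let mut arr = [T::default(); M];
        arr[..self.len].copy_from_slice(&self.arr[..self.len]);
        StackVec { len: self.len, arr }
    }
}

// e.g. doubling a 16-element vec:
// let bigger: StackVec<u8, 32> = smaller.grow_into::<32>();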

The problems that I found were:

  • Since everything is stored on the stack, what matters is not only the length of the internal array of our structure but also the size of the T it contains.
    So the re-allocations need to take core::mem::size_of::<T>() and the capacity into account, to make sure that re-allocating will not cause a stack overflow (see the sketch after this list).
  • Stack sizes depend on the CPU model, not only on the architecture. And I'm not sure whether it's possible to get the available stack size, either at compile time or at runtime, to limit the maximum re-allocation capacity.
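Roughly, the check I have in mind looks like this (only a sketch; getting a real value for the stack budget is exactly the open question below):

use core::mem::size_of;

// Hypothetical guard: would `new_cap` elements of `T` still fit in some byte
// budget? `stack_budget_bytes` is a placeholder for the value I don't know
// how to obtain yet.
fn fits_in_budget<T>(new_cap: usize, stack_budget_bytes: usize) -> bool {
    new_cap
        .checked_mul(size_of::<T>())
        .map_or(false, |bytes| bytes <= stack_budget_bytes)
}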

I need a way to figure out the stack size, or the stack memory left, on the machine where the code is running, so that I can make sure a stack overflow never happens. Does anyone know whether there's a way to check for that? And if not, is there a better way to avoid the stack-overflow problem?

Thanks!

IIUC, it also depends on the specific operating system.

I'm not aware of how the OS influences the stack size, but I assume it will also affect the available size, as you correctly said.

I'm wondering how you're planning to reallocate a stack array. AFAIK there's no way to reallocate a stack array to increase its size.


That's a really good question, since there's no way to do it as far as I've seen.

I've been looking for info on whether an enum value reserves memory for all of its possible variants (i.e. is always as large as the largest one). If it does, I might need to switch to a different project, TBH, since I'm almost out of ideas. But if it doesn't, my idea was to do something like this:

#[derive(Debug, Copy, Clone)]
pub enum OuterStackVec<T> {
    Cap4(StackVec<T, 16>),
    Cap5(StackVec<T, 32>),
    Cap6(StackVec<T, 64>),
    Cap7(StackVec<T, 128>),
    Cap8(StackVec<T, 256>),
    Cap9(StackVec<T, 512>),
    Cap10(StackVec<T, 1024>),
    Cap11(StackVec<T, 2048>),
    Cap12(StackVec<T, 4096>),
    Cap13(StackVec<T, 8192>),
    Cap14(StackVec<T, 16384>),
    Cap15(StackVec<T, 32768>),
    Cap16(StackVec<T, 65536>),
    Cap17(StackVec<T, 131072>),
}

#[derive(Debug, Copy, Clone)]
pub struct StackVec<T, const N: usize> {
    /// Current length of the filled positions in `arr`.
    len: usize,
    /// Container where the `T`s are stored.
    arr: [T; N],
}

It does

Any enum value consumes as much memory as the largest variant for its corresponding enum type, as well as the size needed to store a discriminant.

From the reference

It absolutely does – how else would it be possible to represent an enum under all possible (runtime) circumstances?
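A quick way to check the claim, assuming a typical 64-bit target (the exact numbers are illustrative, not guaranteed by the default repr):

use core::mem::size_of;

// Neither variant has a usable niche, so the enum needs a separate
// discriminant on top of the space for its largest variant.
#[allow(dead_code)]
enum Demo {
    Small(u8),
    Big([u64; 8]),
}

fn main() {
    println!("{}", size_of::<[u64; 8]>()); // 64
    println!("{}", size_of::<Demo>());     // 72 on x86_64: 64 bytes + discriminant, padded to alignment
}

The same reasoning applies to OuterStackVec<T> above: every value of it is at least as large as its biggest variant (the 131072-element one), no matter which variant it currently holds.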
