About one possible stack optimization

In code like this, is it a good idea for rustc to have an optimization that lets it allocate only n u32 values on the stack?

fn foo(n: usize) {
    const MAX: usize = 1_000;
    let a = &mut [0u32; MAX][..n];
    println!("{:?}", a);
}

fn main() {}

I've found myself in the unfortunate position of debugging erratic behavior that only occurs in the release build. I've identified some bad values, and now I'm digging through registers and memory offsets trying to track down where they came from. The difficulty and tedium of this task are aggravating and discouraging.

I can't imagine how much more difficult this task would become if the stack frame resized itself. I imagine a buffer changing size might push the other local variables around in the stack frame. I consider this a con.

I don't quite know how this could be implemented in machine code. If the other local variables were required to sit at the beginning of the stack frame, ahead of the variably sized buffer, then my concern holds no water. Though in that case you could only optimize a single buffer, no more.

It would basically be a variable-length array, which isn't that exotic.

But its maximum size is bounded, and there's no need for new syntax, if the size optimization is guaranteed (and performed by the front-end) even in debug builds.
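To make the "bounded maximum, runtime length" shape concrete with today's syntax, here is a minimal sketch (the BoundedBuf type and its MAX bound are hypothetical names, purely for illustration, not the proposed optimization itself):

```rust
// Hypothetical illustration: a stack-allocated buffer with a compile-time
// upper bound and a runtime-chosen logical length.
const MAX: usize = 1_000;

struct BoundedBuf {
    data: [u32; MAX], // MAX elements are always reserved on the stack
    len: usize,       // only the first `len` are logically in use
}

impl BoundedBuf {
    fn new(n: usize) -> Self {
        assert!(n <= MAX, "requested length exceeds the fixed bound");
        BoundedBuf { data: [0; MAX], len: n }
    }

    fn as_slice(&self) -> &[u32] {
        &self.data[..self.len]
    }
}

fn main() {
    let buf = BoundedBuf::new(3);
    println!("{:?}", buf.as_slice()); // prints "[0, 0, 0]"
}
```

The proposed optimization would, in effect, let the compiler shrink `data` to `len` elements when it can prove the bound.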

IMO this use case would be best addressed by on-stack dynamically sized types.

Probably the best way of handling this would be the way Ada does it, using a secondary stack.

This allows you to return variably sized data (on the secondary stack).

For example, suppose you have a variable-length array type:

   type Element_Array is array (Integer range <>) of Element_T;

then you could return one from a function:

  function Vary (N : Integer) return Element_Array is (1 .. N => 0);
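In today's Rust this pattern can be approximated (a hedged sketch, not an existing language feature) by having the caller own a scratch buffer that plays the role of the secondary stack, while the callee hands back a runtime-sized slice into it; the names vary and scratch are made up for illustration:

```rust
// The caller's buffer stands in for Ada's secondary stack: the callee
// returns variably sized data without any heap allocation.
fn vary(n: usize, scratch: &mut [u32]) -> &mut [u32] {
    let out = &mut scratch[..n]; // length chosen at runtime
    out.fill(0);
    out
}

fn main() {
    let mut scratch = [0xFFFF_FFFFu32; 1_000]; // caller-owned "secondary stack"
    let v = vary(4, &mut scratch);
    println!("{:?}", v); // prints "[0, 0, 0, 0]"
}
```

The borrow checker enforces what Ada's runtime does by convention: the returned slice cannot outlive the buffer it lives in.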

Basically, you want alloca in Rust, which is covered here: https://github.com/rust-lang/rfcs/issues/618
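Until something along those lines lands, a common stable workaround is to reserve a bounded buffer and expose only a runtime-sized slice of it. A small sketch (with_scratch is a made-up helper name, and the fixed MAX bound is the price paid for not having a real alloca):

```rust
// Approximation of a bounded alloca: reserve MAX slots on the stack,
// hand the caller's closure a slice of exactly `n` of them.
fn with_scratch<R>(n: usize, f: impl FnOnce(&mut [u32]) -> R) -> R {
    const MAX: usize = 1_000;
    assert!(n <= MAX, "requested size exceeds the fixed bound");
    let mut buf = [0u32; MAX]; // always reserves MAX slots; alloca would reserve n
    f(&mut buf[..n])
}

fn main() {
    let sum: u32 = with_scratch(5, |s| {
        for (i, x) in s.iter_mut().enumerate() {
            *x = i as u32;
        }
        s.iter().sum()
    });
    println!("{}", sum); // prints "10"
}
```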

Isn't it up to LLVM to notice that not all of the alloca [1000 x i32] ends up being used?