I know that for performance reasons, one large contiguous chunk of heap memory is preferred over many smaller chunks. Does the same principle apply to stack memory in Rust? In my use case there is a static array containing about 1600 slices. The slices have different lengths, but each is around 2500 elements. Performance is a great concern here. Does the preference for one large contiguous region apply to stack memory too? An alternative to an array of slices is to have a single large array and then build a separate index array for addressing into it. Any opinion on which approach might be better?
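For concreteness, here is a rough sketch of the two layouts I mean (tiny toy data, and all the names are made up for illustration):

```rust
// Approach 1: an array of slices, each slice its own static.
static ROW_0: [u16; 3] = [1, 2, 3];
static ROW_1: [u16; 2] = [4, 5];
static TABLE: [&[u16]; 2] = [&ROW_0, &ROW_1];

// Approach 2: one flat array plus an index of (start, len) pairs.
static FLAT: [u16; 5] = [1, 2, 3, 4, 5];
static INDEX: [(usize, usize); 2] = [(0, 3), (3, 2)];

// Look up the i-th "slice" in the flat layout via the index array.
fn row(i: usize) -> &'static [u16] {
    let (start, len) = INDEX[i];
    &FLAT[start..start + len]
}

fn main() {
    // Both layouts expose the same logical rows.
    assert_eq!(row(1), TABLE[1]);
    println!("{:?}", row(1)); // [4, 5]
}
```

In the real code the data is of course much larger (about 1600 rows of ~2500 elements each), but the shape of the two alternatives is the same.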
AFAIK the biggest benefit of contiguous memory shows up when iterating. If the data you iterate over has (roughly) the same order in memory, cache misses are avoided more often. So if linear iteration over all the data in one particular order is a common operation for you, then the “one huge array” approach might more reliably have (slightly) better performance.
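To make the iteration point concrete, here is a hedged sketch (all names invented) of the same linear pass expressed over one flat buffer versus over an array of slices; the results are identical, only the memory access pattern can differ:

```rust
// One linear pass over contiguous memory: the hardware prefetcher's
// best case, since addresses increase monotonically.
fn sum_flat(flat: &[u16]) -> u64 {
    flat.iter().map(|&x| x as u64).sum()
}

// Same result, but each slice may live somewhere else in memory,
// so the access pattern depends on where the slices ended up.
fn sum_slices(slices: &[&[u16]]) -> u64 {
    slices.iter().flat_map(|s| s.iter()).map(|&x| x as u64).sum()
}

fn main() {
    let flat = [1u16, 2, 3, 4, 5];
    let slices: [&[u16]; 2] = [&flat[0..3], &flat[3..5]];
    assert_eq!(sum_flat(&flat), sum_slices(&slices)); // both 15
}
```

Note that in this toy example the slices happen to point back into the flat buffer, so the two traversals touch the same memory; the cache-behavior difference only appears when the slices are genuinely scattered.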
Also note that you are apparently talking about static data, which does not live on the stack, so calling it “stack memory” would be wrong.
Lots of individual allocations can still perform the same as one contiguous allocation if they happen to land next to each other anyway. With dynamic allocations this often doesn’t happen, even when all the allocations are made back to back, because the allocator can hand you memory that was previously freed in lots of different places. Static memory, on the other hand, is “allocated” at compile time and never freed, so it will usually follow a much more predictable layout, since there are never any gaps from freed data to re-fill. So I believe it’s quite likely that your static slices end up right next to each other in static memory, which is an argument that “this principle is not the same for static memory”. But don’t quote me on this; that’s just reasoning from first principles, and I haven’t tested how, or in which order, static data is actually arranged.
Regarding actual stack memory, the situation should be similar: unlike heap memory, different variables within the same function will reliably end up close to each other at run time, and even between different functions, their stack frames will sit next to each other.