For all dynamically sized types (Vec, HashMap), ... declare them up front, reserve the right size, and never insert/push beyond that size.
This is more restrictive than a custom allocator: we are not only saying "we will never use more than X bytes total", we are saying, for each object, "this will never use more than X bytes".
If so, is there any good guide on this style of programming ?
There is no style rule or even preferred style for this.
If performance is critical and you know the maximum size you will need, then by all means specify the capacity of your containers. If neither of those applies, why bother? Doing so may hurt your memory usage and/or performance, as you permanently reserve excess memory that is not needed most of the time.
The best programming style for anything depends on your requirements. Sometimes we just have to make decisions for ourselves rather than delegate to some supposed common wisdom.
The XY problem is that I am doing gaming on wasm32, where I am limited to 4GB.
I'm not 100% sure yet, but it may be preferable to know up front that I can only support X characters and Y projectiles, and when the limit is hit, simply not draw them, rather than letting Vec::push grow and hitting OOM.
I agree that this alloc-up-front style looks ugly, but it has the benefit that in edge cases it lets us degrade to "don't draw everything" instead of an OOM crash on wasm32.
I generally think of "style" and "practical" as independent things; "orthogonal", as intellectuals like to say.
That stylish jacket one might like to wear may not be practical in a snowstorm, though.
So it goes when writing code. Writing clear, idiomatic code in whatever style (OOP, functional, etc.) that looks good may not be practical if it hurts performance or memory usage.
Your question hardly warrants a style guide, at least not beyond what I said in my previous post. At the end of the day you have to do what works in your case.
Games have had these kinds of constraints for a long time, but it's not just due to memory limitations:
Preallocating, and never reallocating, allows the game engine to make assumptions about pointers never changing.
Many games fall into the domain of soft real time applications, and avoiding dynamic memory allocations is one of the strategies employed to be able to fulfill the real time constraints.
I also know there are high-assurance systems written in Ada that disallow dynamic memory allocations past the very early initialization stages.
So to answer your question: Yes, preallocating memory up-front is definitely a thing. I don't know of any particular guide, but generally speaking you're going to end up declaring a bunch of constants with your limits, and adjusting them as needed during development.
I think this is delving into a pedantic argument over the definition of words.
What I'm after here is a practical guide / list of best practices for this type of programming (real-time gaming within 4 GB), so I don't have to rediscover them the slow way via trial and error.
Likely true. However it is necessary to find out what people mean when they use words, which may not be what we expect. As in this case, we have already moved on from your original question about style vs practicality of specifying Vectors with a known and fixed capacity to general advice about writing games in a (relatively) constrained system.
I have no experience of writing games for the web or using Rust/WASM. What kind of game do you have in mind?
Practically, for 4 GiB, I don't think you will actually benefit from focusing on memory allocation. Most of your memory is not going to be occupied by game entities but by assets (meshes, textures, sounds, etc.) that are loaded only once. (Many of these may be offloaded to the GPU after startup, but not all of them.)
Yes, you should put limits on the number of game objects, but when choosing that upper bound, you're very likely to hit limits like “frame rate too slow” well before you hit the limit “out of memory”, and once you've limited the objects for any reason, that practically limits the memory usage too.
Pre-allocating everything is a valid strategy (and is commonly used in embedded applications which have extremely small memory limits), but I think that for your situation, it will be mostly a waste of effort to code in that style — 99% of the benefits will come from just putting size limits in your game logic. (Of course, you should still pre-allocate when it's easy: inventory: Vec::with_capacity(INVENTORY_SLOTS) and so on.)
(Of course, all this does depend on the kind of game you're writing. The above applies to lots of kinds of games, but not all.)
Thinking about it a bit more, I suspect that rediscovering them the slow way via trial and error may be the best way, because what works best will depend on what actually goes on in your game, or any program for that matter. Going through that trial and error is also a learning exercise, after which you will more easily choose good approaches for your next problems.
I also suspect that creating Vectors and such with a capacity will only save you a few memory allocations as your usage grows. That may not even show up in the grand scheme of things in your program.
Also, never use a linked list.
And there are good arguments for using a structure of arrays for your game objects rather than an array of structures. See "Entity Component Systems".
A ton of C code was written this way, to not have to deal with malloc/free. The end result was lots of programs that crashed when someone tried to run them on larger problems. Panicking for that in Rust is better than UB in C, but still not good.
Be smart about your Vecs -- don't reallocate them every frame, for example -- but use them. And any array that's a kilobyte or more is probably better as a Vec, even if you never end up resizing it.
Sure, you could say "16383 vehicles ought to be enough for anyone", but that just leads to people making mods to the core of the game (Steam Workshop::More Vehicles) to get rid of the limits.
Sure, you could always allocate space for exactly 262,144 trees every time you run, but that leads to mods (Steam Workshop::Unlimited Trees Mod v1.12) to change the limit again. (And maybe the moon map never needed to allocate anywhere near that much memory for trees.)
Virtual memory might let you get away with "allocate a million elements just in case" for everything (if you're on 64-bit and thus have the address space), but we have good tools now and don't need to do that.
(Sometimes, though, you might want something more like std::deque - cppreference.com rather than a straight Vec, to get non-amortized bounds on certain operations and stronger address-stability guarantees.)
Now, if you really are working on a fixed-size problem (airplane flight controls being a classic example), then absolutely pre-allocate everything at exactly the right size and never touch dynamic memory.