Wasm32, stack size, [[u32; 64]; 64]

According to [1] the default wasm stack size is 1MB.

Consider MapTile([[u32; 64]; 64]), a perfectly reasonable declaration.

A MapTile is 64 × 64 × 4 bytes = 16 KiB, so 64 of them on the stack would cause an overflow.
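For reference, the arithmetic (the 1MB figure is the default from [1]):

```rust
// One tile: 64 * 64 * 4 bytes = 16 KiB.
fn tile_size() -> usize {
    std::mem::size_of::<[[u32; 64]; 64]>()
}

// 64 tiles already fill a 1 MiB stack: 64 * 16 KiB = 1 MiB.
fn tiles_per_mib() -> usize {
    (1 << 20) / tile_size()
}
```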

The current options seem to be:

  1. increase the stack size (prefer to avoid this if possible)

  2. put MapTile on the heap, i.e. MapTile(Box<...>), but then a Vec<MapTile>, instead of being one big contiguous block of memory, becomes a vector of pointers to separate allocations, which causes all kinds of fragmentation.
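To make the trade-off concrete, here is a sketch of the two layouts (BoxedMapTile is an illustrative name for option 2):

```rust
// The original type: 64 * 64 * 4 bytes = 16 KiB stored inline,
// wherever the value lives (stack, Vec buffer, ...).
struct MapTile([[u32; 64]; 64]);

// Option 2: the tile data lives on the heap; the struct itself is
// pointer-sized, so a Vec of these holds pointers, not tile data.
struct BoxedMapTile(Box<[[u32; 64]; 64]>);

fn sizes() -> (usize, usize) {
    (
        std::mem::size_of::<MapTile>(),
        std::mem::size_of::<BoxedMapTile>(),
    )
}
```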

I am interested in hearing how people deal with 'big structs' in wasm.

[1] Make stack size configurable · Issue #479 · rustwasm/wasm-pack · GitHub

How about leaving MapTile as-is? Then you can keep using Vec<MapTile> without fragmentation, and only Box a tile explicitly when you need a single value rather than a whole vector.
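A sketch of that approach (function names are illustrative):

```rust
struct MapTile([[u32; 64]; 64]);

// Tiles stored by value in a Vec live in one contiguous heap
// allocation; the stack only ever holds one tile at a time here.
fn make_map(n: usize) -> Vec<MapTile> {
    (0..n).map(|_| MapTile([[0; 64]; 64])).collect()
}

// Box a tile only when a single value is needed on its own.
// (Caveat: `Box::new` may still construct the 16 KiB value on the
// stack before moving it to the heap, especially in debug builds.)
fn make_one() -> Box<MapTile> {
    Box::new(MapTile([[0; 64]; 64]))
}
```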


I would start with asking whether a struct that's 4 pages of memory is actually "perfectly reasonable".

For example, one might just pick a different tile size, since the name implies that you're building whatever it is out of multiple tiles anyway. [[T; 16]; 16] might be nice, since then you can exactly represent the position inside the tile with just a u8 and no invalid values.
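For example, with a 16 × 16 tile an in-tile position packs exactly into a u8, one nibble per axis, with every bit pattern valid; a quick sketch:

```rust
// Pack a (row, col) position inside a 16x16 tile into one u8:
// high nibble = row, low nibble = column. All 256 values are valid.
fn pack(row: u8, col: u8) -> u8 {
    debug_assert!(row < 16 && col < 16);
    (row << 4) | col
}

fn unpack(pos: u8) -> (u8, u8) {
    (pos >> 4, pos & 0x0F)
}
```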


FWIW, wasm memory pages are 64 KiB in size, so it would be a quarter of a page. Point still stands though.

Yeah, by "page" I was referring to the classic linux getconf PAGESIZE. But modern pages can definitely be all over the place -- even "huge pages".


We encountered this exact problem at work. For context, my company is using WebAssembly as a way to make portable ML applications[1], and one thing we work with all the time is massive tensors.

Originally, I started storing these as arrays on the stack (e.g. [[[[u8; 3]; 244]; 244]; 1] for a u8[1, 244, 244, 3] image tensor), but we started running into "spurious" segfaults which I later tracked down to stack overflows (thankfully, wasmer sets up its memory so we segfault instead of clobbering the next thing in memory).

The initial hack fix was to set rustflags in .cargo/config.toml so we used a larger stack by default, but obviously we ran into the issue again as we started building more complex pipelines that used larger tensors.
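For reference, that rustflags approach looks roughly like this; `-zstack-size` is the flag wasm-ld accepts, and the value here is just an example, not a recommendation:

```toml
# .cargo/config.toml
[target.wasm32-unknown-unknown]
# Ask the linker (rust-lld / wasm-ld) for a larger shadow stack.
rustflags = ["-C", "link-arg=-zstack-size=8388608"]  # 8 MiB
```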

My solution was to move all our tensors to the heap, but another solution I want to investigate is storing the buffers in a static variable when they are known and have a fixed size.

This gives us the benefit of a) guaranteeing that the memory use will be bounded with minimal duplication (no need to re-allocate and initialize each tensor's buffer on each run), and b) allowing static analysis to better predict how much memory a particular WebAssembly module will require at runtime. That last bit is particularly useful because we want to run on microcontrollers.
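A minimal sketch of the static-buffer idea, assuming a single fixed tensor shape (the names, the Mutex wrapper, and the 244 × 244 × 3 shape are illustrative): the buffer lives in the module's data section, so it counts against neither the stack nor the allocator, and its size is visible up front.

```rust
use std::sync::Mutex;

// A fixed-shape image tensor (height x width x channels).
type ImageTensor = [[[u8; 3]; 244]; 244];

// The buffer is allocated once, statically; no per-run allocation.
static SCRATCH: Mutex<ImageTensor> = Mutex::new([[[0; 3]; 244]; 244]);

// Borrow the static buffer for the duration of one operation.
fn with_scratch<R>(f: impl FnOnce(&mut ImageTensor) -> R) -> R {
    f(&mut SCRATCH.lock().unwrap())
}
```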

  1. <shameless-plug>
    The ML world is really fragmented at the moment, with half a dozen production-grade frameworks that each have their own libraries reimplemented for each language and runtime environment.
    Our goal is to improve developer experience and reduce integration time by "containerising" the ML part of your application.
    Flick me a DM if any of those buzzwords sound interesting to you. We're hiring :slightly_smiling_face:
    </shameless-plug> ↩︎

