[WASM/NodeJS] Allocating a Vec<T> with large capacity causes RuntimeErrors

I have 2 crates, interpreter and ffi. The ffi crate depends on the interpreter crate.
The interpreter crate exposes an Interpreter type, which the ffi crate wraps to expose that API to the JavaScript world. When instantiated, this Interpreter type creates a stack backed by a Vec<T> with a capacity of 16 MB.
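
To give an idea of the shape of the code, here is a minimal sketch (not the actual source: Frame and STACK_CAPACITY are illustrative stand-ins, while Interpreter, Memory, and their new/default constructors match the real names that show up in the trace below):

```rust
// interpreter crate (sketch): the stack is pre-allocated in Memory::default().
const STACK_CAPACITY: usize = 16 * 1024 * 1024; // intended as a 16 MB stack

#[derive(Clone, Copy)]
pub struct Frame([u64; 4]); // stand-in for the real stack-slot type

pub struct Memory {
    stack: Vec<Frame>,
}

impl Default for Memory {
    fn default() -> Self {
        Memory {
            // This is the allocation that fails in the trace below.
            stack: Vec::with_capacity(STACK_CAPACITY),
        }
    }
}

pub struct Interpreter {
    memory: Memory,
}

impl Interpreter {
    pub fn new() -> Self {
        Interpreter { memory: Memory::default() }
    }
}
```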

I need to compile the 2 crates to WASM. The compilation itself succeeds (the command used is wasm-pack build --dev --out-dir pkg --target nodejs), but when I try to load the output into the NodeJS REPL, it gives an error:

> var x = require('./pkg');
undefined
> let i = new x.Interpreter()
Thrown:
RuntimeError: unreachable
    at rust_oom (wasm-function[3297]:25)
    at __rg_oom (wasm-function[3590]:5)
    at __rust_alloc_error_handler (wasm-function[3452]:5)
    at alloc::alloc::handle_alloc_error::h67f29d7cba135f71 (wasm-function[3589]:5)
    at <interpreter::eval::memory::Memory as core::default::Default>::default::h87cb4434eda26c4d (wasm-function[2272]:233)
    at interpreter::eval::interpreter::Interpreter::new::h6d6ae62e010770cf (wasm-function[152]:1176)
    at ffi::Interpreter::new::he342356666d86919 (wasm-function[83]:66)
    at interpreter_new (wasm-function[2494]:15)
    at new Interpreter (/home/j/dev/accept/ffi/pkg/ffi.js:165:24)

Here's the strangest part:

  1. This used to work fine. That was before a refactoring of the interpreter crate, but even then the Vec-backed stack with a capacity of 16 MB was present.
  2. When I reduce the size of the stack to < 11 MB, I can instantiate an Interpreter object and call its main method .eval(). However, even then, if I call .eval() on it again or try to create a 2nd Interpreter instance, I still get an OOM error.

Why is this the case? From my perspective, creating a Vec<T> with a capacity of 3.2 GB should still succeed.
Also, what can I do about this? As-is, this code can't go into production; I can't allow spurious OOM errors in there.

When you try this, is total memory usage of the process what you expect?

The process is the NodeJS REPL. It starts out consuming about 7 MB of RAM, which drops off to about 4.8 MB after a while.
When I load the module, create an Interpreter object, and call its method, usage goes up to about 80 MB, which could be about right.
But even then, 80 MB is pretty much nothing compared to the GBs that the NodeJS runtime / WASM module should be able to handle before hitting OOM.

Without access to the code, it's hard for us to debug this. What do the Interpreter and Memory types and their new/default methods look like? Have you stepped through the code in a debugger to verify exactly how much memory it's allocating?

Technically only 2 GB, since Vec can only handle a capacity of up to isize::MAX bytes. (But it sounds like this still shouldn't be a problem for your program.)
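
For illustration, here is a sketch (not from your code; it assumes a toolchain recent enough that Vec::try_reserve_exact is stable):

```rust
fn main() {
    // A Vec's capacity is capped at isize::MAX *bytes*. On wasm32, where
    // isize::MAX is just under 2 GiB, a 3.2 GB request can never succeed,
    // no matter how much memory the host has available.
    let request: usize = 3 * 1024 * 1024 * 1024 + 200 * 1024 * 1024; // ~3.2 GB

    let mut stack: Vec<u8> = Vec::new();
    match stack.try_reserve_exact(request) {
        // Unlike Vec::with_capacity, try_reserve_exact reports failure as an
        // Err instead of panicking ("capacity overflow") or aborting through
        // the alloc error handler (the `unreachable` trap in your trace).
        Ok(()) => println!("reserved {} bytes", stack.capacity()),
        Err(e) => eprintln!("could not reserve {request} bytes: {e}"),
    }
}
```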

Without access to the code, it's hard for us to debug this.

Agreed. But in the meantime I've found the source of the issue, so I'm requesting this be closed.

Basically, the issue was a confusion in the codebase between a size counted in stack frames and one counted in bytes. That makes a rather big difference to the size of individual Interpreter objects.
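
Roughly, the mistake looked like this (a simplified reconstruction; Frame and its 32-byte size are hypothetical stand-ins for the real stack-slot type):

```rust
const STACK_SIZE: usize = 16 * 1024 * 1024; // meant to be 16 MB

#[derive(Clone, Copy)]
struct Frame([u64; 4]); // hypothetical 32-byte frame

fn main() {
    // Vec::with_capacity counts *elements*, not bytes, so this reserves
    // 16 Mi frames * 32 bytes/frame = 512 MB rather than 16 MB.
    let stack: Vec<Frame> = Vec::with_capacity(STACK_SIZE);
    println!(
        "actually reserved {} bytes",
        stack.capacity() * std::mem::size_of::<Frame>()
    );
}
```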

It was the one place in the codebase where newtypes aren't used, and now it's come back to bite us.
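
Something like the following would have caught it at compile time (a hypothetical sketch; Bytes, Frames, and Frame are not the actual names in the codebase):

```rust
#[derive(Clone, Copy, Debug)]
struct Bytes(usize);

#[derive(Clone, Copy, Debug)]
struct Frames(usize);

#[derive(Clone, Copy)]
struct Frame([u64; 4]); // stand-in for the real stack-slot type

impl Bytes {
    /// Convert a byte budget into a whole number of frames.
    fn to_frames(self) -> Frames {
        Frames(self.0 / std::mem::size_of::<Frame>())
    }
}

/// The capacity handed to Vec is now explicitly a frame count; passing a
/// byte count here no longer type-checks.
fn make_stack(frames: Frames) -> Vec<Frame> {
    Vec::with_capacity(frames.0)
}

fn main() {
    let stack = make_stack(Bytes(16 * 1024 * 1024).to_frames());
    println!("stack holds up to {} frames", stack.capacity());
}
```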

