Suppose our Rust crate has a function
foo(i32) -> i32.
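To make the setup concrete, here is a minimal sketch of such a crate function. The body is illustrative (the question only gives the signature); `#[no_mangle]` and `extern "C"` are one common way to keep the symbol name and ABI stable so other code can bind to it as an import:

```rust
// Hypothetical crate function we want generated wasm32 code to call.
// #[no_mangle] keeps the symbol name `foo`; extern "C" fixes the ABI.
#[no_mangle]
pub extern "C" fn foo(x: i32) -> i32 {
    x.wrapping_mul(2) // body is made up for illustration
}

fn main() {
    println!("{}", foo(21)); // 42
}
```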
On x86_64, we can easily write wasm32 code that calls
foo (using it as an 'import'). After the wasm32 is JIT-compiled, I believe the 'overhead' of this "dynamically generated wasm32 calling a base Rust function" path is just the overhead of an ordinary function call.
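As a rough model of that native case: once a JIT engine has bound the import to foo's address, the generated code performs a plain indirect call. A minimal native analogue of that binding, using a function pointer (the body of foo is again made up):

```rust
// Stand-in for the Rust function exposed as a wasm import.
#[no_mangle]
pub extern "C" fn foo(x: i32) -> i32 {
    x + 1
}

fn main() {
    // A JIT engine would patch the import slot with foo's address; the
    // generated code then does an ordinary indirect call through it.
    let import: extern "C" fn(i32) -> i32 = foo;
    println!("{}", import(41)); // 42
}
```

The point is only that, natively, "call an import" lowers to one call instruction plus an address load, with no serialization of arguments.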
On Rust/wasm32, we can generate wasm32 code on the fly and then load it in the browser (Chrome). However, I believe we can only provide JS functions as 'imports'? As a result, is the 'overhead' of calling a base Rust function something like:
- the newly generated wasm32 tries to call foo
- the arguments are packaged into JS values
- the wasm32 code stops and the JS glue function runs in the JS engine
- the JS glue sees the call is really to Rust/wasm32/foo and forwards the data
- the (precompiled) Rust/wasm32 module is invoked
- Rust/wasm32/foo runs
- the above steps are reversed to send the result back to the dynamically generated wasm32
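The trampoline those steps describe can be sketched in ordinary Rust. Everything here is a hypothetical model, not a real browser API: `JsValue` stands in for a JS engine value, and `js_trampoline` plays the role of the JS glue between the two wasm instances:

```rust
// Hypothetical stand-in for a JS engine value (real engines use
// NaN-boxing and similar tricks; this is only a model).
#[derive(Debug)]
enum JsValue {
    Int(i32),
}

// The precompiled Rust/wasm32 function ("Rust/wasm32/foo runs").
fn foo(x: i32) -> i32 {
    x * 2
}

// The JS glue: unbox the argument, call across, re-box the result.
fn js_trampoline(arg: JsValue) -> JsValue {
    let JsValue::Int(x) = arg; // deserialize out of the JS value
    JsValue::Int(foo(x))       // call foo, serialize the result back
}

fn main() {
    // The generated wasm32's call to its import goes through the glue:
    let out = js_trampoline(JsValue::Int(21));
    println!("{:?}", out); // Int(42)
}
```

Each crossing pays for the boxing/unboxing plus the transfer of control into and out of the JS function, on top of the call itself.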
Basically, instead of "just a function call (as on x86_64)", is the overhead now something like: serialize to JS, swap the wasm32 runtime for the JS runtime, invoke the JS function, swap the JS runtime for the other wasm32 runtime, and so on? [This seems significantly heavier weight.]
Practically, why does this matter? Rust, among other things, has a great HashMap, something that is not trivial to re-implement in wasm32. On wasm32/x86_64, where the overhead is just a function call, I don't mind just 'stealing' Rust's HashMap. However, on wasm32/browser, if function calls have such high overhead, invoking Rust's HashMap may not make sense.
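For context, "stealing" the HashMap would mean exposing its operations through C-ABI shims that the generated wasm32 imports, so every insert and lookup crosses the boundary. A sketch of what those shims might look like (the names and the raw-pointer handle scheme are made up for illustration, not an existing API):

```rust
use std::collections::HashMap;

// Hypothetical shims a generated wasm32 module could import.
// An opaque pointer serves as the map handle across the boundary.
#[no_mangle]
pub extern "C" fn map_new() -> *mut HashMap<i32, i32> {
    Box::into_raw(Box::new(HashMap::new()))
}

#[no_mangle]
pub extern "C" fn map_insert(m: *mut HashMap<i32, i32>, k: i32, v: i32) {
    unsafe { (*m).insert(k, v); }
}

#[no_mangle]
pub extern "C" fn map_get(m: *mut HashMap<i32, i32>, k: i32) -> i32 {
    // -1 as a missing-key sentinel, purely for this sketch.
    unsafe { *(*m).get(&k).unwrap_or(&-1) }
}

fn main() {
    let m = map_new();
    map_insert(m, 7, 42);
    println!("{}", map_get(m, 7)); // 42
    println!("{}", map_get(m, 8)); // -1
    unsafe { drop(Box::from_raw(m)); } // free the handle
}
```

If every such call pays the JS-trampoline cost rather than a plain call, a per-lookup crossing could easily dominate the hash lookup itself, which is the crux of the question.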