Getting started with Rust via Wasm

Hi there!

I’m just getting my feet wet with Rust, with the current goal of supercharging web apps via WebAssembly. I still want JavaScript to handle the UI, so my plan is to use Rust as a library/engine rather than as the main program - e.g. a game loop that takes input and returns output for rendering. The catch is that, in order to get input, the library must return some hook to the JS side that can be called, as far as I can tell (i.e. no global mutable variables).

The way I see it starting up is something like this:

pub fn start_game() -> CallbackContainer { ... }

Then inside of start_game there would be local mutable variables like a camera, and CallbackContainer could be called via JS in order to update that.

This is probably a bit of a weird path, but anyway - I’m stuck with a couple starter questions:

  1. How do I return a closure from a wasm library (that “CallbackContainer” type is a placeholder)? I looked at some docs and just posted this question on StackOverflow, then realised this community seems much more active :slight_smile:

  2. Is VSCode okay for Rust development? It feels super laggy - not sure if that’s the plugin or language server or what… I could switch to NeoVim which I miss a little bit - but for doing web stuff I’m a bit more comfortable in VSCode

Could you perhaps define that a little more clearly?
Are you talking about performance?
Or about essentially leveling up the development experience on the web?

I ask because right now, WASM isn’t going to be any faster than JS is, in any browser; I know that from experience.
I’m not even sure that the various engines actually JIT compile the WASM code rather than just interpreting the WASM bytecode.

The development experience on the other hand really is worlds better with Rust than with JS, so in that sense it’s really worthwhile.

As for your questions:

  1. I don’t know, and I’m not even sure that FFI-crossing closures are supported with WASM at the moment.
    Here is a list of types that can cross the Rust/WASM :left_right_arrow: JS boundary, and under what conditions.
  2. I don’t rightly know, as I don’t use VSCode. But in my experience, editing in web-based tech is doomed to failure for performance reasons. That includes anything built on Electron, e.g. Atom, Slack (technically not dev tooling, but the perf point stands), and indeed VSCode.
1 Like

Yes - performance. The use case is all about manipulating pure data like matrices and lists of primitives, with no DOM interaction on the Rust side (and on the JS side, pretty much just WebGL calls).

Though I don’t see the Closure type there? Overall the docs feel like they’re a work-in-progress

I know you asked “how” but my answer would be to not do that. Rather than creating your state inside of start_game and returning a closure over those values, Box up a state object (of your choosing) then return a pointer to it via Box::into_raw().

On the JS side, that return value becomes a handle that you pass to every wasm function, which will load state from memory using that pointer, do its business, then mem::forget the state to prevent it from being dropped.
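To make that concrete, here is a minimal sketch of the handle pattern in plain Rust (the `GameState` type, its fields, and the function names are my own placeholders; in a real wasm build the returned pointer crosses to JS as a number):

```rust
use std::mem;

// Illustrative state object; in a real app this would hold the camera, etc.
pub struct GameState {
    pub ticks: u64,
}

// Called once at startup; the returned pointer is the JS-side "handle".
pub fn start_game() -> *mut GameState {
    Box::into_raw(Box::new(GameState { ticks: 0 }))
}

// Every exported function takes the handle back, reconstructs the Box,
// does its business, then forgets the Box so the state isn't dropped.
pub fn tick(handle: *mut GameState) -> u64 {
    let mut state = unsafe { Box::from_raw(handle) };
    state.ticks += 1;
    let ticks = state.ticks;
    mem::forget(state); // leave the allocation alive for the next call
    ticks
}

// A teardown function lets the Box drop normally, freeing the state.
pub fn end_game(handle: *mut GameState) {
    unsafe { drop(Box::from_raw(handle)) };
}
```

Each call round-trips the same allocation, so state persists between calls without any global variables.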

Here is an example that I’ve written:

1 Like

Well WASM support in general isn’t fully baked yet. The maintainers of the WASM part of the Rust ecosystem likely decided that it was better to push out initial support, so some things that will ultimately work aren’t supported yet. Support for closures crossing the FFI boundary is likely to be one of them, as it’s reasonably advanced but relatively few places actually need it (i.e. it’s relatively high cost, low yield).

And @Bees is right, you’re likely to be better off without the complication of FFI-crossing closures, both in terms of convenience and performance.

1 Like

Ahh - okay cool… I’ll have to play with that - for sure there’s no reason I need to actually call the function from the JS side, just need to make Rust call a function

My knee-jerk next-question is to ask if that value can be a pointer to a function, but I’m getting a little ahead - don’t know yet how boxing and into_raw and all that works yet…

Thanks for the sample code too!

Are you really talking about WebAssembly (= WASM)? These benchmarks prove you wrong on the first part, that WASM isn’t faster in any browser: [1], [2], [3]. About half of these benchmarks are faster in WebAssembly for me right now in Firefox 63.

The following blog post shows that all major browsers do JIT for WASM: Firefox, Chrome, Safari and Edge. Moreover, all of the articles say that the browsers only JIT and never interpret. (The V8 article says that an interpreter would be about 20 times slower than the unoptimized JIT.)

The benchmarks show that performance comparisons depend very heavily on the workload. Also, there is always a performance tax on crossing the JS-WASM boundary, so performance in your case may tank because of that.

1 Like

While it’s true my code does some WASM/JS boundary crossing every now and again, it’s mostly used to get inputs and outputs across, and to get Date API info, which is necessary because there’s no other source of time on WASM, but it’s also nothing more than an f64.
Other than that the code pretty much runs in pure WASM, so I’d say this code base is reasonably reflective of how WASM should be leveraged in the real world (i.e. cross the JS/WASM boundary the least amount of times possible, and with the least amount of data possible).
Therefore I’d say that while to some extent performance will depend on what your code is doing (when doesn’t it?), my experience is perfectly valid data for others until somebody can show me concretely how to do better than this.

Fwiw my goal isn’t necessarily faster number-crunching performance, but rather - being able to avoid GC pauses. Will Rust be better for that?

I’m assuming the Rust/Wasm layer itself will be better - but does the JS shim introduce too much GC to make it not worthwhile? Let’s imagine a gamedev scenario where just a few things are created short-term - like controller and tick updates - plus a few long-term things held on the JS side, like a gl context.

So far I see three options for managing state in a wasm app:

  1. Using Mutexes. Example:

  2. Using thread_local. Example:

  3. Using a raw pointer and telling Box not to free it (as @Bees mentioned).

These are also discussed here with Alex Crichton chiming in, though it’s been a year since that discussion:

Seems like the Box+raw pointer approach is best?
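For reference, option 2 can be sketched in plain Rust like this (the `STATE` name and the `move_camera` function are my own illustrative placeholders, not from any of the linked examples):

```rust
use std::cell::RefCell;

// Global per-thread state; wasm is currently single-threaded,
// so there is only ever one copy of this.
thread_local! {
    static STATE: RefCell<f64> = RefCell::new(0.0);
}

// An exported function can mutate the state without JS holding a handle.
pub fn move_camera(dx: f64) -> f64 {
    STATE.with(|s| {
        let mut x = s.borrow_mut();
        *x += dx;
        *x
    })
}
```

The Mutex variant looks similar but pays for locking on every access, which is why thread_local or the raw-pointer handle tends to be preferred for wasm.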

Oh wow he also gave a great answer on the SO question linked above:

1 Like

@Bees - I tried a few different techniques, and ended up with yours so far… I have some issues with pointers into third-party library types, but otherwise it works great.

Though I am concerned that I had a leak that went undetected… Is there a way to check if I forget to call mem::forget? I’m thinking maybe chrome devtools will show the increasing memory, but not sure…

Forgetting to mem::forget wouldn’t cause a leak, I don’t think; rather, it would cause the Box’s destructor to run as it falls out of scope, which would deallocate its contents - and the next time you tried to dereference that pointer you might find the data still there, or you might find garbage.

If you use the wee_alloc crate, there’s an option to cause messy aborts if you try to use memory after it’s freed. I’ve never used that feature, just read about it. It apparently is very expensive at run time, but should weed out the issue you’re worried about.

1 Like

I’ve also had decent luck just making a basic struct that holds an Rc<SomeStateData> and a couple of Closure instances (mostly for requestAnimationFrame/setInterval callbacks that I don’t want to get dropped), exporting that struct with the #[wasm_bindgen] annotation, and returning it to the JavaScript side from my entry point function. The JS code needs to stash it somewhere so it doesn’t get garbage collected prematurely, but you’d generally be doing that anyway.

One of my current projects uses that pattern. See the start_game function in the Rust source, which returns a copy of this struct so the JS code can work with it.

Or, for a reduced (and untested) example:

use std::cell::RefCell;
use std::rc::Rc;
use wasm_bindgen::prelude::*;

// Placeholder for your actual state type.
#[derive(Default)]
pub struct SomeStateStruct;

#[wasm_bindgen]
pub struct AppState {
    state: Rc<RefCell<SomeStateStruct>>,
    // Other stuff here...
}

#[wasm_bindgen]
impl AppState {
    pub fn new() -> AppState {
        let state = Rc::new(RefCell::new(SomeStateStruct::default()));
        AppState { state }
    }
}

#[wasm_bindgen]
pub fn start_app() -> AppState {
    AppState::new()
}
The overhead is probably a bit higher than using a raw pointer, but I haven’t done any benchmark testing to confirm how much of a difference there is.

1 Like