The way I see it starting up is something like this:
Then inside of start_game there would be local mutable variables like a camera, and CallbackContainer could be called via JS in order to update that.
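To make the shape concrete, here's a minimal sketch in plain Rust (no wasm-bindgen glue yet). "CallbackContainer" is the placeholder name from the post; Camera and all field names are invented for illustration:

```rust
// Sketch: start_game owns mutable state (a camera) and hands back a
// CallbackContainer whose closures capture and update that state.
use std::cell::RefCell;
use std::rc::Rc;

struct Camera {
    x: f32,
    y: f32,
}

pub struct CallbackContainer {
    // JS would eventually call these to poke at the game's internal state.
    pub move_camera: Box<dyn FnMut(f32, f32)>,
    pub camera_x: Box<dyn Fn() -> f32>,
}

pub fn start_game() -> CallbackContainer {
    // Local mutable state, shared between the closures via Rc<RefCell<...>>.
    let camera = Rc::new(RefCell::new(Camera { x: 0.0, y: 0.0 }));
    let cam_move = camera.clone();
    let cam_read = camera.clone();
    CallbackContainer {
        move_camera: Box::new(move |dx, dy| {
            let mut c = cam_move.borrow_mut();
            c.x += dx;
            c.y += dy;
        }),
        camera_x: Box::new(move || cam_read.borrow().x),
    }
}
```

The open question is how (or whether) something like CallbackContainer can cross the wasm boundary so JS can actually invoke those closures.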
This is probably a bit of a weird path, but anyway - I’m stuck with a couple starter questions:
How do I return a closure from a wasm library (that "CallbackContainer" type is a placeholder)? I looked through some docs and posted this question on StackOverflow first, then realised this community seems much more active
Is VSCode okay for Rust development? It feels super laggy - not sure if that's the plugin, the language server, or something else… I could switch to NeoVim, which I miss a little, but for web stuff I'm more comfortable in VSCode
Could you perhaps define that a little more clearly?
Are you talking about performance?
Or about essentially leveling up the development experience on the web?
I ask because right now, WASM isn’t going to be any faster than JS is, in any browser; I know that from experience.
I’m not even sure that the various engines actually JIT compile the WASM code rather than just interpreting the WASM bytecode.
The development experience on the other hand really is worlds better with Rust than with JS, so in that sense it’s really worthwhile.
As for your questions:
I don’t know, and I’m not even sure that FFI-crossing closures are supported with WASM at the moment. Here is a list of types that can cross the Rust/WASM JS boundary, and under what conditions.
I don't rightly know, as I don't use VSCode. But in my experience, editing in web-based tech is doomed to failure for performance reasons. That includes anything built on Electron, e.g. Atom, Slack (technically not webdev, but the perf point stands), and indeed VSCode.
I know you asked "how", but my answer would be to not do that. Rather than creating your state inside of start_game and returning a closure over those values, Box up a state object (of your choosing) and return a pointer to it via Box::into_raw().
On the JS side, that return value becomes a handle that you pass to every wasm function; each function loads the state from memory using that pointer, does its business, then mem::forgets the state to prevent it from being dropped.
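The pattern described above might look something like this. GameState and the function names are illustrative placeholders, not a real API; in an actual wasm build each exported function would also carry a #[no_mangle] attribute, omitted here:

```rust
// Handle pattern: hand JS a raw pointer as an opaque handle, and take it
// back on every call.
use std::mem;

pub struct GameState {
    tick: u64,
}

// Exported to JS; leaks the Box on purpose and returns the raw pointer.
pub extern "C" fn start_game() -> *mut GameState {
    Box::into_raw(Box::new(GameState { tick: 0 }))
}

// Every subsequent call reconstructs the Box from the handle, works on the
// state, then forgets the Box so the state is NOT dropped at end of scope.
pub extern "C" fn tick(handle: *mut GameState) -> u64 {
    let mut state = unsafe { Box::from_raw(handle) };
    state.tick += 1;
    let t = state.tick;
    mem::forget(state); // keep the allocation alive for the next call
    t
}

// Called once from JS at shutdown; dropping the Box frees the state.
pub extern "C" fn end_game(handle: *mut GameState) {
    unsafe { drop(Box::from_raw(handle)) };
}
```

Only in end_game do you let the Box drop normally, which is what finally frees the memory.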
Well WASM support in general isn’t fully baked yet. The maintainers of the WASM part of the Rust ecosystem likely decided that it was better to push out initial support, so some things that will ultimately work aren’t supported yet. Support for closures crossing the FFI boundary is likely to be one of them, as it’s reasonably advanced but relatively few places actually need it (i.e. it’s relatively high cost, low yield).
And @Bees is right, you’re likely to be better off without the complication of FFI-crossing closures, both in terms of convenience and performance.
Are you really talking about WebAssembly (=WASM)? These benchmarks prove you wrong on the first part, that WASM isn't faster in any browser: , , . About half of these benchmarks are faster in WebAssembly for me right now in Firefox 63.
The following blog posts show that all major browsers JIT-compile WASM: Firefox, Chrome, Safari and Edge. Moreover, all of the articles say that the browsers only JIT and don't interpret. (The V8 article says an interpreter would be about 20 times slower than unoptimized JIT.)
The benchmarks show that performance comparisons depend very heavily on the workload. Also, there is always a performance tax on crossing the JS-WASM boundary, so performance in your case may tank because of that.
While it’s true my code does some WASM/JS boundary crossing every now and again, it’s mostly used to get inputs and outputs across, and to get Date API info, which is necessary because there’s no other source of time on WASM, but it’s also nothing more than an f64.
Other than that the code pretty much runs in pure WASM, so I’d say this code base is reasonably reflective of how WASM should be leveraged in the real world (i.e. cross the JS/WASM boundary the least amount of times possible, and with the least amount of data possible).
Therefore I’d say that while to some extent performance will be based on what your code is doing (when isn’t it?), it’s also true that my experience is perfectly valid data for others until somebody can show me concretely how to do better than this.
Fwiw my goal isn’t necessarily faster number-crunching performance, but rather - being able to avoid GC pauses. Will Rust be better for that?
I'm assuming the Rust/Wasm layer itself will be better - but does the JS shim introduce too much GC to make it not worthwhile? Let's imagine a gamedev scenario where just a few things are being created short-term - like controller and tick updates - plus a few long-term things held on the JS side, like a gl context.
Forgetting to mem::forget wouldn't cause a leak, I don't think; rather, it would cause the Box's destructor to run as it falls out of scope, which would deallocate its contents, and the next time you tried to dereference the pointer you might find the data still there, or you might find garbage.
If you use the wee_alloc crate, there’s an option to cause messy aborts if you try to use memory after it’s freed. I’ve never used that feature, just read about it. It apparently is very expensive at run time, but should weed out the issue you’re worried about.