What is a reason for using web-sys, other than the team having no JavaScript skills?

I'm reading about wasm-bindgen & web-sys, which effectively move all Web APIs (as defined in WebIDL) from the JavaScript side to the Rust side.

Since there are many options for implementing Rust+wasm <-> JS interop, I'm trying to find an optimal approach.

I am wondering about:

  1. Are there any reasons for using web-sys other than the dev not being familiar enough with JS?
  2. Are there any penalties in doing so (performance-wise or otherwise)?
  3. Is there anything wrong with keeping things separated: Rust for intensive computation and JS for DOM manipulation/orchestration (preferably communicating through shared memory/an ArrayBuffer of some kind)?
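To make (3) concrete, here is a minimal sketch of what the separated approach could look like on the Rust side, assuming JS keeps the DOM and calls an exported function with a pointer/length into the wasm linear memory (`sum_squares` is a made-up example function, not part of any real API):

```rust
// Made-up example of the "Rust for computation, JS for DOM" split:
// JS writes f64 values into wasm memory (e.g. via a Float64Array view)
// and calls this exported function with a pointer and a length.
#[no_mangle]
pub extern "C" fn sum_squares(ptr: *const f64, len: usize) -> f64 {
    // SAFETY: the caller (the JS glue code) must pass a valid pointer
    // to `len` contiguous f64 values inside the module's memory.
    let data = unsafe { std::slice::from_raw_parts(ptr, len) };
    data.iter().map(|x| x * x).sum()
}
```

On the JS side you would allocate/write a Float64Array view over the wasm memory and call `sum_squares(ptr, len)`; only two scalars cross the boundary per call.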

PS. Obviously, I'm coming in with the assumption that Rust/wasm would be used only when heavy computation makes a difference between the Rust and JS implementations.

PS2. I hope I'm not missing something obvious.

Well, in a sense, yes, you could just use JS if you were familiar enough with it. But interop is usually less safe than standard safe code, and writing safe code makes for less buggy programs and easier debugging of the few problems that do arise. Not to mention that being able to simply import a crate like this:

use a_ported_crate::{some, of, its, functions};

Is a lot nicer compared to

mod a_js_crate {
    extern "C" {
        // every foreign function here is implicitly unsafe to call
        pub fn some(array: *const ()) -> u64;
        pub fn of(_: *const (), num: u16) -> u8; // to be interpreted as a bool
        pub fn its() -> *const AReallyComplicatedStructure;
        pub fn functions() -> *const fn();
    }
}

Really, what I think they're trying to do is minimize unsafe code, debugging nightmares, and boilerplate FFI like the above.
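For contrast, with wasm-bindgen the same kind of import could be declared roughly like this (a sketch, not any real crate's API; the function names are made up, and this only compiles for the wasm32 target with the wasm-bindgen crate):

    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    extern "C" {
        // wasm-bindgen generates the glue that converts slices, numbers
        // and booleans at the boundary, so the declarations stay
        // high-level and the call sites stay safe.
        fn some(array: &[u32]) -> u64;
        fn of(num: u16) -> bool;
    }

The generated bindings take care of the pointer/length plumbing and the u8-as-bool conventions that the raw extern block forces you to handle by hand.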

What I'm trying to understand is why I would want Rust calling JS functions at all (in a perfect world), let alone moving whole DOM (and other API) references to the Rust side.

If we start from the assumption that JS is a perfect fit for in-browser orchestration and DOM/API manipulation, and wasm a perfect fit for CPU-intensive operations / heavy computation, then I'm not sure what the motivation is for this kind of transfer-all-to-the-wasm-side approach.

I am not saying anything is wrong with it, but I am trying to see its benefits (other than letting non-JS devs use the Web APIs).

Also, if it's something that's convenient for Rust-versed devs, I am trying to see whether it comes with any cost: performance, overhead of some kind, etc.

PS.
My assumptions might very well be completely wrong... I have no problem learning that's the case; I'd just like to see some evidence for it.

There are many things that WebAssembly can't do, e.g. accessing (and therefore manipulating) the DOM, so that's not something you can do in pure Rust. You still need JavaScript to perform those kinds of tasks (PCMIIW).

Well, in my understanding, that's exactly what wasm-bindgen combined with web-sys does: it provides access to all the Web APIs that are usually available to JavaScript. They do so by implementing the WebIDL definitions.

I'm pretty sure I got that right, but please correct me if I'm wrong.

We are pleased to announce the first release of the web-sys crate! It provides raw bindings to all the Web’s APIs: everything from DOM manipulation to WebGL to Web Audio to timers to fetch and more!

ref: Announcing the web-sys crate! | Rust and WebAssembly

... by currently calling out to JavaScript. Once WebAssembly gains the ability to call these APIs directly ("the host bindings proposal"), it will use that.

Which leads us to why you would use web-sys. For now, there's not a super compelling reason. But in the future, it will be faster than calling out to JS. And, if you start using it now, you will just transparently get faster once wasm gains that capability.
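For a sense of what using it looks like today, a small web-sys DOM call goes through the generated bindings roughly like this (a sketch; it needs the wasm32 target, a page with a body, and the relevant web-sys cargo features such as Window, Document, Element, HtmlElement, Node enabled):

    use wasm_bindgen::prelude::*;
    use web_sys::window;

    #[wasm_bindgen(start)]
    pub fn run() -> Result<(), JsValue> {
        // Today each of these calls is routed through a JS shim;
        // host bindings would let wasm call the Web API directly,
        // without changing this code.
        let document = window().unwrap().document().unwrap();
        let p = document.create_element("p")?;
        p.set_text_content(Some("Hello from web-sys"));
        document.body().unwrap().append_child(&p)?;
        Ok(())
    }

That transparency is the point: the Rust source stays the same whether the bindings go through JS glue or through direct host bindings.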


@steveklabnik Right, right: currently by calling out to JS, but the goal is to have all WebIDL-defined APIs called directly.

So, at the end of the day, that would be the moment Rust becomes a pretty much complete (and faster) alternative to JS in the browser for anything WebIDL-defined.

Am I correct in saying that?

You can't tell that for sure right now, because it hasn't been implemented yet. You can't compare the performance of something that doesn't exist :wink: In theory yes, but calling JS from WASM has also been improved massively by Firefox lately: Calls between JavaScript and WebAssembly are finally fast 🎉 - Mozilla Hacks - the Web developer blog

@hellow
That's why I didn't say "for sure" but used "would be" phrasing :wink:
So: "would be, if their promises come true".

I am not biased toward any option... I am just trying to correctly understand the intentions and reasons behind the different approaches.

Also, I am aware of the link you've provided and the wasm->JS call performance improvements, but at the moment I'm specifically interested in the use case(s) for wasm-bindgen & web-sys.

I can't resist observing that "knowing anything about JavaScript" and "wanting to ever use it again" are not necessarily the same thing :slight_smile:


OT: does anyone know of any Rust dev experimenting with anyref?

When host bindings become stable, they will be usable in raw code just as much as with wasm-bindgen. The choice will remain whether to stay low-level or use a framework.

The argument that you're not writing unsafe code seems to get offset by the ugliness that #[wasm_bindgen] brings.

I personally think it does not matter much either way (it all depends on the task at hand). Most likely the performance-critical part of your code isn't the part dealing with what web-sys typically exposes.

One of the big open issues is libraries. I think it is still an open problem what the best way forward is, or even whether many libraries that require the extra functionality of the web will be needed/written.