Browser (rust/wasm32) and server (rust) as a single app

  1. I agree there are many cases where borrowing, Rc, the type system, versioning, etc. will prevent this from working. However, for this post, let's focus optimistically on what subset can work rather than on which cases can't.

  2. My understanding of Haskell/Haste is something like this:

2.1 We write a single Haskell codebase. Some functions are in Server monad. Some functions are in Client monad.
2.2 The codebase is compiled twice. The client can call server-side functions (with AJAX GET requests automatically taken care of).
2.3 Everything is type checked by Haskell's type system. Instead of two separate codebases that talk over some API, we think of it as a "single app" (with the main function in the Client monad).

  1. Question: How close can we get to this model in Rust? I would like to have a single app, compile it twice (wasm32, x86), have it generate client + server code, and have all communication (GET/POST over AJAX) be type safe and generated automatically on the fly.

  2. The closest I have found so far is

  3. What suggestions do people have?


  1. The "Hello World" of this is to have a single shared counter with
impl Counter {
  fn get(&self) -> i32 ;
  fn increment(&mut self);
  fn decrement(&mut self);

Then, the client can interact with the counter as if it's calling a local function (one that returns a future). All the AJAX/GET communication is handled by macros.

For this example, the server does NOT need to notify all clients whenever inc/dec happens (for simplicity, we can worry about updates later).


About the concept you're describing:
(Almost) every service/RPC library already works like that. A service trait defines the functionality and provides a future that resolves with the service result. This trait is the common interface between client and server. It's already transport-agnostic, because the returned (boxed) future could come from an RPC resolver or an 'in-memory' resolver. Except for the wire-format decoding, everything (all application logic) is already type-checked statically. (Encoding into the wire format is not relevant here.)

About what you want to achieve:
One could assume decoding is a statically type-checked process, but that would amount to transforming a byte buffer into a struct type without any validity checks at runtime. You cannot avoid dynamic testing, because wire-encoding is a transformation with loss of information. Without testing type constraints, this will simply lead to crashes and/or memory exposure.
So your idea brings nothing new; it just ignores the actual characteristics of the implementation details.


@Bert-Proesmans : I'm not sure if we are discussing the same issue.

I am not trying to present a new idea.

I am wondering if there is something similar to Haskell/Haste.App, but for Rust. [The rest of my post describes how Haste.App works, for those who may not have used it.]


I don't actually understand the issues you have raised -- because surely Haskell/Haste.App runs into the same issues, yet they have somehow managed to develop a solution that works in practice.

You're right. I didn't address your questions exactly, more like partly answering questions I formulated myself after thinking about your post. My apologies.

I checked out Haste and sample projects in the meantime, and I don't think I've seen this before inside the Rust ecosystem. Correct me if I'm wrong, but Haste apps look a lot like metaprogramming: the compilation artifacts are derived from the recipe, which happens to be a Haskell script file.

This is a tough one. Haste is a compiler in its own right, but I'm unsure what artifacts it creates, exactly, and how. At a minimum, you'd need macros or a script parser (build file), plus clever abstractions, to get similar results. Features could gate items to compile only in one situation or the other.

So the haste compilation step introduces a communication interface that is also injected into the client<->server calls.
Deriving that interface is an easy step, but injecting it into the program logic to replace procedure calls sounds like an obstacle. This is where conditional compilation (feature gates) and clever abstractions come in again. Additional design issues arise because of Rust's explicitness about error handling.

I have not yet built anything Rust-related that compiles to WebAssembly or JavaScript. I'm not sure how this would work, other than creating a custom compiler that filters and reinterprets frontend program logic into frontend behavior. There is also some WebAssembly boilerplate to adhere to.

Any framework that serializes structs into byte streams can work as a common layer.

I'm interested as well what people might come up with to enable this way of programming.


I do not know what you mean by "metaprogramming" here. To me, "metaprogramming" means macros -- Lisp- or Rust-style macros. Afaik, that is not what is going on with Haste. Haskell's metaprogramming facility, Template Haskell, tends not to be used as much.

Haste, however, is a separate compiler from GHC. In my understanding of Haste:

  1. We write a Haskell program. Some functions have type signatures in the Server monad. Other functions have type signatures in the Client monad. I think we can do something similar here in Rust -- one crate, with different #[cfg] attributes on different functions.

  2. Haste then compiles the app twice, generating an x86 binary (compiling the shared code + Server monad) and app.js (compiling the shared code + Client monad). This step we can also do in Rust, via "cargo build" and "cargo web start ...."

  3. Now the magic here (I don't know the details of this very well) is that if we are in Haste/JS and we call a function on the Server side, then Haste auto-generates code that serializes the args, makes the AJAX call, returns a future, and deserializes the result.

I think this too can be done in Rust, with some macro black magic.


As one concrete example of how this could simplify Rust full stack development. Ignore security for a moment, and consider the following function:

pub struct FooStruct {}
pub struct BarStruct {}

pub fn my_read(file_name: String, bar: BarStruct) -> FooStruct {
    // ...
}

Ignoring security concerns for a moment, right now, to allow the client to call this function on the server, we have to:

  1. client side: serialize the arguments to my_read
  2. client side: make an ajax call
  3. server side: register some POST handler
  4. server side: deserialize, make call, serialize result
  5. client side: deserialize result

On the other hand, it seems, assuming FooStruct and BarStruct implement Serde's Serialize/Deserialize, that all of this can be auto-generated by some macro.

This macro, when compiled on target=x86, sets up some global HTTP POST handler, which deserializes the args, calls the function, and serializes the result.

This macro, when compiled on target=wasm32, generates the AJAX client calls.

Now, as for type safety: the code generated by the macro may involve lots of unwraps (since any deserialization step is going to return an Option), but the macro is written once -- and all the Rust code written by the user should be type safe.

This is certainly doable;

  • Macros can be used to provide boilerplate serialization/deserialization
  • You can enable/disable syntax items by constraining them to the target build platform

The fact that you derive two different output binaries from the same code remains the big roadblock. I'm not sure how this would be possible without a custom generator frontend that builds explicit code for both targets, which is then provided to the Rust compiler. That would be the only way to keep the recipe source clean and readable.
Generating different main entry points for the two binaries is something I have not looked into, so I'm unsure how tough that would be.

To conclude;
As a lib dev, you'll need to provide a lot of macros for the implementors to use. This could become ugly, so you'll need to think hard about the usability vs. readability/difficulty tradeoffs.
The alternative approach would be mapping Haste 1-to-1 and providing an alternative frontend for the Rust compiler that can transpile recipe source into Rust source for the chosen target, which is then passed to the Rust compiler.

Aside: I feel like I stomped on the subject, which was certainly not my intention. Truth be told I'm having a hard time imagining myself using this approach during development because of the tight coupling between front- and backend.
I hope someone else comes in and approaches the subject differently than I did.