Ever heard of Gleam?

I stumbled upon a language called Gleam.

It's a functional programming language from the devs of Elm (as far as I know), but general-purpose. It compiles to Erlang and, with the latest version, also to JavaScript.

The compiler is written in Rust, the syntax looks like a mix of Elm and Rust, and the compiler produces the same kind of friendly error messages.

Maybe it's worth a look.

It is too early to peek. I will wait for some time.

Heya! I'm the creator of Gleam, thanks for checking it out.

I'm not one of the devs of Elm, but I am a user of both Elm and Rust. I'd be very happy to answer any questions you might have

Cheers,
Louis

Then that was misinformation. Anyway, an interesting language. I always wanted a language like Elm, but general-purpose.

@lpil: If you were creating Gleam from scratch today, would you still target the BEAM, or would you target wasm (like wasmcloud / lunatic)?

I would like to also target wasm. In future, runtimes like lunatic and wasmcloud could be good options.

The BEAM was selected as it is the most mature runtime with these characteristics, and it has a large and active ecosystem behind it. It will be a very long time before the others can catch up here. Much like Clojure and other similar languages of the past, one of the big benefits of Gleam over other similar languages is that it can draw from an established ecosystem with ease. There are lots of mature and efficient web servers, database clients, and so on.

I'm curious -- given the sheer number of C/C++/Rust libraries that compile to wasm, does the BEAM have any advantage over wasm besides OTP?

Unfortunately, being able to compile to web assembly doesn’t mean that these projects can be leveraged as if they were written in the same language.

The experience would be closer to using C code from a scripting language such as JavaScript, where bindings need to be written that deal with the different languages’ memory layouts, memory management, etc. This can already be done with the BEAM (like with most runtimes), so there’s no particular advantage to web assembly here.

There is some ongoing work to make interop between web-assembly-targeting languages easier, but it will be some time before that is ready and before there is a robust ecosystem built upon it.

Also, there is an Erlang to web assembly compiler and an Erlang VM that compiles to web assembly, so there are other options out there for Erlang and wasm.

Lastly, many of the desirable properties of the Erlang virtual machine are made possible by its unusual use of memory and garbage collection. Typically native code is avoided because it undermines these properties with its different memory and failure handling models. If we were to build an Erlang-like environment within web assembly we would still need to adhere to these properties, and as such we wouldn’t be able to make use of C or C++ wasm code without having the same problems.

I hope in future wasm can offer more, but it takes a long time to catch up with a 35-year-old ecosystem.

Please correct me if I am wrong. In my limited understanding, the main selling point of the BEAM is the per-"erlang-process" heap. Each erlang "process" has its own heap, and can crash independently without taking down the rest of the system.

I believe what the teams over at wasmcloud / lunatic have shown is that wasm is sufficiently scalable that one can run "one wasm VM per lightweight process / green thread".

Each wasmVM can also crash independently, without taking down other wasm VM.

So then, at a high level, the runtime debate looks like:

Erlang: per-'process' heap
lunatic/wasmcloud: per-'process' wasm VM

I was looking at this problem earlier, and besides OTP, I was having a hard time convincing myself why BEAM > putting each lightweight process in its own wasm VM.

If wasmcloud implemented the same memory model, process efficiency, and garbage collection as other Erlang implementations, that would be great! Once finished, and once an ecosystem has built up around it, it would make a great alternative to the BEAM.

I don’t believe the BEAM is a perfect implementation of these ideas and I hope in future something else will do even better in this area. Personally I’m excited about Lumen most of all.

It is great to see a future with more options available.

RE each process being a VM, one disadvantage is that the cost is higher. Many Erlang programs would need to be rewritten to compensate for this lower performance and higher memory usage. Another thing I’m unsure about: from outside the wasm VM, is there a way to know when the code inside is yielding to another process? Without this information, building a cooperative scheduler is harder, and that is one of the key features of Erlang that enables its soft real-time characteristics.

Looking at the wasmcloud website I’m not sure it has any distributed computation features? I also couldn’t spot links and monitors, which are required to implement Erlang’s incremental state shedding via supervision.

I think we have different opinions on the cost of a wasm VM.

In my mind, the cost of a WasmVM is minimal: it's only (1) the amount of memory we allocate for it and (2) the cost of storing the wasm32 -> x86_64 JIT-ted code.

Two things: (1) I believe it is possible to write wasm32 blas routines that rival native blas routines, and (2) although Erlang can use NIFs, I highly doubt pure Erlang blas routines can rival native blas routines. Therefore, for at least certain classes of tasks, I would argue that wasm32 has far higher performance than the BEAM.

I believe the way BEAM handles this is: after X000 'reductions', the BEAM process is suspended, and the next BEAM process runs.

In wasmtime, there is something similar: Caller in wasmtime - Rust. I was asking about this exact issue in: Interrupt wasmer/wasmtime.

I do not know enough about lunatic / wasmcloud to state their limitations. I will say this: I personally do not know of any "wasm32 VM per process" runtime that has anything remotely close to OTP for distributed computing.

====

I apologize if it appears I am pedantically nitpicking on the BEAM vs wasm32 VM issue. What drives my interest is that I am very curious if we can have a system where:

  1. lightweight 'processes' can crash independently, like in Erlang
  2. we can do OTP style distributed fault tolerant computation
  3. lightweight 'processes' can be as fast as native C
  4. we can program in a language with safety guarantees of Rust

Erlang itself "fails" 3 in that to get native-C-like performance, we need NIFs. Erlang/Elixir's lack of a type system also does not satisfy 4.

Gleam seems to achieve 1, 2, 4. (Though I believe at the cost of hot code reloading).

Rust compiled to wasm32-unknown-unknown, with an "each 'process' has its own wasm VM" runtime, I believe achieves 1, 3, 4.

If you agree with the above (and you might disagree, as I get the impression you believe wasm32 VMs have much higher overhead), one interesting question, assuming 1, 2, 3, 4 are all desirable, is whether it is easier to add (3) to Gleam/BEAM or easier to add (2) to Rust/wasm32/{lunatic, wasmcloud}.

I personally don't know the answer. I suspect the fact that this has not been done implies there is some technical difficulty I am not aware of.

Gleam doesn't sacrifice hot code loading, but it also doesn't offer anything new there over Erlang. Everything you can do with Erlang for that can be done in Gleam with the same level of safety. In future we could offer more by typing upgrades, but I don't see it as very high value, as it is a niche feature that is typically only useful in very constrained embedded environments.

I don't think the cost is overly high (last I saw, the lucet runtime was very efficient and fast to start), but it was still higher than the BEAM's. I'm about a year or two out of date with this, so things have likely changed now.

Gleam could never compete with Rust in terms of performance; it doesn't offer the programmer the same level of memory control. A native or wasm backend with full-program compilation could offer performance similar to languages such as Go or OCaml, which is something I wish to explore in future.

If we had this native/wasm backend we could leverage wasmcloud to provide an actor model, but that would be a small amount of time saved compared to the rest of building the backend. And a custom natively compiled multi-threaded runtime would likely outperform a custom single-threaded non-concurrent runtime that is compiled to wasm and then run inside the wasmcloud runtime to get concurrency and multi-threading.

In either case we still wouldn't have easy access to a large ecosystem to draw on. We could use C libraries, but this would require wrapping them in a similar way to how one has to wrap a C library to use it from Python, JavaScript, or similar.

I am very interested in a native/wasm backend for Gleam, but I don't think the cost-benefit ratio is good enough yet. It would be years of work to get something that could rival the BEAM in terms of functionality and performance, and further years to get the ecosystem to a similar level.

3 is not possible: there is a cost to the memory semantics of languages like Gleam that doesn't exist for Rust. If you look at only these 4 factors there's not really any reason to use any language besides C, C++, or Rust, but there are other things to consider.

Gleam aims to be a much easier language to learn and use than Rust, one that can be used productively by people across a wider range of skill levels. We sacrifice some performance for this, but that is OK. Rust is already there for people who wish to go the other route.

I think I misunderstood the motivations of Gleam. I (incorrectly) thought we wanted the static guarantees of Rust with the dynamic benefits of Erlang (per-process heap/VM, processes crashing independently without taking the entire application down, spawn/link, OTP, etc ...) Thanks for the insightful discussion.

Yes, those are two of the main goals of Gleam, you were correct. The general goals of the project can be found on the home page: https://gleam.run/
