Is there a way to upgrade code on the fly?

hi

I migrated to the Rust/Tokio world from Erlang/BEAM.
In Erlang we can upgrade code without restarting the system.

Is it possible to write a simple component in Rust/Tokio that
takes Rust source code, compiles it with cargo into a dynamic library,
and then replaces the old module with the new one?
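
Something like this minimal sketch is what I have in mind (it assumes the libloading crate and a hypothetical plugin crate built as a cdylib, with no real error handling and nothing to stop live references into the old library):

```rust
use std::process::Command;

use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Build the new version of the module as a dynamic library with cargo.
    //    `plugin` is a hypothetical crate with `crate-type = ["cdylib"]`.
    let status = Command::new("cargo")
        .args(["build", "--release", "-p", "plugin"])
        .status()?;
    assert!(status.success());

    // 2. Load the freshly built library (Linux naming assumed here).
    //    SAFETY: the library must be a trusted build with a matching ABI.
    let lib = unsafe { Library::new("target/release/libplugin.so")? };

    // 3. Look up an `extern "C"` entry point and call it.
    {
        let handle: Symbol<unsafe extern "C" fn(i32) -> i32> =
            unsafe { lib.get(b"handle_request")? };
        println!("plugin returned {}", unsafe { handle(42) });
    }

    // 4. "Replacing" the module means dropping this `Library` and loading a
    //    new one -- every live reference into the old code must be gone first.
    drop(lib);
    Ok(())
}
```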

Hot reloading usually requires a runtime of some sort to dynamically load a module and safely handle ABI changes. You can look into how mun handles hot reloading for some examples of how challenging it can be: Hot Reloading Structs - The Mun Programming Language (mun-lang.org) (But note that mun is fairly incomplete, so its hot reloading support is not representative of what one might expect.)

1 Like

Erlang is the outlier here. Very few languages support hot reloading. Rust is not among them. You'd have to implement the functionality yourself.

8 Likes

Hot code reloading is a feature of the BEAM runtime. You could build your own runtime that supports it, though that would be a hard task.

1 Like

This is one of the advantages of a uni-typed (commonly "dynamically typed") language. When there's only one type that bindings can have, upgrades are much easier, as things like size_of for locals or vector elements become irrelevant.

(There is, of course, cost to this flexibility.)
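
As a toy illustration of that size_of point (my own example): adding a field changes a struct's size and layout, so state laid out by the old version of the code cannot simply be reinterpreted by the new one.

```rust
use std::mem::size_of;

// Version 1 of some module's state.
struct SessionV1 {
    id: u64,
}

// Version 2 adds a field, changing size and layout.
struct SessionV2 {
    id: u64,
    last_seen: u64,
}

fn main() {
    // A `Vec<SessionV1>` built by the old code cannot simply be handed to
    // code compiled against `SessionV2`; it needs an explicit migration.
    println!(
        "v1: {} bytes, v2: {} bytes",
        size_of::<SessionV1>(),
        size_of::<SessionV2>()
    );
}
```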

2 Likes

Amazingly, it has been done in Rust though. The Theseus operating system by Kevin Boos is composed of many small parts, which he calls 'cells', that are hot-swappable. PhD Defense -- Theseus: Rethinking OS Structure and State Management - YouTube

8 Likes

Mun looks great.
Does it support all Rust libraries or not?

If your application is not performance critical, I'd suggest restructuring it as separate microservices that can be deployed independently.

2 Likes

Unfortunately, as far as I'm aware, this requires defining a C FFI layer that the plug-in can use to communicate with the Tokio runtime.

In general, Tokio does not work across plug-ins, because each plug-in ends up with a separate copy of Tokio's global variables, which breaks things.
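
For what it's worth, a rough host-side sketch of such an FFI layer might look like the following. The plugin_init entry point and library path are made up, and it only works at all if host and plug-in are built with exactly the same compiler and Tokio version, which is precisely the fragility described above.

```rust
use libloading::{Library, Symbol};
use tokio::runtime::Handle;

// Hypothetical plug-in entry point: it receives an opaque pointer to the
// host's runtime handle instead of relying on the plug-in's own copy of
// Tokio's global/thread-local state (which is separate and uninitialised).
type PluginInit = unsafe extern "C" fn(host_runtime: *const ());

fn load_plugin(handle: &Handle) -> Result<(), Box<dyn std::error::Error>> {
    // SAFETY: host and plug-in must be built with the exact same compiler
    // and Tokio version for the cast on the other side to be sound.
    let lib = unsafe { Library::new("target/release/libplugin.so")? };
    let init: Symbol<PluginInit> = unsafe { lib.get(b"plugin_init")? };
    unsafe { init(handle as *const Handle as *const ()) };

    // Keep the plug-in's code mapped while any tasks it spawned still run.
    std::mem::forget(lib);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    load_plugin(&Handle::current())
}
```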

2 Likes

Actually, our team is not good at Kubernetes
and the other tools needed to implement that. And yes, our application is very performance critical; this is the reason we migrated from Erlang to Rust+Tokio.

Yes, but we have time to work on it.
Do you have any ideas for implementing hot-code reloading?

After Erlang, we first chose the JVM (Scala+Akka).
Its performance is crazy after warm-up, but
we need to upgrade many times per week,
and the warm-up time is painful.

So we chose Rust+Tokio, and it's amazing.
First we tried running the new version of the server and then
shutting down the old version.
That works, but our server is under very high load,
so if there is a better way, it would make us happy.

What do you think about the Mun programming language?

It is embeddable in Rust and its syntax is Rust-like.

https://docs.mun-lang.org/ch00-00-introduction.html

  • Ahead of time compilation

  • Statically typed

  • First class hot-reloading

But I don't know whether it supports all libraries or not,
and also it is not production ready :expressionless:

Like I said, mun is very incomplete. It is designed as an embeddable scripting language (like Lua), and any syntactic similarities it shares with Rust are just skin-deep. mun doesn't even support arrays right now.

You might be able to get some sort of hot reloading working if you start with a WASM runtime. At least then you will be able to use mostly-normal Rust with it.
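
For example, a bare-bones reload loop using the wasmtime and anyhow crates might look roughly like this; the module path and exported function name are made up, and a real setup would also have to migrate state between instances:

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Each "reload" just recompiles and re-instantiates the latest .wasm
    // file; the old instance is dropped once nothing references it.
    loop {
        let module = Module::from_file(&engine, "plugin.wasm")?;
        let mut store = Store::new(&engine, ());
        let instance = Instance::new(&mut store, &module, &[])?;

        // Hypothetical exported function; the WASM/host boundary forces a
        // stable, explicit interface, which is what makes swapping safe.
        let handle =
            instance.get_typed_func::<i32, i32>(&mut store, "handle_request")?;
        println!("plugin returned {}", handle.call(&mut store, 7)?);

        // A real server would wait for a "file changed" notification here
        // instead of sleeping.
        std::thread::sleep(std::time::Duration::from_secs(5));
    }
}
```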

1 Like

Making hot-code reloading work with Rust is not something you just do, unfortunately. You should strongly consider dropping the requirement and looking for another solution.

3 Likes

You can consider a Linux server as a service which supports hot code reload. If it's a TCP-based server, you can let systemd manage the server socket and pass it between server processes without downtime.

https://www.freedesktop.org/software/systemd/man/systemd.socket.html
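
On the server side, that looks roughly like the sketch below: systemd's socket-activation protocol hands the already-bound listener to the process as file descriptor 3 (SD_LISTEN_FDS_START), so a replacement process can take over without the socket ever closing. This is Unix-only, the checks are simplified, and a crate like listenfd wraps the protocol more robustly.

```rust
use std::os::unix::io::FromRawFd;

use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // systemd sets LISTEN_FDS and passes the sockets starting at fd 3.
    let listen_fds: i32 = std::env::var("LISTEN_FDS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(0);
    assert!(listen_fds >= 1, "expected a listening socket from systemd");

    // SAFETY: per the systemd.socket unit, fd 3 is a listening TCP socket
    // that we own; it must not be used anywhere else in the process.
    let std_listener = unsafe { std::net::TcpListener::from_raw_fd(3) };
    std_listener.set_nonblocking(true)?;
    let listener = TcpListener::from_std(std_listener)?;

    loop {
        let (socket, _peer) = listener.accept().await?;
        // A new binary version is deployed by starting a fresh process;
        // systemd hands it the same fd, so no connection is ever refused.
        tokio::spawn(async move {
            let _ = socket; // handle the connection here
        });
    }
}
```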

7 Likes

Without any knowledge of what your server does, it sounds to me that the first thing to do is address that very high load issue.

Perhaps it is feasible to split the work over two or more servers, each then having enough capacity to "breathe", as it were.

With that in place it may then become possible to shut down and restart one node at a time with the new software version whilst keeping the service available. You also gain a degree of tolerance to server failure.

6 Likes

Yeah, you are right. We have to migrate to microservices at the first opportunity; it is the best choice for managing services at scale.

To be honest, I know little of what the young'ns call "micro services" nowadays.

But we do, for example, use the NATS messaging system to feed work into a number of services on different machines, such that one of those services will grab each packet of work and crunch on it, feeding the results back into NATS for whoever needs to consume them. If things get overloaded we can add more backend servers.
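
Roughly, that fan-out pattern looks like the sketch below (using the async-nats and futures crates; the subject names are made up and the exact client API varies by version):

```rust
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = async_nats::connect("nats://127.0.0.1:4222").await?;

    // All workers subscribe with the same queue group, so NATS delivers
    // each work item to exactly one of them.
    let mut work = client
        .queue_subscribe("work.requests".to_string(), "crunchers".to_string())
        .await?;

    while let Some(msg) = work.next().await {
        // Crunch on the payload, then publish the result for consumers.
        let result = msg.payload; // placeholder for the real processing
        client.publish("work.results".to_string(), result).await?;
    }
    Ok(())
}
```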

We set this kind of thing up manually; I guess we should look into how to automate all that at some point, if it becomes necessary to scale things up even more.

2 Likes

It's the same idea as creating lots of little programs and tying them together using pipes.

Except these programs may be running on different machines, and instead of pipes we add 1-3 orders of magnitude of complexity by using networking and managing the fact that we want to swap out some of those programs on the fly instead of stopping the system, doing an upgrade, then starting again.

It sounds like you do something similar, except instead of using tools like Docker and Terraform to provision resources, configure networking, etc, you just do it by hand.

2 Likes

Sort of. Back in the day of connecting processes up with pipes, it was all happening on the same machine, for the purpose of composing bigger systems out of smaller ones.

Whereas connecting services up across some network is about scaling up the amount of work that can be done by employing more machines, and/or providing fault tolerance.

That 3 orders of magnitude of complexity part grates on my mind, which is why I shy away from the likes of Docker, Kubernetes, whatever. Heck, we are already using three orders of magnitude more software than we actually need just to get things running on one machine, what with the enormity of the Rust compiler, the Linux kernel, the VM we are stuck in, etc., etc.

1 Like