Distributed event-driven networking with Rust


Hi Rusters! (Repo - https://github.com/treescale/treescale)
For the past year I’ve been working on the implementation of a completely distributed event/data distribution system that combines both Queue and PubSub functionality, but without any centralized point in the infrastructure.
I’ve already implemented this principle at PicsArt.com for their real-time image processing system, and I’ve built the https://treescale.com private container registry, which is on average 4-8 times faster than the existing open source Docker registry.

Over the past 2 months I’ve rewritten my Go implementation in Rust + MIO, and got about 2.5 times better results in terms of memory usage and data handling capacity.

But because this is my first Rust implementation for production use, I wanted to ask for some help or tips on optimizing Rust code, and to share some good practices for writing scalable Rust code with a more modular project structure.
I’m abstracting over a single structure using traits, but wanted to hear from the community: “am I doing Rusty code design or not?”

I hope you’ll like the idea and philosophy behind TreeScale :slight_smile:


The tagline is a bit ambiguous:

Event/Data distribution system with 0 configuration and data delivery guarantees

Does it or does it not guarantee delivery of data? This isn’t mentioned anywhere in the README. Also it somewhat implies that it’s not at all configurable.


Guaranteed:

Event/Data distribution system with no configuration required and guaranteed data delivery.

Not guaranteed:

Event/Data distribution system with no configuration required, but no guaranteed data delivery.


This is the right one!
I didn’t think of it that way, thanks a lot for clarifying.


Someone else is working on one of these and just announced it yesterday. Considering I’m just about ready to start implementing something like this for my decentralized network… awesome!!


Just curious, what is your use case for this type of distributed PubSub system?
And why are you building this from scratch instead of using, for example, Redis PubSub? :slight_smile:


Presumably you are talking about Rabble here (full disclosure, I am the author of Rabble). While superficially similar, there seems to be quite a bit of difference between the two systems. As far as I can tell they are intended for different purposes, and both seem very useful to have in the Rust ecosystem. In order to try to make my point without an overabundance of prose or value judgements, I’ll just list the differences between the two that I gathered from a quick reading of the treescale code.

  1. Treescale is connected in a more traditional peer-to-peer manner where nodes are not connected into a full mesh. Events are published via multiple hops using shortest-path routing in the overlay network given by the placement of the nodes. Rabble is a fully connected mesh, where each node maintains a TCP connection to every other node, and messages can be sent directly between any pair of nodes.
  2. Treescale is an event based system. Rabble is a message based system.
  3. While both systems can route messages/events between nodes, each node in a treescale system is a specific addressable endpoint that has API clients. In Rabble, there are multiple lightweight processes and thread-based services that are addressable on each node. This provides an actor-based abstraction similar to Erlang and Akka.
  4. Messages in Rabble are strongly typed. Every actor in the system uses the same global enum as a message type and processes a subset of its variants. Users of Rabble parameterize this type to make it extensible beyond what is used internally by the library. In Treescale, events are untyped: their contents are all of type Vec<u8> but have a corresponding string name. This allows any node or client that understands the message type to decode it as appropriate.
  5. Rabble was intended to allow writing distributed algorithms, particularly consensus algorithms or CRDTs, where the algorithms themselves could run across nodes, or on actors in a single node. For testing the algorithms, the actors can be called as pure functions and scheduled via a test scheduler without needing any networking capability whatsoever. Treescale appears to have a primary goal of being a general pub/sub routing layer for use by clients, not for having clients write distributed algorithms that run on each node in the system.
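The typed-vs-untyped distinction in point 4 can be sketched in a few lines of Rust. This is a hypothetical illustration, not either project’s actual API: the `Msg` enum stands in for Rabble’s global message type, and `Event` stands in for a Treescale-style named, opaque payload.

```rust
// Rabble-style: one global, strongly typed enum; every actor matches on it
// and handles the subset of variants it cares about.
#[derive(Debug)]
enum Msg {
    Ping(u64),
    Data(Vec<u8>),
}

fn handle(msg: Msg) -> &'static str {
    match msg {
        Msg::Ping(_) => "ping",
        Msg::Data(_) => "data",
    }
}

// Treescale-style: an opaque byte payload plus a string name; the routing
// layer never interprets `data`, only consumers that recognize `name` do.
struct Event {
    name: String,
    data: Vec<u8>,
}

fn main() {
    assert_eq!(handle(Msg::Ping(1)), "ping");

    let ev = Event { name: "user.created".into(), data: b"42".to_vec() };
    // Only a consumer that knows "user.created" attempts to decode the bytes.
    let id: u64 = String::from_utf8(ev.data).unwrap().parse().unwrap();
    println!("{} -> {}", ev.name, id);
}
```

The trade-off is the usual one: the enum gives compile-time exhaustiveness checks, while the named-bytes design lets heterogeneous clients evolve independently of the routing layer.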



Hi @tigranbs. Treescale looks really cool and useful. From my very quick reading the code is very clean and readable. I was able to quickly grasp how it works and what is provided in about 5-10 minutes. It’s always nice seeing things like that :slight_smile:

It appears you have the networking and routing down pat, and the approach looks like a good one. It also appears you are about to start making events persistent to provide a type of queue. This is a great idea! Note that if you plan on providing a truly lossless queue, you will have to handle network partitions as well as node failures. You don’t want to end up in a situation like RabbitMQ, where network partitions cause incorrect behavior (split brain). Kafka gets this right as far as I know. If the events are best-effort, the problem becomes much easier. Making a distributed, consistent queue is extremely challenging, so I encourage you to look at existing systems and read the literature to see where you can avoid past mistakes and improve on the state of the art. Having this type of system in Rust would be a great boon for the ecosystem, and I am rooting for you to pull it off!
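The lossless-vs-best-effort distinction can be made concrete with a tiny in-memory sketch. Everything here (`FlakyLink`, `publish_acked`) is a made-up name for illustration, not TreeScale’s or Rabble’s API: the point is that a fire-and-forget publish silently drops events on a transient failure, while an acknowledged publish keeps the event buffered and retries until the peer confirms receipt.

```rust
// Simulated unreliable link: the first `fail_first` sends are lost in transit.
struct FlakyLink {
    fail_first: u32,
    delivered: Vec<String>, // events that actually arrived at the peer
}

impl FlakyLink {
    fn send(&mut self, event: &str) -> bool {
        if self.fail_first > 0 {
            self.fail_first -= 1;
            return false; // no ack: a best-effort sender would lose this event
        }
        self.delivered.push(event.to_string());
        true // ack received
    }
}

// At-least-once delivery: retain the event until it is acknowledged,
// and surface the failure to the caller instead of dropping silently.
fn publish_acked(link: &mut FlakyLink, event: &str, max_retries: u32) -> bool {
    for _ in 0..=max_retries {
        if link.send(event) {
            return true;
        }
    }
    false
}

fn main() {
    let mut link = FlakyLink { fail_first: 2, delivered: Vec::new() };
    assert!(publish_acked(&mut link, "order.created", 3));
    assert_eq!(link.delivered, vec!["order.created".to_string()]);
    println!("delivered: {:?}", link.delivered);
}
```

A real lossless queue also needs the buffered events persisted to disk and replicated, so they survive node crashes and partitions; the ack loop above is only the smallest piece of that puzzle.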

One thing that did concern me, however, was some of the claims in the documentation. As a long-time developer with a background in distributed systems, seeing statements like “0 cost unlimited scaling” immediately set off red flags that the marketing is either misguided or deceitful. Instead of statements like this, I encourage you to directly provide examples of your scalability. For instance, in some scenarios routing may take events across multiple nodes, while in others it may send them directly. This variance, and any persistence to disk, is definitely not zero cost, and any latency stemming from it should be explained to the reader up front. Also, you should define what exactly provides the lossless guarantees mentioned in the docs and how they impact scalability.

Next, you argue against horizontal scalability, but that is exactly the goal you should be striving for. It simply means that as you add more nodes, the system scales almost linearly. You can’t do any better than that. Also, there is no reason that horizontal scalability is tied to a request/response model at all. Any number of messaging patterns can be used, as that is orthogonal to the underlying characteristics of the network and depends more on algorithm and CAP-theorem tradeoffs.

Lastly, I’d encourage you to reword the language talking about Master and Slave. There really is no need to use those terms anymore, and they are triggering and counterproductive to inclusivity in your project. Instead, use words like primary and backup or follower, and when the two are combined you can call them peers.

To summarize, great code so far, but be a bit more humble and descriptive in your documentation.

Best wishes,


@ajs thanks for the awesome feedback and detailed explanation.
It’s true, I need better documentation and, of course, examples.
The use cases for Rabble are different, but I think it would be a great place for me to look at some of the implementation, so I’m not reinventing the wheel :slight_smile:

Many many thanks man!


You’re very welcome @tigranbs.

Rabble is certainly not the be-all-end-all of distributed systems, it’s just one approach that can be taken to write distributed algorithms where user code runs on the nodes in the system. I’d be happy to have you take a look at the code and open issues or ask questions.