N00b question - what is the Rust'ish way of programming serious event-driven programs?

Isn't it just Rc<T>? What advantages does Rc<Box<T>> have?

The only advantage is that you can move something into Rc without changing its address, which C++ has to do (just in case, since memcpy-like move semantics aren't guaranteed there). Other than that it's all downsides caused by extra allocation and costly indirections. I've written it this way only to show a closer equivalent to C++'s one.
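For illustration, a minimal sketch (names are mine) showing that the inner value's heap address survives the move into `Rc` — the `Box`'s allocation stays put, only the thin pointer moves:

```rust
use std::rc::Rc;

// Returns true if the String keeps its heap address when the
// Box holding it is moved into an Rc.
fn address_stable() -> bool {
    let boxed: Box<String> = Box::new(String::from("payload"));
    let before = &*boxed as *const String;

    // Moving the Box into Rc moves only the pointer; the heap
    // allocation that contains the String does not move.
    let shared: Rc<Box<String>> = Rc::new(boxed);
    let after = &**shared as *const String;

    before == after
}

fn main() {
    println!("address stable: {}", address_stable());
}
```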

Yes. In general, as I argued in one of my talks (IIRC it was @CPPCON), events (more generally, message-passing architectures) are the most efficient way to implement pretty much any kind of interactive program (the only exception I know of is HFT, and that is a pretty narrow niche). Very very briefly: it is either message-passing or thread sync, and the costs of thread sync on modern CPUs are huuuuuge (if we account for cache invalidation, we're speaking about 10K-1M CPU cycles per thread context switch :frowning: ); this is not to mention that programming at app level with thread sync instantly leads to cognitive overload (overloading the magic 7±2 registers of our brain), which in turn leads to the enormously buggy programs we can all observe on our desktops :frowning: .

As of 2017-2018, this point (with some reservations etc. etc.) became more or less obvious to industry opinion leaders, now we just need to wait until they convey it to everybody-out-there...

If you want C++-like unchecked pointers, Rust supports them too.

Sure, but what's the point to use unsafe Rust over good old C?

If you want shared_ptr<T>, that's Rc<Box<T>>.

No, I don't want shared_ptr<>; shared semantics is an almost-non-existing beast in real-world programs (and this is actually my Big Fat Complaint(tm) about both shared_ptr and Rc).

In practice, most C++ code would use make_shared() and end up with a single allocation holding the shared_ptr's control block and value, which is the same thing as what Rc (or Arc) does in terms of layout, but has the added advantage of using placement-new and skipping stack allocations of temporaries (which current Rust cannot).

Because you can build safe abstractions around unsafe pointers. Vec uses "unsafe" unchecked pointers, but it's safe as a whole.
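As a toy illustration of that containment pattern (this is not Vec's actual implementation), here is a fixed-capacity stack that uses unchecked writes internally but whose public API upholds the invariants, so callers stay safe:

```rust
use std::mem::MaybeUninit;

/// A toy fixed-capacity stack: raw, unchecked accesses inside,
/// but the safe push/pop API makes out-of-bounds access impossible.
struct TinyStack {
    buf: [MaybeUninit<i32>; 8],
    len: usize,
}

impl TinyStack {
    fn new() -> Self {
        TinyStack { buf: [MaybeUninit::uninit(); 8], len: 0 }
    }

    fn push(&mut self, v: i32) -> bool {
        if self.len == self.buf.len() {
            return false; // full: refuse instead of writing out of bounds
        }
        // SAFETY: len < capacity, so the slot is in bounds.
        unsafe { self.buf.get_unchecked_mut(self.len).write(v); }
        self.len += 1;
        true
    }

    fn pop(&mut self) -> Option<i32> {
        if self.len == 0 {
            return None;
        }
        self.len -= 1;
        // SAFETY: every slot below the old len was initialized by push.
        Some(unsafe { (*self.buf.get_unchecked(self.len)).assume_init() })
    }
}

fn main() {
    let mut s = TinyStack::new();
    s.push(1);
    s.push(2);
    println!("{:?} {:?}", s.pop(), s.pop()); // prints: Some(2) Some(1)
}
```

The `unsafe` blocks are confined to two small, auditable spots; no caller can misuse the type to reach uninitialized or out-of-bounds memory.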

BTW, message-passing can be implemented in a borrow-checker friendly way. The Rc<> shenanigans are required only to arbitrarily add and remove event listeners. But if you limit lifetime of both ends of a message queue to a scope, then there's no need for refcounting. If messages own their payload, then it's also a simple case and the borrow checker isn't involved either.
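A minimal sketch of that scoped, ownership-passing style using std::sync::mpsc (the event names are made up for the example):

```rust
use std::sync::mpsc;
use std::thread;

// Each message owns its payload, so nothing borrowed crosses the
// channel and the borrow checker has nothing to object to.
enum Event {
    Clicked(String),
    Quit,
}

// Both ends of the queue live inside this function's scope:
// no Rc, no listener registry, no refcounting.
fn run_event_loop() -> Vec<String> {
    let (tx, rx) = mpsc::channel();

    let producer = thread::spawn(move || {
        tx.send(Event::Clicked("button-1".to_string())).unwrap();
        tx.send(Event::Quit).unwrap();
    });

    let mut handled = Vec::new();
    for event in rx {
        match event {
            Event::Clicked(id) => handled.push(id),
            Event::Quit => break,
        }
    }
    producer.join().unwrap();
    handled
}

fn main() {
    println!("handled: {:?}", run_event_loop());
}
```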

I wanted to go back to these two premises. Why does a POI need to have a back pointer to its PO?

I think in GC'd languages, notably ones with a tracing GC, someone may indeed just stick a reference cycle between the two and let the GC sort it out.

In C or C++, one may just do the same except be a bit more careful in how they manage the values manually.

In Rust, I may rethink the design and ensuing APIs to keep the reachability graph unidirectional - that is, the PO owns the POIs, POIs have no back pointer, and instead I pass the PO (or some subset of it) around to where it's needed.
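Reading PO/POI as, say, an order and its items (the names and fields are purely illustrative), that unidirectional design might look like:

```rust
// The order owns its items; items carry no back pointer.
struct Item {
    sku: String,
    qty: u32,
}

struct Order {
    id: u32,
    items: Vec<Item>,
}

impl Order {
    // Whatever needs the whole order gets &self, instead of
    // each Item holding an Rc back to its owner.
    fn total_qty(&self) -> u32 {
        self.items.iter().map(|i| i.qty).sum()
    }
}

// A function that needs both order and item takes the order and an
// index, rather than reaching the order *through* the item.
fn describe(order: &Order, idx: usize) -> String {
    let item = &order.items[idx];
    format!("order {}: {} x{}", order.id, item.sku, item.qty)
}

fn main() {
    let order = Order {
        id: 7,
        items: vec![
            Item { sku: "bolt".into(), qty: 100 },
            Item { sku: "nut".into(), qty: 250 },
        ],
    };
    println!("{}", describe(&order, 1));
    println!("total: {}", order.total_qty());
}
```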

Messages do own their payload, and sure, they're easy to handle; but the problem is not with the message - it's with the complicated state of the (ad hoc) FSM (see OP, where IIRC you yourself suggested to use Rc<> not for the listeners, but for the good old state present in millions of programs out there :frowning: ).

Have you read about futures, async/await and Pin? (sorry, I don't have introduction-level links right now) This field is under heavy development, but in the future it should become the "way-to-go" for event-driven programming in Rust.

Essentially you create heap-allocated FSMs (see Generator), which are able to keep self-references thanks to Pin, and which are executed inside event-loop.
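For intuition, here is a hand-rolled sketch of the kind of state machine that async fns/generators produce under the hood, stepped by an event loop. The states and inputs are invented for the example; and since this sketch has no self-references, a plain Box suffices and Pin isn't needed:

```rust
// Each variant is one "suspension point" of the machine.
enum Fsm {
    AwaitingLogin,
    AwaitingPayment { user: String },
    Done { receipt: String },
}

enum Input {
    Login(String),
    Pay(u32),
}

impl Fsm {
    // Consume the current state plus one event; produce the next state.
    fn step(self, input: Input) -> Fsm {
        match (self, input) {
            (Fsm::AwaitingLogin, Input::Login(user)) => {
                Fsm::AwaitingPayment { user }
            }
            (Fsm::AwaitingPayment { user }, Input::Pay(amount)) => {
                Fsm::Done { receipt: format!("{} paid {}", user, amount) }
            }
            // Unexpected input: stay in the current state.
            (state, _) => state,
        }
    }
}

// The "event loop": feed a sequence of inputs to a heap-allocated FSM.
fn run_to_receipt(inputs: Vec<Input>) -> Option<String> {
    let mut fsm = Box::new(Fsm::AwaitingLogin);
    for input in inputs {
        let current = std::mem::replace(&mut *fsm, Fsm::AwaitingLogin);
        *fsm = current.step(input);
    }
    match *fsm {
        Fsm::Done { receipt } => Some(receipt),
        _ => None,
    }
}

fn main() {
    let inputs = vec![Input::Login("alice".to_string()), Input::Pay(42)];
    println!("{:?}", run_to_receipt(inputs));
}
```

Real generators can borrow across suspension points (i.e. be self-referential), which is exactly the case Pin exists to make sound.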

Yes, this does qualify as a reason - but only as long as I can say that "ALL the app-level code can be safe"; at the point when we have to use unsafe code at app-level - the whole safety protection falls apart :frowning: . That's exactly why I'm trying to find a Good Way(tm) to handle FSM States (as the first step - with backpointers...)

I don't know much about it, but as somebody here has noted - pinning does require unsafe code to express FSM state, and if this stands - it means unsafe code at app level -> which is a Very Bad Thing(tm) maintenance-wise :frowning: .

Why does a POI need to have a back pointer to its PO?

Fast traversal. Moreover, such pointers represent an extremely common case in data structures (should be even in Knuth Vol. II).

EDIT:

In Rust, I may rethink the design...

Good (and efficient) design doesn't depend on the programming language. That's why books such as Knuth are pretty much eternal (well, with some reservations :wink: ).

No, unsafe bits will be contained either by libraries or language. User code will deal with safe generators and async functions.

I'm not disputing you need to reach the PO when doing something with a POI, but I'm wondering why that needs to be done through a back pointer? One must already have access to the PO in order to get its POIs, so why not keep that reference around?

But I think we all know this isn't true in practice. There are certainly language-agnostic components to an algorithm/design, but when trying to squeeze every last ounce of performance out, the language definitely plays a role.

And if one takes this mindset to Rust, they'll have a bad time :slight_smile:

No, it doesn't.

First, this absolutism of "if there's any unsafe anywhere, we can as well go back to C" doesn't make sense. It's still better to have 1% of a program that needs careful programming than 100% of a program that needs careful programming. Every incremental containment of unsafety is an improvement, which is why even C has some type system, and C++ keeps adding incremental improvements.

You also seem to imply that if you need something "unsafe" in the higher-level app logic, then it can't be safely contained. I'd say that's generally not a problem. Exactly how you do it depends on the nature of the problem and the specific application architecture, but you can find examples like Futures that can be used pervasively throughout an entire application and still maintain safe invariants. Rayon takes advantage of the borrow checker and Rust's type system to add very high-level safe concurrency that can be safely combined with other Rust code and libraries unaware of Rayon.

And Rust sometimes really needs you to approach problems differently. If all you have is a Hammer++ and Nail-pointers, you won't be able to use Rust's screws the same way. "I've tried hammering screws, but they don't work!"

There are OOP patterns that don't translate to Rust well, and the ones based on back pointers are among them. It's easy to think that a language which can't handle a doubly linked list (!!!) is useless, but Rust just does some things differently. If you insist on a C++-like solution, you may end up with C++-like safety guarantees. If you flip the problem to fit Rust's way of modelling the world, it may get all of Rust's benefits.
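One standard borrow-checker-friendly workaround for the doubly-linked-list case (a common technique, not something proposed in this thread) is to link nodes by indices into a Vec instead of by pointers — no Rc, no unsafe, and backward traversal still works:

```rust
// Nodes live in an arena (the Vec); prev/next are indices, not pointers.
struct Node {
    value: i32,
    prev: Option<usize>,
    next: Option<usize>,
}

struct List {
    nodes: Vec<Node>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl List {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None, tail: None }
    }

    fn push_back(&mut self, value: i32) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        match self.tail {
            Some(t) => self.nodes[t].next = Some(idx),
            None => self.head = Some(idx),
        }
        self.tail = Some(idx);
        idx
    }

    // "Fast traversal" backwards works exactly as with back pointers,
    // just chasing indices instead of addresses.
    fn rev_values(&self) -> Vec<i32> {
        let mut out = Vec::new();
        let mut cur = self.tail;
        while let Some(i) = cur {
            out.push(self.nodes[i].value);
            cur = self.nodes[i].prev;
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    for v in [1, 2, 3] {
        list.push_back(v);
    }
    println!("{:?}", list.rev_values()); // prints: [3, 2, 1]
}
```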

Guys, can you sort it out between you? :wink:

@matklad was talking about the current situation, and I was talking about how the feature will look in the future. (see the PR linked in my previous message)

First, this absolutism of "if there's any unsafe anywhere, we can as well go back to C" doesn't make sense.

What I am saying is different; it is "if there's any 'unsafe' in app-level code, there are no safety guarantees at all". This is just the way big teams work: at the very moment I allow 'unsafe' in the code-written-by-the-team, it will proliferate across all the millions of LoC, with no realistic way of stopping it. From my experience with serious million-LoC projects, the only way to ensure that something is not used is to prohibit it 100%. What I can do is say "hey, this is infrastructure-level code which doesn't need to be changed, and which I can get a dozen ppl to review once it is written" - and lock it into a special file/folder/... which never changes without 10 ppl reviewing it; but beyond this, my capabilities of enforcing things are extremely limited. So it is either "we DO allow 'unsafe' in our app-level code" or "we DON'T"; there is no grey area.

If you flip the problem to fit Rust's way of modelling the world, it may get all of Rust's benefits.

That's the biggest problem I see with Rust - it forces developers to use Rust-specific approaches (at the same time as all the other mainstream languages converge - this includes such different beasts as C++ and Java, which enable very similar programming patterns - unlike Rust, that is...). However, I contend that the optimal solution to a problem is (most of the time) the same (more or less described in Knuth if you will) - and therefore, any language which forces you to deviate from it is, well, suboptimal... (NB: of course, this forum is not a good place to say such things :wink: )