I suppose they're referring to the lack of `unsafe` in Rust. They're also referring to it pretty explicitly:
- Rust is somewhere between. It's safe by default, except:
- It has `unsafe` code, which allows unsafe operations.
- It allows bugs in `unsafe` code (and FFI) to trigger memory unsafety in safe Rust code.
- A Rust program will bring a lot of unsafe code in via dependencies.
I don't think this is a fair comparison: somewhere, somebody will have to do something inherently memory-unsafe. In Rust it's std and the like, with `unsafe` blocks.
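To illustrate the pattern std relies on, here's a minimal sketch (the function name `first_or_zero` is my own, purely illustrative): the unsafe operation is encapsulated behind a safe API that establishes its invariant first, so callers in safe code cannot trigger UB.

```rust
// A minimal sketch of the pattern std uses: unsafe internals behind a
// safe API whose invariant is checked before the unsafe block runs.
fn first_or_zero(v: &[i32]) -> i32 {
    if v.is_empty() {
        0
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        unsafe { *v.get_unchecked(0) }
    }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8]), 7);
    assert_eq!(first_or_zero(&[]), 0);
    println!("ok");
}
```

The point is that the `unsafe` is auditable in one place; safe callers get the same guarantees as if no unsafe code existed.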
I take their claims with a grain of salt; it's notoriously hard to get all three (ease of use, safety, and performance), so hard that I think nobody has gotten it fully right so far (Rust is very good at safety and performance, and I think it's also good enough on ease of use, but that is not always the right trade-off). I don't believe it's completely possible: being maximally easy to use/learn, safe, and performant is a dream. But maybe they did find a good-enough balance, better than anyone else. Until I've actually used the language, I can't know.
I find this claim somewhat strange:
This is cheaper because programs dereference less than they alias and dealias: our sample program had 4.7 million counter adjustments, but only 1.3 million liveness checks.
It doesn't line up with how I see `Rc` used in Rust: typically I assume `Arc`/`Rc` are cloned a couple of times and then dereferenced (possibly in parallel) many more times. I don't believe the claim that making cloning cheap by making deref expensive is a net performance win.
Where was this claim found?
Anyway, this may be true if the language doesn't have references, so that any passing to a function etc. requires a refcount bump; but that doesn't mean you deref less, only that you adjust the refcount more. Making refcounts faster at the expense of derefs may make such a program faster compared to the other choice within that language, but not compared to a language like Rust.
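Here is a sketch of the usage pattern I mean (the function name `churn` and the counts are illustrative, not from any benchmark): a handful of clones, then orders of magnitude more dereferences, which is why cheapening the clone at the deref's expense looks like the wrong trade to me.

```rust
use std::rc::Rc;

// Illustrative pattern: a few refcount bumps, then many dereferences
// of the shared data with no further refcount traffic.
fn churn(data: &Rc<Vec<u64>>) -> u64 {
    // 4 clones => 4 refcount adjustments total.
    let handles: Vec<Rc<Vec<u64>>> = (0..4).map(|_| Rc::clone(data)).collect();
    let mut total = 0;
    for h in &handles {
        for _ in 0..1_000 {
            // Thousands of deref-heavy passes; Rust's `&` borrows mean
            // none of these touch the refcount.
            total += h.iter().sum::<u64>();
        }
    }
    total
}

fn main() {
    let data = Rc::new(vec![1u64, 2, 3, 4]);
    assert_eq!(churn(&data), 40_000); // 4 handles * 1000 passes * sum 10
    println!("ok");
}
```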
I spent some time reading around their site to get more of a sense of the basis of what they are claiming.
As I read through the site, I saw their notes on structured concurrency in Rust (Seamless, Fearless, and Structured Concurrency), and the author is wrong there.
All immutable data in Rust is `Sync`, without exception. But `&` is not necessarily immutable, because we have `UnsafeCell`; without it, we could get rid of `Sync` entirely. So Rust allows message-passing style, and further allows mutability when it is not dangerous. I see they're talking about mutexes etc. in part 2, but I haven't found that article (maybe it hasn't been written/published yet). So they'll either have to deal with the same problem, or sacrifice safety, or just special-case all synchronization primitives, meaning you can't write your own.
Moreover, Rust's abstractions are powerful enough to offer what Vale provides as a language construct (`parallel`) as a library construct (`rayon`). Most hard-to-grasp things with Rust threads, in my experience, are actually around lifetimes, something that isn't inherent and will be at least partially solved by scoped threads.
Some other notes:
A function that accepts a read-only region will actually be generated twice: once with and once without the assumption that the read-only region is immutable and can therefore take advantage of the immutability optimizations. (Seamless, Fearless, and Structured Concurrency)
Seriously? That looks terrible. Strict aliasing guarantees enable richer optimizations, that's true, but duplicating the code for each option is not going to be worth it. Not at all.
Edit: This is exponential growth!
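To spell out the blow-up: with n read-only region parameters, "generate the function twice per region" means 2^n specialized copies. A hypothetical Rust analogue using const generics (the name `kernel` and the bool parameters are mine, just to force monomorphization) makes the count visible:

```rust
// Hypothetical sketch of "generate the function twice per read-only
// region": const generics force one monomorphized copy per assumption
// combination, so n boolean region assumptions yield 2^n copies.
fn kernel<const A_IMMUTABLE: bool, const B_IMMUTABLE: bool>(a: &[i32], b: &[i32]) -> i32 {
    // Each instantiation could in principle be optimized differently
    // depending on which region is assumed immutable.
    a.iter().sum::<i32>() + b.iter().sum::<i32>()
}

fn main() {
    let (a, b) = (vec![1, 2], vec![3, 4]);
    // Two region parameters already give four distinct copies in the binary:
    let s = kernel::<false, false>(&a, &b)
        + kernel::<false, true>(&a, &b)
        + kernel::<true, false>(&a, &b)
        + kernel::<true, true>(&a, &b);
    assert_eq!(s, 40);
    println!("ok");
}
```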
About Safe Externs: so you just hand C the memory-unsafe work, if I understood correctly? That is much worse than isolating `unsafe` blocks. And no, UB is not local, and you cannot contain it without sandboxing even if you try really hard. And sandboxing will effectively rule out extending perf-critical primitives (too much overhead), meaning you'll either have to extend the runtime, eventually growing a very big runtime, or you'll just stay with the non-maximally-efficient thing we have.
I'm highly skeptical of the claims made by the author, since most features of the language are not even completed yet, but the goals and some approaches are quite interesting. It seems they have done some work to reduce the overhead of RC with the help of the borrow checker; I'm just wondering if this can be transferred to Rust.
Higher RAII: OK, so they have linear types (and a newly invented name for them, because why not). Nice, definitely. I wish Rust had something like that. It's not clear to me how they handle cycles, but it's still nice. However, its absence is not a deal breaker, and maybe, maybe some day Rust will have them too.
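For contrast, Rust's types are affine (values may always be silently dropped), so the closest emulation of a linear "must be consumed" value is a runtime-checked guard; the type name `MustCommit` and its methods are hypothetical, a sketch of the well-known drop-bomb pattern, not anything from Vale or std:

```rust
// Affine-Rust emulation of a linear type: a guard whose Drop panics
// unless an explicit consuming method has defused it first.
struct MustCommit {
    defused: bool,
}

impl MustCommit {
    fn new() -> Self {
        MustCommit { defused: false }
    }
    // The only sanctioned way to dispose of the value.
    fn commit(mut self) {
        self.defused = true;
        // the normal drop now runs with defused == true
    }
}

impl Drop for MustCommit {
    fn drop(&mut self) {
        // A linear type system would make forgetting commit() a compile
        // error; in Rust we can only detect it at runtime.
        assert!(self.defused, "MustCommit dropped without commit()");
    }
}

fn main() {
    let t = MustCommit::new();
    t.commit(); // dropping `t` without this line would panic at runtime
    println!("ok");
}
```

What "Higher RAII" buys, as I understand it, is moving that panic to compile time.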
Use static analysis to reduce the number of liveness checks as much as possible. For example:
- Automatically track this information through intermediate stores/loads from struct members, where possible. (Vale's Hybrid-Generational Memory)
How exactly? You cannot do that at all without aliasing information (unless they plan to monomorphize on immutability, but like I said this is infeasible and will lead to a code-bloat explosion, exponential in the number of references).
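For readers who haven't seen what a "liveness check" is here, this is my minimal sketch of a generational reference (all names mine; this is not Vale's actual implementation): each slot carries a generation counter, and a dereference compares the reference's remembered generation against the slot's current one.

```rust
// Minimal generational-reference sketch: deref performs a "liveness
// check" by comparing generations; freeing bumps the slot's generation,
// invalidating every outstanding reference at once.
struct Slot {
    generation: u32,
    value: Option<i32>,
}

struct GenRef {
    index: usize,
    generation: u32,
}

struct Heap {
    slots: Vec<Slot>,
}

impl Heap {
    fn alloc(&mut self, value: i32) -> GenRef {
        self.slots.push(Slot { generation: 0, value: Some(value) });
        GenRef { index: self.slots.len() - 1, generation: 0 }
    }
    fn free(&mut self, r: &GenRef) {
        let slot = &mut self.slots[r.index];
        slot.generation += 1; // stale references now fail the check
        slot.value = None;
    }
    fn deref(&self, r: &GenRef) -> Option<i32> {
        let slot = &self.slots[r.index];
        // The liveness check that static analysis would try to elide:
        if slot.generation == r.generation { slot.value } else { None }
    }
}

fn main() {
    let mut heap = Heap { slots: Vec::new() };
    let r = heap.alloc(42);
    assert_eq!(heap.deref(&r), Some(42));
    heap.free(&r);
    assert_eq!(heap.deref(&r), None); // use-after-free detected
    println!("ok");
}
```

Eliding the comparison through loads/stores of struct members is exactly where aliasing information becomes necessary: another reference may have freed the slot in between.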
Vale has three release modes:
- Resilient mode, which is fast and memory safe; it will halt the program when we try to dereference a freed object.
- Assist mode, for development, to detect potential problems even earlier.
- Unsafe mode, which turns off all safety.
I absolutely hate that. I know Zig does something similar, and while I'm skeptical of that too, Zig does not try to be memory safe IIRC, only memory-safer than C and simpler; so unsafe mode is the default, and the safe mode is mostly for development. But trying to enforce memory safety with (a lot of!) runtime checks, then providing a way to disable them and still saying "we're safe"? No, you're not. The worst memory safety bugs (and security vulnerabilities) happen in production.
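Rust's stance makes a useful contrast: safety checks like bounds checking stay on in release builds, and only `debug_assert!` is compiled out; opting out of a check requires explicit `unsafe` at the call site, not a global build mode. A tiny sketch (the function name `checked_get` is mine):

```rust
// Bounds checking in Rust survives --release; there is no build flag
// that silently turns safe indexing into unchecked indexing.
fn checked_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied() // bounds check present in every build mode
}

fn main() {
    let v = [10, 20, 30];
    assert_eq!(checked_get(&v, 1), Some(20));
    assert_eq!(checked_get(&v, 5), None); // out of bounds caught in release too
    println!("ok");
}
```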
I stopped reading. I don't really believe they will be able to do it, but I'll be happy to be surprised.
No, the worst thing is definitely behavior intended by attackers. This is something JS mostly doesn't have, in this context (of course, every piece of software has bugs and no language can prevent all security exploits).
`this` straight in JS (it's not always the instance of the class a method was called on, especially in closures), what happens when there's an edge-case bug and
I find it interesting that they have all these claims about safety. At least a language like Ada has tons of research and proof to back up its claims. And lots and lots of production code. Can't forget that either.