Negative views on Rust: language-based operating systems

I'm not disagreeing with you. I'm just pointing out that it's misattributing the source of the lost performance.

...

You know, I do agree with you. A 100% bulletproof, compiler-checked OS like Theseus is predicated on having a 100% bulletproof language syntax and semantics that enforces the protection, and of course a 100% correct compiler implementation. And I agree that nothing is 100% correct, so this may end in tears.

Hmm... The offer on the table with Rust is memory safety. That is all an OS like Theseus is asking for. If Rust has a failure in providing memory safety then that is a bug that needs fixing.

I have a challenge then:

Write a Rust program that endlessly runs two different functions in two threads, async or sync. Call them alice and eve. Arrange that eve can read and/or write data belonging to alice (that is to say, data that is not actually given to alice and eve by the caller for mutual access), without the use of unsafe.

Re async: My understanding is that async was introduced into languages like Rust and C++ as a consequence of the C10K problem: C10k problem - Wikipedia

Historically we forked processes in order to get work done in parallel. For example, Apache forked a process for every request. This of course is horribly slow and memory hungry, so later we moved to using threads. Not as heavy as processes, but things like Apache still could not handle a lot of connections, as memory piled up with the thousands of connections of modern web services, most of those threads spending their time just waiting for some input. The solution then was to move to async. Async relieves even more of the time and memory overheads of thread swapping. Async is good for waiting on lots of things at the same time. There is a reason nginx is renowned for its performance: it does things asynchronously.
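To make the contrast concrete, here is a minimal sketch (assuming the tokio crate with its default multi-threaded runtime and net/io features enabled; the echo logic is just a placeholder): each connection becomes a cheap suspended task while it waits for input, instead of a parked OS thread holding a full stack.

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        // Each connection is a lightweight task, not an OS thread;
        // while it awaits input it occupies only a small state machine.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            while let Ok(n) = socket.read(&mut buf).await {
                if n == 0 { break; }
                let _ = socket.write_all(&buf[..n]).await;
            }
        });
    }
}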

It's worse than that. It's like a “100% bullet-proof plan to colonize the solar system” — where everything is predicated on having that nice cheap rocket. Only someone else has to provide the rocket.

But isn't that putting the cart before the horse? It may make sense as some kind of theoretical plan, but before you invest heavily into that thing you need to know how to make that nice cheap rocket!

But who would fix these? For compiler developers, bugs which lead to miscompilation of code (and violation of Rust's guarantees) are only high-priority if they are triggered by some common idiom in common C++ code. The Rust compiler has a nice collection of workarounds which have been carried for years because they are not considered “important enough” by LLVM developers.

But Rust developers, in turn, have a similar attitude to Rust input! They wouldn't just go and fix the compilation of some crazy, obscure code just because it can be used to violate some invariants. They may leave such bugs unfixed for a long time; the fact that they don't affect many people means they are assigned low priority.

Why would I write such a program before I have a chance to get some money from your system? If you want to see how that could have been done in older versions of the compiler — just look at the issue tracker. There are dozens of such bugs. If you want something like that to crack that OS — you can build such code in a few days by playing with one fuzzer or another.

One of the solutions was to move to async.

Sure. But it's hard to modify: its code is very hard to read and edit, and, if you can change the kernel, there is another option: change the scheduler.

Google did it, as I pointed out above. Google Fibers are threads; they don't need all that async hoopla! And if you plan to handle more traffic than Google… I would be very interested in knowing what you plan to build and who would finance the whole thing.

Ok then. In Rust I should not trust.

Except I do of course:
When handling many sessions containing sensitive data in multiple threads in a web server.
When using millions of lines of umpteen crates from untrusted vendors.

Hope it does not end in tears.

More like "only a fool doesn't rely on the security principle of 'defense in depth'".

(i.e. always assume your protection will be broken and pile up enough layers that, hopefully, the attacker can't find the holes in all of them at the same time... sort of like how they combat the rise and spread of antibiotic-resistant bacteria by giving you a blend of two or three antibiotics and insisting that you take the whole course rather than stopping early.)

Note: this is the point where the discussion of language-based operating systems split off from the discussion of the chrisdone.com blog post

It will, of course. Just on a much smaller scale, because most of the code there is not written by a potentially hostile adversary. A signature/certificate of authority works in such a case.

When you try to use that for the OS/web… near the end of the Java applet/ActiveX era, I think most Java applets and ActiveX components were trojans of various sorts.

Rust does help with safety, which is a subset of security. But the language is not designed to be hardened for security-critical environments.

I'll admit that the research around trying to get the language to be more secure is interesting. But security is a very broad class of issues. Anyone can write an insecure hash algorithm that the compiler will be unable to verify, for instance. Accepting a password as a CLI argument is a common security vulnerability.
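For instance, here is a hypothetical snippet (my own illustration, not from the discussion): it is perfectly memory-safe Rust, compiles cleanly, and is still useless as a password hash; exactly the kind of flaw no compiler can detect.

// A memory-safe but cryptographically worthless "hash": it XOR-folds the
// bytes, so collisions and preimages are trivial to construct.
fn insecure_hash(password: &str) -> u8 {
    password.bytes().fold(0u8, |acc, b| acc ^ b)
}

fn main() {
    // Safe and deterministic, yet "ab" and "ba" collide (as do countless
    // other inputs), so it offers no security whatsoever.
    println!("{}", insecure_hash("ab"));
    println!("{}", insecure_hash("ba"));
}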

A dumb analogy for the difference might go something like this: Safety is provided by seatbelts, and security is provided by door locks; seatbelts can protect people from accidents, and door locks can protect people from bad actors.


This challenge was quite fun and tricky but I eventually found a way to do it. The most important trick was to exploit Implied bounds on nested references + variance = soundness hole · Issue #25860 to transform &'a T to &'static T just as the fake_static crate does. That can then be combined with enums to implement transmute with only safe Rust code, something like:

use std::cell::Cell;

pub fn transmute<T, U>(e: T) -> U {
    // Store a None `U` value and get a reference to it:
    let mut value: Result<Cell<Option<U>>, Cell<Option<T>>> = Ok(Cell::new(None));
    // `make_static` launders the lifetime to `'static` (see the sketch below):
    let u_ref: &Cell<Option<U>> = if let Ok(v) = &value { make_static(v) } else { unreachable!() };
    // Store a Some `T` value:
    value = Err(Cell::new(Some(e)));
    // Use our previous `U` reference to access the stored `T` value:
    u_ref.take().unwrap()
}
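For completeness, a sketch of the missing helper: `make_static` here stands for the lifetime-extension function that the fake_static crate builds on issue #25860, roughly as below. It only compiles while that soundness hole remains open, and it is exactly the kind of bug the challenge was probing for.

// Exploits rust-lang/rust#25860: coercing `helper` to this function-pointer
// type uses the implied bound from the nested reference `&'static &'a ()`,
// which lets an arbitrary `&'a T` be returned as a `&'static T`.
fn make_static<'a, T: ?Sized>(input: &'a T) -> &'static T {
    fn helper<'x, 'y, T: ?Sized>(_: &'x &'y (), v: &'y T) -> &'x T {
        v
    }
    let f: fn(_, &'a T) -> &'static T = helper;
    f(&&(), input)
}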

Unlike most of the commenters, I'm optimistic about the approach and I think it's definitely worth researching.

I think Rust has proven that the ownership model allows native-speed memory-safe code, and it only logically follows to attempt what languages like Smalltalk and Lisp were trying before: merging a compiler with an OS, but this time with native performance and very low overhead.

As for bugs in the compiler: I think to make this approach viable, the compiler infrastructure might need to be created from scratch to support it, including rethinking the language itself. Right now Rust might be good enough to prove the viability of the idea, but the compiler implementation assumes that memory safety is important but not critically important. Had it all (including the language design) been built to ensure verifiable correctness, things could be more robust correctness-wise, even including runtime checks for certain cases, removal of certain features and so on.


That's what Sun tried to do. Spent billions and went bankrupt. The problem wasn't solved.

More robust wouldn't be good enough, sadly.

For a security border to be viable you need a very rigid and stable set of rules which can be reviewed and verified by security researchers.

For a compiler to be speed-competitive you need to keep changing the list of optimizations/transformations, without keeping these stable.

These two sets of goals are conflicting: if you demand state-of-the-art optimization techniques then you, invariably, make the protection leaky (which mostly defeats its purpose — kudos to @Lej77 ).

And if you are OK with losing 20-30% of performance then you may use a dedicated security border (hardware-provided, or something like Native Client), which is a much more flexible and usable scheme.

Well… it may never become a general-purpose operating system where you can run potentially hostile third-party code, but it may be fine as a limited OS where all code comes from a trusted source and we don't plan to fight a hostile adversary, but just want to mitigate errors. In that capacity it may be quite usable.


FWIW, this is exactly the use case targeted by embedded devices and safety-critical systems.

In these scenarios, the developers are the people writing the kernel and they can assume the code being executed was written with good intentions. However, they still want to avoid bugs and memory issues, so it's beneficial to take the belts-and-braces approach you would get from combining the Rust compiler with your normal safety-critical design practices and hardware features like memory isolation or watchdog timers.


Sounds like exactly what Oxide is doing? On Hubris And Humility - Cliffle

Well, they tried to do that with Java, so yeah...

I wonder how many billions of dollars were spent on the design of all the CPU architectures that were affected by Meltdown and the like.

I don't really know if hardware-enforced security solved anything either. It did seem to work for the time being, that's all.

Which is something a minimal, well-specified core language would be perfect for. On top of it, one can define a pragmatic user-facing language that is internally transformed into the core-language representation.
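As a purely hypothetical illustration of that split (the names and shape are my own, not an existing design): the core language stays tiny and auditable, and everything user-facing is just sugar that lowers into it.

// A deliberately tiny core language: few constructs, so its semantics
// (and any verifier for them) can be reviewed and kept stable.
enum Core {
    Lit(i64),
    Var(usize),                  // de Bruijn index into the environment
    Add(Box<Core>, Box<Core>),
    Let(Box<Core>, Box<Core>),   // bind the first expression, then evaluate the second
}

fn eval(e: &Core, env: &mut Vec<i64>) -> i64 {
    match e {
        Core::Lit(n) => *n,
        Core::Var(i) => env[env.len() - 1 - *i],
        Core::Add(a, b) => eval(a, env) + eval(b, env),
        Core::Let(bound, body) => {
            let v = eval(bound, env);
            env.push(v);
            let result = eval(body, env);
            env.pop();
            result
        }
    }
}

fn main() {
    // A surface form like `let x = 2 + 3 in x + x` would be lowered to:
    let program = Core::Let(
        Box::new(Core::Add(Box::new(Core::Lit(2)), Box::new(Core::Lit(3)))),
        Box::new(Core::Add(Box::new(Core::Var(0)), Box::new(Core::Var(0)))),
    );
    println!("{}", eval(&program, &mut Vec::new()));
}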

Nvidia's ARM CPU implementations (like Denver) have been on the market for a decade, basically running a whole shadow OS that compiles ARM machine code and then uses runtime profile-guided techniques to rewrite microcode on the fly, which seemed like some evil complexity sorcery when I first learned about it. And those CPUs are used in cars and stuff. And the ARM architecture was not specifically created to be JITable and software-defined.

Considering how many resources are being spent on hardware verification and what kind of crazy stuff is already happening under the hood in the hardware world, I don't understand why any language-based enforcement approach would be dismissed as not feasible.

Also, the fact that an OS language like that wouldn't require hardware support to enforce security doesn't mean it couldn't use it when it's available.

The distinction between hardware and software is already very blurred. The ownership model is a breakthrough that unlocks possibilities that were previously impractical (because the price of GC is too damn high!). All the skeptical concerns are somewhat valid, but none of them is fundamentally unsolvable.

I think this was partially your point, but... this does sound like how machine code (assembly, whatever you want to call it) works in practice. (Well, maybe minus the well-specified and provable-correctness bits...)

Hardware does miraculous things with machine code to be able to do what it does (like, honestly, some instructions literally cost zero cycles in common usage, how is that even possible‽[1]), some of which is being walked back because it was too good to be correct (Spectre, Meltdown, etc.). But in general, machine code is a language which is itself interpreted by the hardware in a not-strictly-linear fashion.

Further blurring the line between software and hardware are the emerging techniques for developing hardware basically as software, then "printing" your program into hardware. (I've not played with it, so I don't know how it works in practice, but hearing people talk about it is a trip.)

There is a very decent chance that in the future there could be a CPU with a higher-level language than pure assembly as the language you talk to the processor with. I agree that it probably won't be Rust (it'll be a purpose-built language, in the way that assembly is purpose-built), but Rust is a good medium for proving the concept. (While it probably also won't be wasm, there are similar experiments with wasm.)

(GPUs were sort of this for a time; you fed the GPU (drivers) some textual code (GLSL, HLSL, etc.) and the GPU (drivers) ran that high-level code (after internal preprocessing, of course). It's only semi-recently that pre-compiled shaders have become somewhat commonly supported.)


  1. I know basically how this is possible, you don't have to tell me ↩︎


Machine code is just the most primitive (thus simple) abstraction layer between hardware and software. We built computing from first principles with no better means at the time, but there's no way it was the best abstraction we can ever have.

That has not been true for ages now. Right now machine code is just a primitive serialization format describing the desired computation. It's still used everywhere because we have a whole computing world built around it, and little effort has been put into finding a suitable replacement. It's not fundamental for the hardware itself, and not fundamental for the software itself.

High-end CPUs have had built-in full-time compilers and JITs for a decade and more! They could really be fed something higher-level that is more abstract and thus compact, and figure out how to execute it best on their own.

And when they are not capable of such things, it could be done beforehand for them. Whether things are done in hardware or in software, it's all fluid.

It's just that such an alternative description of computation has to be practical and even better, which previous GC-based attempts were not.

Because modern CPUs do not actually execute any machine code instructions (except for the most primitive ones). Machine code is just a primitive serialization format describing the computation. CPUs take the machine code and internally compile it for the actual underlying hardware, which changes from model to model. One can argue about the meaning of "compile" here, but broadly speaking it means transforming one representation of a computation into another.

In the future (and in the present already) the distinction between hardware and software will be just an implementation detail. Is your computation compiled by the OS, or by the CPU itself? Does it execute on an integrated FPGA, or a GPU, or on something more CPU-like? It's not irrelevant, but it will be more of an implementation detail.

I think the ownership model is going to be a fundamental primitive in such a world, because it allows a high-level description of computation that maps to how the hardware actually operates: e.g. it supports in-place mutation. And all that without sacrificing integrity and/or performance. Rust proved that one can have a language that is both high-level and memory-safe, yet performant like C/raw machine code. Something that was not possible before.
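A minimal sketch of what that means in practice (my own example, not from the post): the borrow checker proves at compile time that the mutation below is exclusive, so it compiles to plain in-place stores with no GC, no reference counting and no runtime checks.

// Exclusive (&mut) access is proven statically, so this loop compiles
// down to plain in-place stores over the buffer's memory.
fn scale_in_place(samples: &mut [f32], gain: f32) {
    for s in samples.iter_mut() {
        *s *= gain;
    }
}

fn main() {
    let mut buf = vec![1.0_f32, 2.0, 3.0];
    scale_in_place(&mut buf, 0.5);
    // The compiler would reject a second live &mut (or a & alongside it),
    // which is exactly the aliasing discipline the hardware benefits from.
    println!("{:?}", buf);
}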

Isn't that a description of bytecode, essentially?

Because developers of the compilers are not designing them with such usage in mind.

And if you started writing a new one from scratch you would need many years (and, most likely, billions of dollars) to match the performance of LLVM/GCC/etc.

We don't know that. We just know that previous attempts failed.

Very few of the exploits which plagued the JVM while it was able to run unsigned Java applets were related to the GC.

Note that this changed when GPUs stopped being an exclusive resource and started becoming a shared one.

I'm still not sure how well that works. I know that it's easy to crash the Android Emulator, e.g., if you feed it bad requests. It's not considered a problem because the emulator is a tool for developers, and developers don't need robust protection from themselves.

I'm pretty sure that would be the best we would have for a very, very, VERY long time.

Sunk costs. Similar to how we cannot ditch QWERTY and even use it on phones, where it's not that hard, theoretically, to introduce any other keyboard layout.

But then you would have to ask billions of people to relearn to use it… and that is not happening.

No, they don't. They split machine code into primitive instructions (even simpler than what you see in assembler) and these instructions are then executed in parallel (if possible). What they are doing is a far cry from what compilers are doing. There are some peephole optimizations (called instruction fusion in a CPU), but that's it.

Not even remotely close to what contemporary compilers are doing.

Sure, the aforementioned Crusoe/Denver/E2k tried to use software and compilers to achieve better performance by using more complicated compilation techniques. Failed. Intel tried, and probably others had quite a few similar internal projects. Failed, too.

That is not yet known. Attempts happened, yes. Lots. With numerous failures. We have yet to see any actual successes. And if the best the “new and shiny” approach may offer is “almost like what we had before, but not quite”…

That's not enough for a general-purpose OS. You also have to provide a way to run already-written code.

No one would rewrite billions of lines of already-written code.

To some degree. Software is no longer fluid. There is just too much of it, and it's too hard to alter for any approach which calls for a full rewrite of everything to be practical.

That's what they said half a century ago, before most forum users were even born (including me). There are lots of artifacts of that era. For example, here you may read about the way the Xerox Alto did that (programs could load microcode and thus alter the CPU itself; cool, isn't it?), and here you can read about Google's attempt (just, please, don't miss the note about the abandonment of heterogeneity).

Time will tell. I'm not saying it's absolutely impossible, but when someone preaches that half-century-old idea to me and claims that some minor twist may make something which has already failed dozens of times in practice become mainstream soon… I tend to be very cautious.

Rust cracked a very serious problem, indeed. But without the active sabotage of C/C++ compiler writers, which turned a somewhat-dangerous-but-pretty-robust language into a veritable minefield… I don't think it would have gone anywhere.

Do you envision something similar with OSes? I, for one, don't see anything like that: while new versions of OSes may demand some small changes to existing programs — they don't ask for a full rewrite.

I think the last such attempt was when Microsoft under Ballmer tried to push UWP as a replacement for WinAPI. This attempt, essentially, made Microsoft lose the smartphone market; Ballmer was replaced as CEO, after which we got the usual conclusion, which, of course, made any radical changes to the status quo even harder.


Putting aside the problem of ensuring a bug-free compiler in order to build an operating system that enforces memory protection at compile time (not to mention a bug-free OS itself), I woke up this morning with a nagging thought....

Currently the Rust compiler depends on LLVM, that is to say it depends on C++.

That means that it is impossible to run the Rust compiler under Theseus.

Unfortunately that renders Theseus not an operating system in one crucial respect: one cannot operate it by developing Rust code on it. That is especially unfortunate given that Theseus is written in Rust. Rather, it is reduced to a mere runtime for Rust code built elsewhere.

Is there any hope of Rust ever being a first-class, self-hosting language?

There is rustc_codegen_cranelift. It has been part of the main Rust repo as a subtree for a while. rustc can be compiled with it, AFAIK.


I'm aware of that, and agree with it. Obviously my Rust program can be totally resistant to one part of it accessing memory that belongs to another part, whilst at the same time it could easily be logically wrong and leak secrets or allow access to unauthorised connections, etc.

However I think this gets a bit meta when the OS is written in Rust and uses Rust's memory access rules to enforce isolation between other Rust programs. In this case the OS is leveraging Rust's memory safety rules at compile time to provide the security of process isolation.

The idea is simple. The OS basically boils down to a function:

fn os() {
    // Lots of stuff.
}

The applications loaded and run by os() also boil down to functions:

fn app_1() {
    // Lots of stuff.
}
fn app_2() {
    // Lots of stuff.
}

Clearly Rust's memory rules do not allow code in app_n() to access anything in the other apps or in os(). They can't even see each other, unless some capability is given to the apps by the OS through parameters or whatever (a sketch of that follows below). Ergo we have process memory isolation.
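A minimal sketch of that capability-through-parameters idea (hypothetical names, not taken from Theseus): the OS hands each app a handle that exposes only what that app may touch, and the type system keeps everything else out of reach.

// Hypothetical capability handle: the only thing an app can do with the
// OS's state is what this struct's methods allow.
struct FileCap<'a> {
    buffer: &'a mut Vec<u8>,
}

impl<'a> FileCap<'a> {
    fn append(&mut self, byte: u8) {
        self.buffer.push(byte);
    }
}

fn app_1(mut files: FileCap<'_>) {
    // app_1 can use its capability...
    files.append(42);
    // ...but it has no way to name, let alone touch, app_2's data.
}

fn app_2() {
    let secret = vec![1, 2, 3]; // invisible to app_1 without unsafe
    let _ = secret;
}

fn os() {
    let mut app_1_storage = Vec::new();
    app_1(FileCap { buffer: &mut app_1_storage });
    app_2();
}

fn main() {
    os();
}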

Now, if app_n() and os() were all just parts of a program you were building, just regular functions and methods, you could rightly complain if you found that code in one of them could affect data in the others. That would be a hole in Rust's memory access rules. That is to say, a bug that needs fixing.