Hey friendly Rust folks! I hope this is an appropriate question for this forum, first time checking it out.
Context: I've been playing around with building a hobby OS in my free time, and after laying out the basics (64-bit mode, virtual memory / physical memory allocation, a kernel heap, multitasking with preemptive scheduling) I'm starting to think about building a user space. Rather than poorly reinventing the POSIX wheel, I decided to try out a pretty atypical architecture.
Specifically, I'm imagining that all user-mode processes run in one shared virtual memory space. Usually this is a big no-no, because any old process would happily be able to interfere with any other, but that's where Rust comes in. If I understand right, purely safe Rust code cannot "break the memory sandbox", so to speak. So imagine a deployment system in which applications are distributed via a trusted build server that guarantees building with something along the lines of `--cap-lints forbid -Funsafe-code` and produces PIE output. These applications could then be loaded at non-overlapping locations in virtual memory and trusted to run side by side. Of course they would eventually need to call into kernel syscalls, which require unsafe C-ABI calls, but those could be wrapped in a safe syscall library linked in by the trusted build system.
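To make that concrete, here's roughly the shape I have in mind for that "libkernel" crate. The syscall number and the System-V-style register convention are invented for illustration; the point is that the only `unsafe` lives in this one trusted crate, which the build server links in, while applications themselves compile with unsafe code forbidden:

```rust
use core::arch::asm;

/// Hypothetical syscall number for writing to the process's log.
const SYS_LOG: usize = 3;

/// Raw two-argument syscall: unsafe, and private to this trusted crate.
unsafe fn syscall2(num: usize, a0: usize, a1: usize) -> usize {
    let ret: usize;
    unsafe {
        asm!(
            "syscall",
            inlateout("rax") num => ret,
            in("rdi") a0,
            in("rsi") a1,
            out("rcx") _, // clobbered by the `syscall` instruction
            out("r11") _,
            options(nostack),
        );
    }
    ret
}

/// Safe wrapper: the invariants (valid pointer, correct length) are upheld
/// here, so application code never needs `unsafe` to call it.
pub fn log(msg: &str) {
    unsafe {
        syscall2(SYS_LOG, msg.as_ptr() as usize, msg.len());
    }
}
```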
So, question 1: does this seem like a tenable security model? Is it likely that safeguarding against the various edge cases (weird build requirements, unexpected compiler flags, etc.) quickly becomes intractable? Trusting rustc to sandbox user-space applications is easy to buy if they are guaranteed to contain only safe code, but it's not so obvious to me whether that "safe-only" guarantee can be enforced across the board in a clean way.
The other half of this question has to do with getting these processes to talk to one another. Working in a single shared memory space doesn't seem particularly useful if processes still just communicate through IPC or not at all (sure, we might slightly reduce context-switching overhead, but that hardly seems worth the pain). My gut feeling is that one shared memory space could open up a more natural "thread-like" model for processes, in which the kernel is only needed to help processes exchange pointers and establish a shared channel, while the default mode of communication looks more like the high-level thread communication available in Rust. E.g., once an appropriate channel is established, objects of complex type could be passed across without serialization over internal sockets or complicated memory-mapping procedures.
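To illustrate, here's a sketch of the user-facing API I'm picturing; all the names and the kernel interface are invented. Since both processes share the address space, `send` only hands a pointer to the kernel; the heap data itself never moves and nothing is serialized:

```rust
use core::marker::PhantomData;

/// Opaque handle the kernel hands back when it pairs two endpoints (invented).
#[derive(Clone, Copy)]
pub struct RawEndpoint(usize);

// Stubs standing in for the hypothetical syscall wrappers.
fn sys_channel_send(_ep: RawEndpoint, _addr: usize) { /* kernel enqueues the address */ }
fn sys_channel_recv(_ep: RawEndpoint) -> Option<usize> { None }

pub struct Sender<T: Send + 'static> {
    ep: RawEndpoint,
    _marker: PhantomData<fn(T)>,
}

pub struct Receiver<T: Send + 'static> {
    ep: RawEndpoint,
    _marker: PhantomData<fn() -> T>,
}

impl<T: Send + 'static> Sender<T> {
    /// Transfers ownership to the peer: only the pointer crosses the kernel.
    pub fn send(&self, value: Box<T>) {
        let addr = Box::into_raw(value) as usize;
        sys_channel_send(self.ep, addr);
    }
}

impl<T: Send + 'static> Receiver<T> {
    /// Reconstitutes the Box on the receiving side. The `unsafe` lives inside
    /// trusted libkernel code, consistent with the model above.
    pub fn recv(&self) -> Option<Box<T>> {
        sys_channel_recv(self.ep).map(|addr| unsafe { Box::from_raw(addr as *mut T) })
    }
}
```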
The difficulty then translates to ensuring that the type checking in two distinct Rust programs can somehow cooperatively guarantee that only objects understood as the correct type on both ends are passed (to preserve the safe-Rust guarantees). I think something like `#[repr(C)]` can ensure that structs are arranged in a common format, but more basic than that, it seems to me that the "libkernel" API (the safe Rust wrappers around syscalls) would either have to specify fixed formats for the objects it knows how to set up channels for, or there would need to be some runtime way(?) to check that the types match on either side.
Question 2, then: do you have suggestions for designing the type system to make sure we are "impedance matching" the object types at either end of a kernel-established channel (or other mode of communication)? It seems not too difficult to offer libkernel functionality that constructs channels for specific types, and maybe providing a few building-block types would be sufficient, but I wonder if there's a nice generics approach here. Fundamentally, the processes at either end are built by different invocations of rustc, so maybe this is impossible?
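One idea I've been toying with, sketched below with invented names: libkernel could define an unsafe trait whose implementors declare a stable fingerprint of their layout, and the kernel would refuse to pair two endpoints whose fingerprints disagree. (As far as I can tell, `core::any::TypeId` can't play this role, since it isn't guaranteed stable across separate compilations.)

```rust
/// Implemented only for types whose layout both sides can agree on, e.g.
/// `#[repr(C)]` structs of other `ChannelSafe` types. Marked `unsafe`
/// because an incorrect fingerprint would break the whole scheme.
pub unsafe trait ChannelSafe: Send + 'static {
    /// A layout fingerprint, e.g. a hash of the type's canonical definition.
    const FINGERPRINT: u64;
}

#[repr(C)]
pub struct Job {
    pub id: u64,
    pub priority: u32,
}

// In practice this impl would be generated, not hand-written.
unsafe impl ChannelSafe for Job {
    const FINGERPRINT: u64 = 0x9e37_79b9_7f4a_7c15; // placeholder hash value
}

#[derive(Debug)]
pub struct ChannelError;

// Stub standing in for the hypothetical syscall wrapper.
fn sys_channel_connect(_peer_pid: u64, _fingerprint: u64) -> Result<(), ChannelError> {
    Ok(())
}

/// Channel setup passes the fingerprint down to the kernel, which completes
/// the connection only if the peer registered an identical one for its end.
pub fn connect<T: ChannelSafe>(peer_pid: u64) -> Result<(), ChannelError> {
    sys_channel_connect(peer_pid, T::FINGERPRINT)
}
```

I'd hope the `unsafe impl` could come from a derive macro shipped with the trusted toolchain, though I'm not sure how that interacts with builds that forbid unsafe code.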
The final issue (that I've thought of so far...) is that in the usual process model, each process has a process-local heap to allocate from. For example, the global Rust allocator used by `std` should somehow talk to the OS to get at this heap. That heap is also conveniently cleaned up when a process dies, so even if the process happily leaks a big pile of memory, it doesn't endanger the OS. In this shared-virtual-memory model one could do the same, but then we run into allocation lifetime issues: if an object was created by process 1 and ownership was transferred over a channel to process 2, we have a problem if process 1 dies while process 2 continues to access the object. On the other hand, if we keep one big heap for all processes (which could work in a single memory space), then lifetimes are safely managed through Rust's lifetime analysis, I think; but now any application-level memory leaks do not get cleaned up when the process dies, and instead become OS-level memory leaks.
Then question 3: could this issue be mitigated by defining custom `Box` types that use an allocator drawing from persistent, OS-global heap memory? E.g., only objects needing to be shared between processes would be constructed with this `Box` type, and the usual ownership rules would apply to ensure this sort of memory is cleaned up. The same memory-leak worries persist, but become localized to code that attempts to share things between processes, which might be rarer. Alternatively, is there perhaps a simpler solution to managing this memory across multiple Rust execution units that I'm missing?
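Here's a rough sketch of what I mean, with invented syscall names (the stubs fall back to the local allocator just so the snippet is self-contained, and zero-sized types are ignored for brevity). On nightly one could presumably implement `core::alloc::Allocator` for the shared heap and use `Box::new_in` instead:

```rust
use core::alloc::Layout;
use core::ops::Deref;
use core::ptr::NonNull;

// Stubs for the hypothetical shared-heap syscalls; a real build would call
// into the kernel here.
unsafe fn sys_shared_alloc(layout: Layout) -> *mut u8 {
    std::alloc::alloc(layout)
}
unsafe fn sys_shared_free(ptr: *mut u8, layout: Layout) {
    std::alloc::dealloc(ptr, layout)
}

/// An owning pointer into the OS-global shared heap. Ownership can move
/// across a channel, and whichever process drops it last frees the memory.
pub struct SharedBox<T: Send + 'static> {
    ptr: NonNull<T>,
}

impl<T: Send + 'static> SharedBox<T> {
    pub fn new(value: T) -> Self {
        let raw = unsafe { sys_shared_alloc(Layout::new::<T>()) } as *mut T;
        let ptr = NonNull::new(raw).expect("shared heap exhausted");
        unsafe { ptr.as_ptr().write(value) };
        SharedBox { ptr }
    }
}

impl<T: Send + 'static> Deref for SharedBox<T> {
    type Target = T;
    fn deref(&self) -> &T {
        unsafe { self.ptr.as_ref() }
    }
}

impl<T: Send + 'static> Drop for SharedBox<T> {
    fn drop(&mut self) {
        unsafe {
            self.ptr.as_ptr().drop_in_place();
            sys_shared_free(self.ptr.as_ptr() as *mut u8, Layout::new::<T>());
        }
    }
}

// Sound only because the shared heap would be mapped identically everywhere.
unsafe impl<T: Send + 'static> Send for SharedBox<T> {}
```

The appeal is that moving a `SharedBox` across a channel carries the `Drop` obligation with it, so the memory is freed by whichever process ends up holding it, regardless of which process allocated it.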
Overall, this is a pretty unusual use-case for Rust (for good reason...), so it's been difficult to find literature on how to get the compiler and type system working in our favor. Hopefully the experts here have some thoughts. Thanks for any insights!