Coming from a DB-heavy background, I don't think mmap is the primary issue, or even a significant one. The much bigger problem with writing a DB in Rust is… well, writing a DB. Production-ready (i.e. non-research) databases in particular are large and complex (and, in the worst case, downright complicated) codebases, which means that writing a DB from scratch is a huge effort. There are a lot of moving parts and a lot of opportunities to screw up.
My own PhD research is directed toward a single aspect of improving the state of the art of modern database use: making the very top layer of DBs – the query language – strongly typed, and having such a strongly-typed language support real-world use cases (like recursive CTE queries that traditional ORMs refrain from supporting). Even this tiny slice of databases is enough work that I've been pursuing it for more than a year and a half. And while there is nothing inherently "hard" in it (it's only an application of industry best practices, after all), the sheer amount of detailed work it needs makes me wonder how many man-years of work must have gone into something as big as Postgres, for instance.
mmap is not the problem. Lack of resources likely is.
I will concede to you that DB covers lots of topics. That said, any ACID DB needs durability. If we are not writing the durability layer, it is because someone else did. If we are writing the durability layer -- be it write-ahead logs, Bitcask, LMDB, LevelDB, ... -- mmap, though not required, is often useful.
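As a concrete illustration of how small the core of a durability layer can look (and how much detail still lurks around it), here is a minimal sketch of the append path of a write-ahead log, with no mmap involved. The record format and the function name are invented for this example; a real WAL would also need checksums, recovery-time replay, and careful handling of partial writes.

```rust
use std::fs::{File, OpenOptions};
use std::io::Write;

// Minimal WAL append sketch: each record is length-prefixed and fsync'd
// before the caller may treat the change as durable.
fn wal_append(log: &mut File, payload: &[u8]) -> std::io::Result<()> {
    log.write_all(&(payload.len() as u32).to_le_bytes())?;
    log.write_all(payload)?;
    log.sync_all() // the durability point: data and metadata reach the disk
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("demo.wal");
    let mut log = OpenOptions::new().create(true).append(true).open(&path)?;
    wal_append(&mut log, b"set k=v")?;
    println!("log is now {} bytes", std::fs::metadata(&path)?.len());
    std::fs::remove_file(&path)?;
    Ok(())
}
```

The `sync_all` call is where durability actually happens; everything before it is just buffered I/O that a crash can lose.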
Not to belittle read-only / analytical DBs (where computer crashes can be handled by caching intermediate results & restarting), but IMHO high-performance RW DBs are significantly more complex; and a lot of that complexity has to do with designing data structures that are high-performance in the common case yet operate in a way that can handle crash recovery. IMHO, mmap semantics, especially during crashes, are absolutely key to that.
Speaking of smaller, file-backed DBs: SQLite didn't even support memory-mapping until version 3.7.X, and it can actually hinder durability and cause DB corruption; see e.g. the developers' own description of its use in SQLite. Note in particular that the problems that came up w.r.t. races and volatility aren't unique to Rust, because:
When updating the database file, SQLite always makes a copy of the page content into heap memory before modifying the page. This is necessary for two reasons. First, changes to the database are not supposed to be visible to other processes until after the transaction commits and so the changes must occur in private memory. Second, SQLite uses a read-only memory map to prevent stray pointers in the application from overwriting and corrupting the database file.
So I find it very unfair to say that these problems arise out of Rust's semantics or design – they are more general and deeply-rooted questions.
It depends on what you mean by "unsafe". An mmap() memory region is just memory; you can manage that hunk of memory any way you want. In Rust we use unsafe to designate that a function cannot be validated by the compiler. Using mmap() is bypassing the language and managing memory yourself. There are good reasons to do this for program efficiency, but strictly speaking it is unsafe.
Assuming you have that hunk of memory, how do you manage it safely? If it's private, then only your Rust threads can access it, so the usual mutex constructs are fine. Others have mentioned disk swapping, but that can happen to the regular text and data of your program in any language, to shared libraries, and to mmap() regions; unless you fiddle with the default attributes, the cache quality of mmap() regions requires no special handling. The page-tracking algorithm in any UNIX will flush the L3 cache when it restores a page from disk, and your pointer will load the right address when it's scheduled after the page load.

You'll need "volatile" pointers only if you have two threads (or programs) accessing the same mmap() page at the same time. (Or really any memory; it's just that with mmap(), Rust isn't managing the memory for you, so it can't assess the volatility of your accesses.)

But where mmap() gets really unsafe is shared mappings, that is, two programs accessing the same memory at the same time. There is no way the language can protect you from what the "other" program is doing to the same memory; it might not even be a Rust program. Here you need a named POSIX mutex used by both programs, atomic operations, or perhaps a spin lock. But the algorithm that manages the memory lives at the meta-program level and is not known at compile time, so the Rust compiler cannot assert its safety.
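To make the "atomic operations or spin lock" option concrete, here is a sketch of a spinlock built from a single atomic, the kind of primitive you could place *inside* a shared mapping (a plain `std::sync::Mutex` is not guaranteed to work across processes). For the sake of a self-contained example it guards ordinary heap memory shared between threads; all names are invented.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// A one-word spinlock: safe to embed in raw shared memory.
struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }
    fn lock(&self) {
        // Acquire on success orders the critical section after lock acquisition.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }
    fn unlock(&self) {
        // Release publishes the critical section's writes to the next locker.
        self.locked.store(false, Ordering::Release);
    }
}

// Stand-in for "a hunk of memory the language is not managing for you".
struct Shared(UnsafeCell<u64>);
unsafe impl Sync for Shared {} // we promise: only touched while holding the lock

fn parallel_count(threads: usize, per_thread: usize) -> u64 {
    let lock = Arc::new(SpinLock::new());
    let data = Arc::new(Shared(UnsafeCell::new(0)));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let (lock, data) = (Arc::clone(&lock), Arc::clone(&data));
            std::thread::spawn(move || {
                for _ in 0..per_thread {
                    lock.lock();
                    // SAFETY: exclusive access is guaranteed by the spinlock.
                    unsafe { *data.0.get() += 1 };
                    lock.unlock();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    unsafe { *data.0.get() }
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // prints 4000
}
```

In a real shared mapping you would place the `AtomicBool` (or a named POSIX mutex) at a known offset in the region, and both programs would have to agree on that layout; the compiler cannot check that agreement for you.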
Separate from soundness there is the question of "practically what could the compiler mistakenly think won't change program semantics but actually does?"
If you're using mmap for shared memory communication, there's no guarantee about how long it takes your process to observe writes from another process. In practice the OS and the CPU aren't going to create artificial delays, but scheduling can change whether your reading process sees a particular change on the next read or not until many reads later. You will already need to have thought about how the communication will need to avoid data races even before Rust's memory model comes into play. You could be using shared memory mutexes or you could be doing something lockless (e.g. depending on aligned writes being de facto atomic on your target architecture), but you have to have a scheme.
So say my reading process and my writing process both mmap the same region and both get a &mut to the struct stored inside. AFAICT the fear is something like: the reader reads a field, memcpys data from elsewhere in the shared region, then reads the field again and compares the two reads. Rustc/LLVM may decide to optimize away the comparison on the assumption that the field could not have changed in between.
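A sketch of that double-read pattern (struct and field names are invented). Within one process this function is trivially correct, and that is exactly the problem: given an ordinary `&Header`, the compiler is entitled to assume `version` cannot change across the copy and fold the comparison to `true`.

```rust
// Invented layout standing in for a struct at the start of a shared mapping.
struct Header {
    version: u64,
}

fn copy_if_unchanged(header: &Header, payload: &[u8], out: &mut Vec<u8>) -> bool {
    let v1 = header.version;        // first read of the field
    out.extend_from_slice(payload); // the memcpy from elsewhere in the region
    let v2 = header.version;        // second read: the compiler may elide it
    v1 == v2                        // ...and constant-fold this to `true`
}

fn main() {
    let h = Header { version: 7 };
    let mut out = Vec::new();
    let ok = copy_if_unchanged(&h, b"payload", &mut out);
    println!("{} {}", ok, out.len()); // true 7
}
```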
What is the implication of this?
If this is a "mutation count" field that the reader is checking to make sure the writer didn't update the other memory while it was being memcpy'd, this could break your synchronization scheme.
Theoretically the compiler could give you a stale value indefinitely. In practice this is only going to happen in small loops because registers are scarce and the optimizer wants to aggressively reuse them.
What would trick the compiler?
Cell would not fix this -- the compiler can definitely still determine you didn't use a reference to change the field in-between the reads assuming everything is in one function or inlined.
In practice an Atomic field is going to prevent current compilers from assuming both reads must compare the same because current compilers don't have the global analysis pass that would be necessary to determine nothing else in the program ever writes to the atomic in another thread.
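Here is the same mutation-count check rewritten with an atomic field (layout still invented). Because `seq` is an `AtomicU64`, the compiler will not assume the two loads must return the same value, so the comparison survives optimization. A real seqlock would additionally mark writes in progress (e.g. odd counter values); that part is omitted from this sketch.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Invented layout: a version counter guarding a small data area.
struct Region {
    seq: AtomicU64,
    data: [u8; 8],
}

fn try_snapshot(r: &Region) -> Option<[u8; 8]> {
    let before = r.seq.load(Ordering::Acquire);
    let copy = r.data; // the guarded memcpy
    let after = r.seq.load(Ordering::Acquire);
    if before == after { Some(copy) } else { None }
}

fn main() {
    let r = Region { seq: AtomicU64::new(1), data: *b"snapshot" };
    // Single-threaded here, so the snapshot trivially succeeds.
    println!("{:?}", try_snapshot(&r));
}
```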
But only a volatile read really conveys the fact that it's expected the memory could be mutated by something entirely outside the process. But in Rust that causes a copy because Rust doesn't have a concept of volatile pointers/memory. For single field volatile reads this doesn't matter, but it could matter for a big memcpy. But if you do a volatile read in a tight loop, I bet LLVM turns it into a volatile memcpy for you. I also believe there is a volatile memcpy RFC.
If you really want to break the memory model, map the same region twice in the same process. AFAIK Cell, Atomic and volatile all don't technically address this case. But again volatile will almost certainly work in practice.
Could you clarify that statement? Rust has read_volatile and write_volatile methods on raw pointers, so it does have a concept of volatile pointers/memory. They are explicit methods instead of an attribute (or "pointer type") like you have in C. These operations do not permit certain optimizations such as eliding reads or writes, as one would expect. 
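For reference, a tiny single-process demo of those methods. In real use the pointer would come from an mmap'd region rather than a local; the point is that volatile reads and writes are performed exactly as written and are not merged or elided by the optimizer.

```rust
use std::ptr;

fn main() {
    let mut slot: u64 = 0;
    let p: *mut u64 = &mut slot;
    unsafe {
        ptr::write_volatile(p, 42);     // this store really happens
        let v1 = ptr::read_volatile(p); // this read really happens
        let v2 = ptr::read_volatile(p); // and so does this one, separately
        println!("{} {}", v1, v2);      // prints: 42 42
    }
}
```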
The most recent conclusions I've seen on acquiring &T from mmap is that it just should not be done if another process may change the data from under you (e.g., because it is a shared mapping). Which implies that acquiring &mut T from mmap under the same circumstances is even worse; we're supposed to have unique access to this memory, but some other process also has access? Oh dear.
I was curious about whether the optimizer handles volatile copies as well as non-volatile copies, as you alluded to (see the Compiler Explorer link). In short, no, it doesn't. The loop with non-volatile accesses can copy blocks of 512 bytes, reading/writing 32 bytes at a time (and with ILP, it looks like it may copy up to 128 bytes per cycle!). The function with volatile accesses only copies blocks of 64 bytes, reading/writing 8 bytes at a time (without much help from ILP... unless there are shadow registers for RAX, that would be cool! Traditionally those sequential reads/writes would stall the pipeline; TBH, I don't know exactly how it works on modern architectures).
I agree with the first sentence, but I don't think you need mmap; it seems to me it would add complexity, or at least not reduce it. It does allow multiple processes (rather than just multiple tasks) to concurrently and directly access the data, if you really need that (I think that is best avoided for various reasons).
Edit: I just scan-read this paper (via a post above), and it seems to confirm my doubts.
Yes, probably, but this doesn't change anything. Someone has taken a crowbar and jammed it into an engine running at full speed.
It would either survive that (if you are lucky) or break (if you are not), but without knowing how that crowbar was used it's hard to predict anything, and even harder to protect your engine against it.
They don't have to. Even if you do that, the "this is just another thread" model would still save you.
Sure, in that case the threads are not native threads but green threads, and you may even intermix code from green thread A (which accesses the first region) and green thread B (which accesses the second), but it's still the same old, tried and tested Rust story: two threads share memory and need some mechanism to ensure it will work.
I don't believe that the other program is required to use mmap to trigger this case: Any write to the mapped file can be reflected immediately in your memory space. This only requires the permissions to access the file in question, which are likely less strict than those of /proc/$pid/mem.