My question now.
I’m wondering: since Rust promises safe memory-usage patterns through its ownership model (I don’t remember the exact official name), and since the Intel bug seems, as far as I understand, to come from shared memory, would the bug have happened in Linux if Linux had been written entirely in Rust (with no unsafe Rust, at least in the parts relevant to this bug)?
The new attacks are side-channel timing attacks that target the speculative-execution components of modern CPUs. Because they are side-channel attacks (i.e., from outside the program flow), they are independent of the language in which the host operating system is written. One would have to control the attacker’s coding tools (which is always impossible) to prevent these specific attacks against existing hardware flaws.
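To make the “independent of the language” point concrete, here is a minimal sketch of the measurement side of a timing channel. It is not an exploit, and the buffer and function name are invented for illustration; it just shows that an attacker needs nothing from the victim’s language or runtime, only a clock and memory accesses of its own.

```rust
use std::time::Instant;

// Measure how long it takes to read each slot of a buffer.
// In a real cache side channel (e.g. Flush+Reload), the slot that
// loads fastest reveals which cache line the victim touched;
// no cooperation from the victim program is required.
fn time_reads(buf: &[u8]) -> Vec<u128> {
    buf.iter()
        .map(|slot| {
            let start = Instant::now();
            // Volatile read so the access is not optimized away.
            let _ = unsafe { std::ptr::read_volatile(slot) };
            start.elapsed().as_nanos()
        })
        .collect()
}

fn main() {
    let buf = vec![0u8; 8];
    let timings = time_reads(&buf);
    // Real attack logic would compare these timings against a threshold;
    // here we only demonstrate the measurement itself.
    println!("{:?}", timings);
    assert_eq!(timings.len(), 8);
}
```

Nothing in the victim’s type system or borrow checker can forbid this: the attacker’s measurements happen entirely in the attacker’s own code.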
Depending on the processor family, there may be software and/or firmware mitigations that the operating system can employ. For example, ARM has developed such mitigations ([https://developer.arm.com/support/security-update]) for those of its processors that it considers at risk.
LLVM has patches to limit Spectre, so I assume Rust will get those for free once we upgrade to the relevant LLVM release.
I’m curious, though, how much those patches will impact Rust’s performance.
When using these patches on statically linked applications, especially C++ applications, you should expect to see a much more dramatic performance hit. For microbenchmarks that are switch-, indirect-, or virtual-call-heavy, we have seen overheads ranging from 10% to 50%. However, real-world workloads exhibit substantially lower performance impact. Notably, techniques such as PGO and ThinLTO dramatically reduce the impact of hot indirect calls (by speculatively promoting them to direct calls) and allow optimized search trees to be used to lower switches. If you need to deploy these techniques in C++ applications, we strongly recommend that you ensure all hot call targets are statically linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well-tuned servers using all of these techniques saw 5%–10% overhead from the use of retpoline.
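The “speculatively promoting them to direct calls” technique quoted above can be sketched in Rust. The function names here are invented for illustration, and in practice the compiler (guided by PGO), not the programmer, performs this transformation:

```rust
fn double(x: u32) -> u32 { x * 2 }
fn triple(x: u32) -> u32 { x * 3 }

// The plain indirect call: always goes through the function pointer,
// which is exactly the kind of call retpoline makes expensive.
fn call_indirect(f: fn(u32) -> u32, x: u32) -> u32 {
    f(x)
}

// What promotion does conceptually: profiling shows `double` is the
// common target, so we guard on it and take a direct call (which the
// compiler can inline) on the hot path, keeping the indirect call
// only as a rare fallback.
fn call_promoted(f: fn(u32) -> u32, x: u32) -> u32 {
    if f == double as fn(u32) -> u32 {
        double(x) // direct call: no retpoline, inlinable
    } else {
        f(x) // rare fallback stays indirect
    }
}

fn main() {
    assert_eq!(call_indirect(triple, 7), 21);
    assert_eq!(call_promoted(double, 7), 14);
    assert_eq!(call_promoted(triple, 7), 21);
}
```

The same idea applies to trait-object calls and large `match` statements, which is why the quoted numbers single out switch- and virtual-call-heavy code.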
Let’s be careful about spreading “Rust could have prevented this” myths for every major vulnerability. It doesn’t improve our credibility, and the point lands hard enough when we make it only where it obviously applies.
In this case, Redox is just as vulnerable if it uses the intended (and up to now preferable) way to protect kernel memory from unprivileged processes, and it would need to implement just the same mitigation.
Sorry if my question induced this; that was really not the point. My conclusion is that the culprit is, as with all software, complexity. Intel CPUs became so complex that the complexity is beating them. As the title of this paper says, hardware is the new software.
That doesn’t account for the other CPU vendors with similar vulnerabilities. The underlying cause here is that none of these vendors employed design reviewers whose work history was one of developing ways to bypass security measures, such as select former employees of NSA’s TAO group. There are many ways to use information side channels to attack hardware.
Sorry, your sentence is a bit long and I have trouble understanding it (I’m not a native English speaker). I understood it as saying that design reviewers’ work is to bypass security measures, which doesn’t make sense. Could you rephrase your sentence, please?
When reviewing a proposed design, it’s helpful to have people with experience in breaking security protections look over the design to see if there are ways to exploit it. People focussed on speed or features are often not so great at noticing security weaknesses.
@TomP is suggesting that CPU manufacturers have not brought security researchers into their design processes effectively.
As stated, the problem sits below the language level, in the hardware. So using Rust wouldn’t have helped.
But speaking with a friend (he’s a kernel developer), there were some wild ideas (mostly beer-style talk, not fully formed ideas) that it might be possible to flag certain areas of code as “OK to speculate” and “not OK to speculate” via special compiler intrinsics, which would control both compiler and CPU optimisations, just as special atomic types disable some optimisations around memory synchronisation, reordering, etc. Mostly because there’s not much the CPU manufacturers can do about it (short of disabling the optimisations, which would make CPUs slow), they’re likely going to document it as a feature and say it’s the compiler’s/programmer’s job to handle it correctly.
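One concrete shape such a “not OK to speculate” annotation could take already exists in branch-free form: the index-masking trick the Linux kernel uses (`array_index_nospec`). The Rust port below is my own sketch, not a std API, and a real mitigation would need compiler support to guarantee the pattern stays branch-free; it only illustrates the idea of neutering a mispredicted bounds check with pure arithmetic:

```rust
// Returns an all-ones mask when index < size, and an all-zeros mask
// otherwise, using only arithmetic (no branch the CPU could
// speculate past). Mirrors the kernel's array_index_mask_nospec.
fn array_index_mask_nospec(index: usize, size: usize) -> usize {
    ((!(index | size.wrapping_sub(index).wrapping_sub(1)) as isize)
        >> (usize::BITS - 1)) as usize
}

// Clamp an index so that, even under misspeculation, it can never
// form an out-of-bounds address: in-bounds indices pass through,
// out-of-bounds indices collapse to 0. The caller still performs the
// real bounds check; the mask only defangs speculation.
fn index_nospec(index: usize, len: usize) -> usize {
    index & array_index_mask_nospec(index, len)
}

fn main() {
    assert_eq!(index_nospec(3, 8), 3); // in bounds: unchanged
    assert_eq!(index_nospec(9, 8), 0); // out of bounds: clamped to 0
    assert_eq!(index_nospec(8, 8), 0); // boundary counts as out of bounds
}
```

An intrinsic like the one discussed above would essentially let the compiler insert this (or a fence) automatically in the regions flagged as sensitive.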
My point there was that Rust’s type system might provide much more information to the compiler than C’s, so maybe such annotations wouldn’t have to be applied manually. Or maybe not in Rust as it is today, but the general ideas might be extended into something similar to how the thread-safety guarantees work.
Sure, nothing concrete, but I believe if such needs arise, it could be solvable.
It is possible to design hardware for existing ISAs that supports more thread parallelism, thereby reducing or eliminating the need for speculative execution and deep cache hierarchies. I gave one example of such an approach in a related thread in the internals forum. Rust’s ease of creating and managing lightweight threads makes it an appropriate language for such architectures.
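As a small illustration of the programming model being alluded to (OS threads here stand in for the hardware thread slots of a barrel processor, so this is only a sketch of the style, not of such an architecture): Rust’s ownership rules make it easy to fan work out across many threads without data races, since each spawned closure takes ownership of its chunk.

```rust
use std::thread;

// Fan a computation out across one thread per chunk, then join and
// combine the partial results. Ownership of each chunk moves into
// its thread, so no locking or sharing is needed.
fn parallel_sum(chunks: Vec<Vec<u64>>) -> u64 {
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|chunk| thread::spawn(move || chunk.iter().sum::<u64>()))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let chunks = vec![vec![1, 2, 3], vec![4, 5], vec![6]];
    assert_eq!(parallel_sum(chunks), 21);
}
```

On a highly thread-parallel design, stalls in one thread are hidden by running the others, which is what reduces the pressure for speculative execution in the first place.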
I queried a colleague about my above-cited post. He has decades of experience in devising side-channel attacks as well as in designing systems that are more difficult to attack via side-channels. Among other things, he’s designed a number of FAA-certified flight-critical subsystems that are found on Boeing and Airbus commercial aircraft.
He’s the one who raised the suggestion of barrel processors. His reply, which he permitted me to post, was:
These bugs can be exploited by rogue code, which would surely mark itself “okay to speculate”.
This is what I argued too. But supposedly, the situation is like this:
Meltdown is fixable.
But let’s see what the CPU manufacturers and kernel developers come up with first, designing languages to specially cope with this is a bit premature :-).
I personally do not have faith that every developer of a library routine that I might use will voluntarily give up performance in favor of security. I imagine that most developers would mark everything “OK to speculate” during development, and many would not revisit every point in their code where a security analysis was needed before they release the software. At least in my experience, schedule and performance trump safety and security in most non-life-critical code.
Re barrel processors, in case it’s not obvious, a typical hyperthreaded CPU is a 2-slot barrel processor.