Intel CPU bug: what about Rust?


Aren’t barrel processors supposed to switch context on every instruction, whereas Intel’s hyperthreads switch whenever the implementation decides it is a good idea (cache miss, pipeline bubble, etc.)?


I personally do not have faith that every developer of a library routine that I might use will voluntarily give up performance in favor of security.

Similarly, many people don’t have faith in programmers to do cryptography correctly or to handle thread synchronisation correctly. That’s what I’m trying to say ‒ if some hypothetical language (probably inspired by Rust in some ways) could deduce that for you correctly and let you write `leak-unsafe { }` blocks for the remaining cases where you’re the expert, this might work. Dumping it on the head of a C developer wouldn’t.
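To make that idea concrete, here is a minimal Rust sketch of the kind of pattern such a hypothetical compiler could flag (Rust today performs no such check; the analysis described is entirely hypothetical). An early-exit comparison leaks the position of the first mismatch through timing, while a constant-time variant does not:

```rust
// What a leak-aware compiler might reject: the loop exits at the first
// mismatch, so the running time depends on secret contents.
fn leaky_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for (x, y) in a.iter().zip(b) {
        if x != y {
            return false; // timing reveals where the mismatch occurred
        }
    }
    true
}

// The constant-time replacement: always scan the whole slice and
// accumulate the difference bits, so timing is independent of the data.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b) {
        diff |= x ^ y;
    }
    diff == 0
}
```

In such a language, `leaky_eq` on secret data would be a compile error unless wrapped in the hypothetical `leak-unsafe { }` block.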

But most applications don’t run foreign code. Just the browser, the kernel, a word processor with macros, …

I’m a bit wary of saying that barrel processors will help. For one, with N times more threads (with a 50-cycle pipeline and 3 instructions executed per cycle, N is 150), the cache now feels much smaller. Also, most software today is written for fast single-thread execution, and changing that is a bigger deal than it sounds. It’s not easy (for some tasks, impossible) to make reasonable use of 2400 threads.
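The back-of-envelope numbers behind that can be sketched as follows (the 32 KiB L1 cache size is my assumption for illustration, not from the post):

```rust
// A barrel processor that hides pipeline latency by round-robin issuing
// from other threads needs roughly depth * issue-width hardware threads.
fn threads_needed(pipeline_depth: usize, ipc: usize) -> usize {
    pipeline_depth * ipc
}

// If all those threads share one L1 cache, each one's effective share
// shrinks to a sliver.
fn cache_per_thread(cache_bytes: usize, threads: usize) -> usize {
    cache_bytes / threads
}
```

With the figures from the post, `threads_needed(50, 3)` gives 150, and an assumed 32 KiB L1 shared among them leaves `cache_per_thread(32 * 1024, 150)` = 218 bytes per thread, which is why the cache "feels much smaller."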


(emphasis mine)

So only:

  • Browser: the most commonly used application type worldwide
  • Kernel: an essential element of any computer above “toaster intelligence”
  • Word processor: software installed on almost every private PC and virtually 100% of business PCs

The only remaining ubiquitous vector that habitually executes untrusted code that I can think of is “video/audio codecs”…

So even if that isn’t “most programs” by count, it still means that effectively every computer on the planet has at least one of them, and probably all of them, available as attack surface… that doesn’t seem to narrow it down enough to warrant writing “just”…

I’ve said it before, when talking about security, there rarely is a “just”.
The internet genie is out of the bottle, and with it untrusted executables everywhere…
JavaScript coin miners in advertisements, adware-bundled SourceForge downloads, keygens for games, “games” from app stores, malware posing as games from app stores, virus-infected clones of popular chat apps, “your_amazon_receipt.pdf.exe” email attachments…

The list of vectors is practically infinite, and like use-after-free in C, a single oversight can bring down the entire house of cards…

I don’t mean to be pessimistic, but against that background, I don’t feel that any new, glued-on processor feature, such as opt-out speculation, is going to work. No amount of duct-tape patching is going to turn a sieve into a steel bucket… The redesign has to be more fundamental.
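As an example of the kind of glued-on patching meant here, this is a sketch of the branch-free index clamp that software resorts to today, in the spirit of the Linux kernel’s `array_index_nospec` (simplified; the real kernel version uses assembly precisely because a compiler is free to turn this comparison back into a predictable branch):

```rust
// Clamp an index without a conditional branch: the mask is usize::MAX
// when index < len and 0 otherwise, so even a mispredicted bounds check
// cannot be steered into a speculative out-of-bounds load.
fn index_nospec(index: usize, len: usize) -> usize {
    let mask = ((index < len) as usize).wrapping_neg();
    index & mask
}

// Usage sketch: out-of-range indices collapse to 0 instead of reaching
// past the table. (A real caller still does its normal bounds check;
// the clamp only guards the speculative path. Assumes a non-empty table.)
fn load_nospec(table: &[u8], index: usize) -> u8 {
    table[index_nospec(index, table.len())]
}
```

Each such site has to be found and patched by hand, which is exactly the sieve-patching problem described above.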


Well, that was actually my point. There are few of them by count (or by lines of code), but they cover something like 95% of the attack surface. Also, their core teams are usually quite security-aware. Only a few people need to care and point out the errors in merge requests. Most programmers neither have to care nor need the knowledge to do it right.

I’m not saying we are saved or anything. I was just thinking about a specific fix for a specific problem. It can be done wrong, but my point is that it at least can be done.

The redesign needs to happen on a completely different level. We have insecurity everywhere because it’s economically infeasible to build secure software. It doesn’t give a competitive advantage big enough to warrant the effort.


Are you actually in a position to know the resume of everyone employed in security at every CPU vendor? If you’re not, please don’t present speculation as fact, it’s unhelpful.

(edit: apparently you are, so that’s great, I’m glad you’re here.)

My expectation would be that they actually do employ people with relevant security expertise, but they didn’t join the dots to identify this as exploitable, or as a sufficient risk to give up the performance gains. It’s not like specialists never miss things.


I generalized from my years of direct work experience in the subject area, developing such attacks, developing anticipatory countermeasures, and doing bleeding-edge gated-clock VLSI chip design. Of course I’m under NDAs about that work. Developing side-channel attacks is a rather arcane field.


If you look at the paragraph from my post that you quoted, you’ll note that my comment was about a specific class of security reviewers who usually work for the intelligence communities of various governments. It was not about security researchers in general. From my own experience, and in the opinion of some of my former colleagues in that area whom I consulted before posting, it is clear to me/us that people with those skills did not review the branch prediction logic in any of these architectures.

I have no desire to start a flame war. Thank you for your advice. I’ll weigh it against my 49 years of experience designing ISAs and computers.