Rust CVEs - Should I worry?

It seems most bugs are actually fixed now. That's rather a good signal: we're actively fixing soundness bugs.

You can also find dozens of soundness bugs if you search for "jvm". Does it mean "Java does not help with security, and Java can't do anything without some C code underneath, therefore Java's safety guarantees are a useless waste of runtime resources that keeps the CPU busy doing worthless tasks, consumes electricity, and emits carbon"? I don't think so.

19 Likes

“The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”

The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view….”

43 Likes

Do you mean like "zetetic" - proceeding by inquiry ?

Great, I learned a new word today.

Is there a word for one whose world is in glorious high resolution colour and three dimensional?

All in all I'd prefer my dangerous tools to have safety guards and interlocks. The fact that I can remove them or they sometimes fail does not mean they are a waste of time.

1 Like

Do note that often a CVE in Rust is "carefully crafted safe code can trigger UB due to incorrect unsafe code", which in C/C++ would just be considered an invariant the caller has to guarantee, so no problem, right?

By the way, is UB in a C/C++ library even considered a CVE? Rust has RustSec, which does that job and takes it pretty seriously. I think there might also be a misrepresentation here.

Undeniably, writing unsafe Rust code is harder than writing normal C/C++ code, because it has to handle everything that safe Rust code can do, with only limited invariants. On the other hand, it's easier to find unsound code, because it must be in an unsafe block, so you can just audit those, while in C/C++ you have to either scan the entire program or wait for the UB to happen.
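The audit point can be made concrete with a sketch (a hypothetical function, not from any real crate): an unsound API lets 100% safe callers trigger UB, but grepping for `unsafe` takes an auditor straight to the suspect code.

```rust
// Hypothetical unsound API: the function signature looks safe, but a safe
// caller can trigger UB — in Rust this counts as the library's bug, and an
// auditor only needs to inspect the `unsafe` block to spot it.
fn first_unchecked(v: &[i32]) -> i32 {
    // BUG: no length check, so this is UB when `v` is empty.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    // Fine here — but `first_unchecked(&[])` would be UB from entirely
    // safe code, which is exactly the kind of thing RustSec files CVEs for.
    println!("{}", first_unchecked(&[10, 20, 30]));
}
```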

19 Likes

cc @Shnatsel @bascule

Is there a vulnerability that affects you? Then yes. Should you stop using Rust? No.

I typically have a daily CI build that runs cargo deny/cargo audit against the most recent https://rustsec.org/ database, and I deal with reports individually. So far all problems have been fixed by crate authors rather quickly; the only problem has been that not all fixes have been back-ported, so when it's a dependency of a dependency, the update sometimes doesn't get released for a couple of months if the dependency is a large project.

3 Likes

Yes, I think it makes sense to worry about CVEs, especially if you (transitively) depend on a lot of less-well-tested crates. Even well-tested crates like smallvec or std get soundness bugs sometimes. A practical tool here is cargo-audit: GitHub - RustSec/rustsec: Audit Cargo.lock files for dependencies with security vulnerabilities.
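A minimal CI fragment for this might look like the following (a sketch, assuming cargo-audit is installed from crates.io; exact flags depend on your setup):

```shell
# One-time setup: install the auditing tool.
cargo install cargo-audit --locked

# Scan Cargo.lock against the RustSec advisory database;
# exits non-zero if any vulnerable dependency is found.
cargo audit
```

Running this as a scheduled job (rather than only on pushes) catches advisories published after your last commit.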

It also makes sense to worry about the culture of using unsafe. Rust's first-class support for user-defined unsafe code is probably its biggest technical achievement (the lifetime system being just a way to make this system expressive enough). However, the practical value we get out of unsafe depends not only on the language-level mechanisms, but also on the social practice of using unsafe the right way:

  • when there's no acceptable safe alternative
  • with understanding of the safe/unsafe boundary (unsafe trait vs unsafe method, etc)
  • with understanding of the Rust memory model (uninitialized::<u8>() is not just some random byte)
  • with appropriate testing (miri)
  • with clear communication (this crate uses unsafe, please audit)
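The boundary point can be illustrated with a small sketch (hypothetical functions, not from any particular crate): an `unsafe fn` documents a contract that callers must uphold, while a safe wrapper must discharge that contract internally so no safe caller can cause UB.

```rust
/// Returns the element at `idx` without a bounds check.
///
/// # Safety
/// The caller must guarantee that `idx < slice.len()`.
unsafe fn at_unchecked(slice: &[u8], idx: usize) -> u8 {
    // SAFETY: the caller upholds the contract documented above.
    unsafe { *slice.get_unchecked(idx) }
}

/// Safe wrapper: discharges the contract itself, so safe callers
/// cannot trigger UB no matter what arguments they pass.
fn at(slice: &[u8], idx: usize) -> Option<u8> {
    if idx < slice.len() {
        // SAFETY: we just checked that `idx` is in bounds.
        Some(unsafe { at_unchecked(slice, idx) })
    } else {
        None
    }
}

fn main() {
    let data = [7u8, 8, 9];
    assert_eq!(at(&data, 1), Some(8));
    assert_eq!(at(&data, 5), None); // out of bounds: rejected, not UB
    println!("ok");
}
```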

So far, we've been doing OK-ish in this respect. But, as more and more people join Rust, we can never really finish this work -- new crate authors should be educated about the nuances of unsafe without burning them out (this is where we are not so great). That's hard.

For CVEs specifically, it is also important to note that Rust gets CVEs for unsound APIs, which I believe generally doesn't happen for C or C++. That is, if you can call a function in such a way that it'll do a double free, you immediately get a CVE in Rust. In C++, you get a CVE only if someone actually calls the function the wrong way. For example, there's the int isalnum(int c) function in C's standard library. Calling it with an int that isn't representable as an unsigned char (and isn't EOF) is UB (example). That would be a CVE in Rust. In C, that's a documented API, and a CVE can only be created for the caller of this function.
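The double-free case can be sketched like this (hypothetical code, not taken from any real CVE): the method below is "safe" to call, yet calling it twice frees the same allocation twice. In Rust that unsoundness alone warrants a CVE; in C it would just be a documented caller obligation.

```rust
// Hypothetical unsound API: a safe method that frees a raw pointer.
struct Holder {
    ptr: *mut i32,
}

impl Holder {
    fn new(value: i32) -> Holder {
        Holder { ptr: Box::into_raw(Box::new(value)) }
    }

    fn get(&self) -> i32 {
        unsafe { *self.ptr }
    }

    // UNSOUND: nothing stops safe code from calling this twice,
    // which would be a double free. CVE-worthy in Rust; in C this
    // would be "documented behavior" and the caller's fault.
    fn release(&mut self) {
        unsafe { drop(Box::from_raw(self.ptr)) }
    }
}

fn main() {
    let mut h = Holder::new(92);
    println!("{}", h.get());
    h.release();
    // h.release(); // second call would be a double free: UB from safe code
}
```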


How to talk about safety is a difficult question. I guess, it's best to explain what you know, and why you know it?

Here's a list of things I know about the topic:

Safety wise, Rust competes with Ada, C, C++, Zig. Managed languages like Java, Go, JavaScript are memory safe enough already.

There are two approaches to memory safety: guaranteeing the absence of UB or catching some fraction of UB. Popular C++ mitigations are of the second kind, while Rust does the first kind. For example, the revised C++ core lifetimes proposal aims at "diagnosing many common errors", not at "preventing all errors".

It is possible to guarantee the absence of UB in C, but that is costly. You either need to formally prove the absence of UB (à la writing Haskell code that generates correct C) or exhaustively check every branch in the binary (the SQLite approach). Notably, it's too costly -- SQLite doesn't subject plugins to such rigorous testing, and they had vulns in plugins.

C++-style prevention (ASAN & friends) empirically is not enough for Google, Apple, Microsoft, and Mozilla, which independently report that between 50 and 80% of vulns are due to memory unsafety. I think it's possible to do better, but, on average, code is probably not better than Google's code.

I don't know of strong empirical evidence that Rust does prevent UB in practice, but, because of the nature of Rust's approach (prevention & guarantee rather than spot-checking), I would be surprised to see evidence to the contrary.

It's true that, to a first approximation, unsafe Rust is as unsafe as C++. It's easy to measure empirically that the fraction of unsafe code is tiny. We have formal proofs that unsafe boundaries are possible, so it seems unlikely that a sliver of unsafe contaminates all the code.

Here's a list of things I don't know about the topic:

What is memory safety, actually? I can wave hands about the absence of bad behaviors, but that's not a definition. I can say, exactly, that memory safety is type safety, waving the brick-wall book furiously and mumbling something about progress, preservation, and not getting stuck, but I don't think that's a super-useful definition for non-academic discourse.

Is Ada as good as Rust? Ada definitely has more safety features than C++, and it seems that the non-allocating version of Ada is memory safe. What about heap allocations? What about this example:

let mut opt = Some(92);
let r: &i32 = opt.as_ref().unwrap();
opt = None;
println!("{}", *r);

Is spatial memory safety enough? It's much easier to bounds-check than to verify RAII, but buffer overruns are by far the biggest problem. Can it be that Zig, while not guaranteeing memory safety, makes UB rare enough to be not that important?

23 Likes

What is this "revised core lifetimes proposal" of which you speak? You worry me with that. At first sight it sounds like somebody wants to remove some of Rust's memory use checking. Which sounds like something I would rather not see.

I read, a year ago or so, that the Ada folks were working on adopting Rust-style lifetime/borrow checking into their language. I have no idea how far along that got.

What is the problem with that? It does not compile:

error[E0506]: cannot assign to `opt` because it is borrowed
  --> src/main.rs:53:5
   |
52 |     let r: &i32 = opt.as_ref().unwrap();
   |                   --- borrow of `opt` occurs here
53 |     opt = None;
   |     ^^^^^^^^^^ assignment to borrowed `opt` occurs here
54 |     println!("{}", *r);
   |                    -- borrow later used here

Looks all well and good to me.

Sorry, I was confusing; I've revised & linkified the wording.

What is the problem with that? It does not compile:

It's an example (by @pcwalton) where Rust's lifetime analysis is needed even for the heapless subset of the language. I don't know how Ada solves this problem.

3 Likes

When talking about specifically Rust bugs (in Rust itself, not in 3rd party code merely written in Rust)

  • These bugs are mainly in Rust's standard library. Should you stop using libstd? Probably not, because there's no guarantee that you would write it any better. If you wrote such code yourself, these could be your bugs, not "Rust" bugs. libstd at least gets many eyes on it looking for bugs.

  • These bugs are in unsafe code, and are considered bugs because Rust promises a very high level of safety. If you switched to C or C++, then you wouldn't have these safety guarantees to break in the first place. There, bugs like "may be unsafe if you pass incorrect arguments" don't count as the language's bugs, but merely as "you're a bad programmer, you should have checked your inputs, and it's your fault" bugs.

  • If your alternative would be to switch to Java, Node, or Python, then check out CVEs of their implementations too. They have "unsafe" code in their VMs too. Theoretically golang is safe and bootstrapped in its own safe language, but it had a few CVEs too.

It's not great that some safety bugs slip through, but I don't think there's much that can be done about it — it's not clear that there's an alternative solution that can guarantee it won't have such bugs.

So in practical terms you can "worry" by adding defense in depth. Have tests and run fuzzers on your program. Still sanitize your program's inputs, and have assertions for important invariants. Run it with minimal privileges, sandboxed if possible.
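The "assertions for important invariants" advice can be sketched in a few lines (a hypothetical `withdraw` function, purely for illustration): validate inputs up front and assert the invariant afterwards, so logic bugs fail loudly instead of silently corrupting state.

```rust
// Defense in depth in safe Rust: sanitize the input, then assert the
// invariant the function is supposed to maintain.
fn withdraw(balance: &mut u64, amount: u64) -> Result<(), String> {
    let old = *balance;
    // Sanitize input rather than trusting the caller.
    if amount > old {
        return Err(format!("insufficient funds: {amount} > {old}"));
    }
    *balance -= amount;
    // Invariant check: the balance went down by exactly `amount`.
    // debug_assert! costs nothing in release builds but catches
    // logic errors during testing and fuzzing.
    debug_assert_eq!(*balance, old - amount);
    Ok(())
}

fn main() {
    let mut balance = 100u64;
    assert!(withdraw(&mut balance, 30).is_ok());
    assert_eq!(balance, 70);
    assert!(withdraw(&mut balance, 1000).is_err()); // rejected, state intact
    assert_eq!(balance, 70);
    println!("ok");
}
```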

11 Likes

There are bugs in code and applications written in Rust:

  • The CVE list you've quoted is CVEs for all programs tagged as Rust-related (e.g. just written in Rust, not bugs caused by the Rust language). That unfairly singles Rust out. The same list for C is empty, but definitely not because nobody has written a bug using the C language :slight_smile:

  • These CVEs have a varying level of severity. Some are just for panics. Some are application-level incorrectness. Some require very specific incorrect usage of a library to cause unsafety (which in Rust still counts as a bug, but wouldn't in other languages that don't promise this safety).

  • Rust can't prevent all bugs. It's trying, and helps with a lot of them, but perfection is not realistic.

  • Still, there's a bunch of genuine use-after-free bugs in unsafe code.

Rust (with the safe subset) exists, because we as an industry haven't figured out yet how to write such code safely. Rust still limits unsafe in scope, supports Miri with UB detection, and supports LLVM Sanitizers, Valgrind, fuzzers, etc. We have crev for code reviews.

Can we do better? Could we have static analyzers that look for panic-safety issues? Maybe compiler warnings for calling Deref and other implicit functions inside unsafe blocks? (potentially-evil implementations of these have been a source of theoretical vulnerabilities). Could we have debug modes with more runtime assertions? (e.g. strings that re-check their UTF-8 guarantee)
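The "implicit calls inside unsafe blocks" hazard can be sketched like this (a contrived example; the `Evil` type is hypothetical): an innocent-looking method call silently runs a user-defined Deref impl, which an auditor of the surrounding unsafe code can easily overlook.

```rust
use std::ops::Deref;

// Hypothetical wrapper with a user-defined Deref impl.
struct Evil(Vec<u8>);

impl Deref for Evil {
    type Target = Vec<u8>;
    fn deref(&self) -> &Vec<u8> {
        // A buggy or malicious impl could run arbitrary code here —
        // including code that invalidates invariants an enclosing
        // unsafe block is relying on.
        &self.0
    }
}

fn main() {
    let e = Evil(vec![1, 2, 3]);
    // `e.len()` implicitly calls Evil::deref via auto-deref; there is no
    // visible marker at the call site, which is what makes this hard to
    // audit inside unsafe blocks.
    println!("{}", e.len());
}
```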

9 Likes

TL;DR: is Rust perfect? No. Is it dramatically better than the status quo? Yes.

This is a complex topic that deserves its own lengthy article, but I already have a long backlog of things to write. This older post is somewhat relevant:

Another thing is that Rust has a very different culture of CVEs compared to nearly any other language. Any memory corruption is treated as a security issue and gets CVE. This is a good thing, and a rare luxury that we can afford!

For example, in C this kind of detailed reporting is completely infeasible. Memory corruption bugs are so common that filing a CVE for each and every one would completely overwhelm everyone involved. The Linux kernel, for example, gets thousands of memory corruption bugs per year that are largely ignored - not only do they not get a CVE, they are not even fixed! (source). So when someone hears a bug is being used to exploit Linux in the wild, the question is not "what novel bug did they find?" but "which of these hundreds of already known exploitable bugs did they use?" (source). And this is not just theoretical - the vulnerabilities are severely affecting real people. (I have expounded upon this in more detail here, but the bulk of that post is a largely unrelated rant.)

16 Likes

This argument is the same as someone running around a house poking a fork into electrical outlets complaining that plastic covers on the outlets are useless because sometimes they manage to get the fork in anyway.

6 Likes

Here's boats's take on why you shouldn't worry about most of them:
https://without.boats/blog/vulnerabilities/

7 Likes

I don't like that CVE list as it seems to indicate problems which it actually does not. If you see what I mean. See below. But it is what I have had brandished at me by those claiming Rust's safety does not work.

Yes, I soon found that searching for keywords like "C" and "C++" produces no results on that CVE site. Presumably the terms are too short for the search engine to consider. Thus making any comparison with Rust impossible. If such a comparison can be said to make any sense at all.

Bingo. Boats clearly articulates what was niggling me about that CVE list:

I think the habit of applying for CVEs for Rust (and Rust ecosystem libraries) is silly at best and harmful at worst. I think it muddies the waters about what a vulnerability is, and paints an overly negative picture of Rust’s security situation that can only lead people to make inaccurate evaluations when contrasting it with other languages like C/C++.

Anyway, I am totally sold on Rust's type and memory checking. That is what brought me here two years ago and in large part why I'm still here.

Everyone here has nicely described why memory safety is a good thing. The only remaining question in my mind is how to condense all that into a simple reply to those that point to things like that CVE list in support of their inaccurate evaluation. Something that succinctly, clearly and forcefully makes the point.

1 Like

In Rust, those CVEs represent things that get fixed, to ensure all future programs don't have the same flaws.
In C/C++, you get no CVE, no warning, and no fix until someone has been pwned and/or lost a lot of money. Then the next application to come along has to avoid all the same pitfalls all over again.

5 Likes

That CVE list demonstrates that the Rust developers take security seriously, since they're treating as vulnerabilities library bugs that could merely result in vulnerabilities if used in a buggy manner. In just about any other language the attitude would be "read the docs" or "don't do that."

3 Likes

Reminds me of

In C/C++ something is correct when someone can use it correctly, but in Rust something is correct when someone can't use it incorrectly.

18 Likes

Nice suggestions. I love that quote of the week.

<evil thought>
I wonder what would happen if one went through the C and C++ language standards and started raising CVEs against every instance of undefined behaviour or implementation-defined behaviour... After all, they cause a lot of security issues, right?
</evil thought>

11 Likes

I am pretty sure the CVE would probably be raised against C/C++ as a whole, considering that the language itself allows these kinds of things to happen in the first place lol (I know this is an unfair comparison, given that one could probably TECHNICALLY write correct and safe code in it, but you get my point). Granted, Rust allows unsafe too, but at the very least it does its very best to contain these sorts of things. It's a LOT better.

Anyways, this quote from before really drives the point home.

In C/C++ something is correct when someone can use it correctly, but in Rust something is correct when someone can't use it incorrectly.