Yes, I think it makes sense to worry about CVEs, especially if you (transitively) depend on a lot of less-well-tested crates. Even well-tested crates like smallvec, or std itself, get soundness bugs sometimes. A practical tool here is cargo-audit: https://github.com/RustSec/rustsec.
It also makes sense to worry about the culture of using unsafe. Rust's first-class support for user-defined unsafe code is probably its biggest technical achievement (the lifetime system is just a way to make this system expressive enough). However, the practical value we get out of unsafe depends not only on the language-level mechanisms, but on the social practice of using unsafe the right way:
- when there's no acceptable safe alternative
- with an understanding of the safe/unsafe boundary (unsafe trait vs unsafe method, etc.)
- with an understanding of the Rust memory model (uninitialized::<u8>() is not just some random byte)
- with appropriate testing (Miri)
- with clear communication ("this crate uses unsafe, please audit it")
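The memory-model bullet above can be made concrete. A minimal sketch (init_byte is a made-up helper name for illustration):

```rust
use std::mem::MaybeUninit;

fn init_byte() -> u8 {
    // UB even for u8: the deprecated mem::uninitialized() would produce an
    // uninitialized value -- not "just some random byte", but a value the
    // optimizer is allowed to treat as poison.
    // let x: u8 = unsafe { std::mem::uninitialized() };

    // The sound replacement: MaybeUninit defers the "this is initialized"
    // claim until a value has actually been written.
    let mut x = MaybeUninit::<u8>::uninit();
    x.write(92);
    unsafe { x.assume_init() } // OK: written just above
}

fn main() {
    println!("{}", init_byte());
}
```

Running this under Miri is exactly the kind of testing the last bullet asks for: Miri flags the commented-out version, and accepts the MaybeUninit one.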
So far, we've been doing OK-ish in this respect. But, as more and more people join Rust, we can never really finish this work -- new crate authors need to be educated about the nuances of unsafe without burning them out (this is where we are not so great). That's hard.
For CVEs specifically, it is also important to note that Rust gets CVEs for unsound APIs, which I believe generally doesn't happen for C or C++. That is, if it is merely possible to call a function in such a way that it'll do a double free, you immediately get a CVE in Rust. In C++, you get a CVE only if someone actually calls the function the wrong way. For example, there's the int isalnum(int c) function in C's standard library. Calling it with an int whose value is not representable as an unsigned char (and is not EOF) is UB (example). That would be a CVE in Rust. In C, that's a documented API, and a CVE can only be filed against the caller of this function.
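To make "unsound API" concrete, here's a minimal hypothetical Rust sketch (byte_at and byte_at_unchecked are made-up names, not a real crate's API):

```rust
// Unsound: the function is not marked `unsafe`, yet a safe caller can
// trigger UB by passing an out-of-bounds index. In Rust this alone
// warrants an advisory/CVE, even if no caller actually misuses it.
fn byte_at(v: &[u8], i: usize) -> u8 {
    unsafe { *v.get_unchecked(i) } // UB when i >= v.len()
}

/// The sound, C-style alternative moves the obligation into the signature:
/// callers must write `unsafe` and uphold the contract themselves.
///
/// # Safety
/// `i` must be less than `v.len()`.
unsafe fn byte_at_unchecked(v: &[u8], i: usize) -> u8 {
    *v.get_unchecked(i)
}

fn main() {
    // Both are fine for in-bounds calls; the difference is who is blamed
    // (and who gets the CVE) for an out-of-bounds one.
    println!("{}", byte_at(b"abc", 1));
    println!("{}", unsafe { byte_at_unchecked(b"abc", 2) });
}
```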
How to talk about safety is a difficult question. I guess it's best to explain what you know, and why you know it?
Here's a list of things I know about the topic:
There are two approaches to memory safety: guaranteeing the absence of UB, or catching some fraction of UB. Popular C++ mitigations are of the second kind, while Rust is of the first kind. For example, the revised C++ core lifetimes proposal aims at "diagnosing many common errors", not at "preventing all errors".
It is possible to guarantee the absence of UB in C, but that is costly. You either need to formally prove the absence of UB (à la writing Haskell code that generates correct C) or exhaustively test every branch in the binary (the SQLite approach). Notably, even that is too costly in practice -- SQLite doesn't subject its plugins to such rigorous testing, and there have been vulns in plugins.
C++-style mitigation (ASan & friends) is empirically not enough for Google, Apple, Microsoft, and Mozilla, which independently report that between 50 and 80% of their vulns are due to memory unsafety. I think it's possible to do better, but, on average, your code is probably not better than Google's code.
I don't know of strong empirical evidence that Rust does prevent UB in practice but, because of the nature of Rust's approach (prevention & guarantee rather than spot-checking), I would be surprised to see evidence to the contrary.
It's true that, to a first approximation, unsafe Rust is as unsafe as C++. But it's easy to measure empirically that the fraction of unsafe code is tiny, and we have formal proofs that sound unsafe boundaries are possible, so it seems unlikely that a sliver of unsafe contaminates all the code.
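What "a sound unsafe boundary" looks like in code: the classic split_at_mut, re-sketched here for illustration (the standard library provides the real one on slices):

```rust
/// A safe API wrapping unsafe internals. The `unsafe` block is justified
/// because the `assert!` at the boundary establishes the invariant the raw
/// pointer arithmetic relies on: the two halves never overlap and stay in
/// bounds. No safe caller can cause UB through this function.
fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = v.len();
    let ptr = v.as_mut_ptr();
    assert!(mid <= len);
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut xs = [1, 2, 3, 4];
    let (a, b) = split_at_mut(&mut xs, 1);
    a[0] = 10;
    b[0] = 20;
    println!("{xs:?}");
}
```

The unsafe is contained: auditing this one function suffices, which is why a tiny fraction of unsafe doesn't contaminate the rest of the code.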
Here's a list of things I don't know about the topic:
What is memory safety, actually? I can wave hands about the absence of bad behaviors, but that's not a definition. I can say, exactly, that memory safety is type safety, waving the brick-wall book furiously and mumbling something about progress, preservation, and not getting stuck, but I don't think that's a super-useful definition for non-academic discourse.
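For reference, the mumbled slogan unpacks to roughly this pair of theorems, stated for a toy language with a typing judgment and small-step reduction:

```latex
\text{Progress:} \quad \vdash e : \tau \;\Rightarrow\; e \text{ is a value} \;\lor\; \exists e'.\; e \to e'
```
```latex
\text{Preservation:} \quad \vdash e : \tau \;\land\; e \to e' \;\Rightarrow\; \vdash e' : \tau
```

Together they imply a well-typed program never gets stuck in a state where no evaluation rule applies, which is the formal stand-in for "no UB".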
Is Ada as good as Rust? Ada definitely has more safety features than C++, and it seems that the non-allocating subset of Ada is memory safe. What about heap allocations? What about this example:
let mut opt = Some(92);
let r: &i32 = opt.as_ref().unwrap();
opt = None; // in Rust, borrowck rejects this assignment...
println!("{r}"); // ...because `r` still borrows from `opt` here
Is spatial memory safety enough? It's much easier to bounds-check than to verify RAII, and buffer overruns are by far the biggest problem. Can it be that Zig, while not guaranteeing memory safety, makes UB rare enough not to matter that much?