Bug still unresolved since 2015 (cve-rs)?!

Hello,

I found this video by chance; what do you think?

1 Like

It's a well-known issue. That crate made a splash recently for whatever reason, but such a crate is not a new idea.

This comment pretty much sums up the situation.

7 Likes

Thank you for that reassuring comment. :laughing: :pray:

1 Like

I watched the video in case there was something new and interesting on the topic in there.

There wasn't, but here are my notes.

The explanation at about 6 minutes conflates Rust lifetimes ('_) with value scopes. Vec<i32> doesn't have a (Rust) lifetime (the type satisfies Vec<i32>: 'static). The way the borrow checking in the example actually works is that going out of scope is a use of the value, and that use conflicts with it being borrowed.
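
For anyone who wants to see the distinction in code, here's a minimal sketch (standard borrow-check behaviour, nothing specific to the video):

    fn main() {
        let r;
        {
            let v: Vec<i32> = vec![1, 2, 3]; // the *type* Vec<i32> satisfies Vec<i32>: 'static
            r = &v;
            // `v` goes out of scope here; dropping it counts as a use of `v`,
            // and that use conflicts with the outstanding borrow held by `r`.
        }
        println!("{:?}", r); // error[E0597]: `v` does not live long enough
    }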

At about 12:20 they say "it's a constant, of course it will last forever". They're conflating consts and statics. See also.
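
A quick sketch of the difference they blur:

    // A const is an inlined value: each use is conceptually a fresh copy,
    // and by itself it says nothing about anything lasting forever.
    const MAX: i32 = 10;

    // A static is a single memory location that lives for the whole program,
    // so &MAX_STATIC really is a &'static i32.
    static MAX_STATIC: i32 = 10;

    fn main() {
        let a: &'static i32 = &MAX_STATIC; // fine: the static lives forever
        let b: &'static i32 = &MAX;        // also compiles, but only because the constant
                                           // expression gets promoted to an anonymous static
        println!("{a} {b}");
    }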

15:45... uh I wouldn't call anything dark magic yet. Lifetime basics. (But maybe I'm biased.)

The issue exists even without contravariance, by the way. They don't really explain why it breaks. The blog post does; here it is since it moved.
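
For reference, the usual demonstration is essentially the snippet from rust-lang/rust#25860, the issue cve-rs packages up; roughly:

    static UNIT: &'static &'static () = &&();

    // Fine on its own: the argument &'a &'b () carries the implied bound 'b: 'a.
    fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T {
        v
    }

    // The hole: coercing foo to a function pointer forgets the implied bound,
    // so the compiler accepts a signature that launders any lifetime to 'static.
    fn extend<'a, T>(x: &'a T) -> &'static T {
        let f: fn(_, &'a T) -> &'static T = foo;
        f(UNIT, x)
    }

    fn main() {
        let dangling;
        {
            let s = String::from("oops");
            dangling = extend(&s);
        } // s is dropped here...
        println!("{dangling}"); // ...yet this compiles: a use-after-free in safe Rust (UB)
    }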

Anyway, I guess I did all my "out loud" thinking on soundness compiler bugs in general and that one in particular years ago. Is there anything more specific you hoped to discuss?

9 Likes

Thanks for the short summary :pray:

This video made me think about a very important point.

The safe side of Rust can be a trap in itself, encouraging us to trust the compiler and forget that there are many indirect flaws.

For example, we can mention:

  • Compiler bug
  • Scenarios not covered by borrowing rules (we can't foresee everything, we're still human beings)
  • Functional flaws (more business-related)
  • Memory leak/corruption due to incorrect use of "Unsafe"
  • The use of Crates, which internally uses Unsafe code
  • The famous "build.rs" of external crates and their potential security risk
  • Crates that bridge between Rust and C/C++ libraries
  • The use of Macros that add code or modify it (if these Macros inject unsafe code ?!)
  • The famous "Unwrap()" for lazy people, which increases the risk of crash
  • and above all, there is a lot of Unsafe code in the standard Rust library (e.g. std::any::Any).

Since the majority of projects depend on external crates, and each crate depends on other crates, if a single crate embeds unsafe code, all the advantages of Rust are indirectly compromised.

In the end, the safe side is a chimera; I'd say that Rust is the safest of the compiled languages, but not "safe". That's the more reasonable claim. :thinking:

* A programming language is often linked to its ecosystem (libraries, tools, etc.), so if the language allows risky code to be intentionally introduced into libraries created with it, the whole ecosystem automatically becomes corrupted.
We have a responsibility to ensure that the Rust ecosystem is as safe as possible and that crates do not abuse unsafe {}.

I see Rust as an airplane that offers either automatic or manual piloting; when you switch to manual, it becomes C. :laughing:

2 Likes

The way I would describe the situation is that Rust is "practical". In practice, it allows us to write more reliable software that is significantly more likely to do what we want (while still being high-performance) and doesn't fail in mysterious ways that are difficult to debug.

8 Likes

I share your opinion: Rust is the best language at the moment, forcing developers to code well and properly manage memory, errors and concurrent accesses.

But it is reasonable to always keep in mind that zero risk does not exist.

1 Like

I generally agree with your bullet points. Program correctness and vulnerability is a multi-layered beast, and also generally undecidable. Rust never said it would protect you from logic bugs either; nothing can. It doesn't and won't solve everything; nothing can. It still can be much improved, and people are working on it (build sandboxing, say).

That's not an uncommon reaction to finding out that you are, after all, relying on unsafe code.[1] But when wielded well, unsafe is actually a great boon, as it encapsulates the dangerous, UB-causing portions. This allows a safe layer above, where you can write code that compiles down in a C-like manner without having to be extremely careful. I feel Rust has been around long enough now to show that this is, in fact, a great benefit.
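
A tiny, made-up illustration of that encapsulation (the function is hypothetical; the point is that the unsafe block's precondition is discharged right where it's used, so callers only ever see a safe API):

    /// First element of a slice, or 0 if it's empty, skipping the bounds check.
    fn first_or_default(items: &[i32]) -> i32 {
        if items.is_empty() {
            return 0;
        }
        // SAFETY: we just checked the slice is non-empty, so index 0 is in bounds.
        unsafe { *items.get_unchecked(0) }
    }

    fn main() {
        assert_eq!(first_or_default(&[7, 8, 9]), 7);
        assert_eq!(first_or_default(&[]), 0);
    }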

The ideal is "if UB happened, it's the fault of unsafe somewhere; if you have none, it's in a (buggy) dependency". Compiler soundness bugs are exceptions to this ideal. Yep, they suck; yep, the more that are squashed, the better. But it doesn't mean it's all for naught. This compiler bug is easy to demonstrate, for example, but I don't think it's terribly common to accidentally run into.

We're close enough to the ideal that I feel fine stating it when someone asks "is this sound?" in a post here with no unsafe, say. After all, any UB still isn't your fault. It's hard for me to overstate how much of an improvement that is.

I think it's an accomplishment future languages will learn from.

There generally is cultural pressure against blatant UB and unsoundness. This is an oft-cited example.


  1. There's a lot more unsafe than just Any too. Sometimes it's for performance reasons. But other times it's because it's simply necessary. The foundation of every language is unsafe; it has to be in order to interact with the operating system, deal with memory allocation, define synchronization primitives, etc. ↩ī¸Ž

4 Likes

I look at "safety" as defined in the Rust world like this:

There is a long, narrow and winding road high up on a cliff top, dangerously close to the edge. With its tight turns and steep inclines it is very dangerous to traverse, especially at night or in bad weather. Let your attention drop, misjudge your speed or suffer a mechanical failure and you could find yourself plunging hundreds of feet down to the sea below.

After decades of accidents and many fatalities, the local authorities decided this situation was bad for tourism and expensive to keep cleaning up after, so they fitted the road with a safety barrier, which hopefully would catch any careless drivers and save them from certain death. This worked really well: lives were saved, money was saved, more visitors came, trade flow increased and boosted the economy.

But still there were some who hated that safety barrier. They would say:

  â€ĸ That safety barrier stops me driving off the road where I want to.
  â€ĸ I can't take my short cuts anymore.
  â€ĸ I know how to drive, I don't need no stupid safety barrier. People should just be more careful up there.
  â€ĸ Safety barriers are expensive and slow people down.

And then there is the anti-barrier crowd's ultimate argument: "Look, up there, high up where nobody goes unless they really have to, there is a 10 meter gap in the barrier. That is not safe. See, the whole safety barrier is pointless."

I'm sure readers can see the analogies here to C/C++ and Rust and a certain class of uber-programmer that likes to hate on Rust, as we see all over the net. Sure, we have "unsafe"; sure, there may be bugs in the Rust safety system. That does not negate the value of the system as a whole.

7 Likes

Not sure what you mean by this, but the ownership/borrowing rules have been formally proven sound.

This is excessive; not all advantages of Rust are lost if a crate uses unsafe. Using unsafe does mean the possibility of UB, but if that ever happens you know where you have to look. Also, once you prove that that specific use of unsafe is actually safe, you can go on using the safe interface you built on top of it, ignoring everything that's under the hood. The actual advantage of Rust is not removing every possible source of UB, but reducing their scope so that local reasoning about their soundness is possible.

The other flaws you mentioned are possible, but they are possible in other languages too. It's impossible to remove every single source of bugs or unwanted behaviour, and even trying to do so would lead to languages requiring the programmer to formally prove that their code is correct, which is hard, slow and painful. You need some level of pragmatism in the end.

12 Likes

It is not true that Rust's ownership/borrowing rules have been formally proven sound. If they had been, this CVE wouldn't be present (or it would have been fixed long ago).

To quote your link, "In this paper, we give the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust" -- note "realistic subset", not all of Rust (in particular, not the bit that includes this CVE).

1 Like

This bug has nothing to do with ownership/borrowing rules. It is in the code responsible for subtyping. You can technically have Rust without subtyping, but you're right it should be part of a "realistic" subset of Rust.

This doesn't mean that subtyping is unsound by itself. It is the implementation that's incorrect. There are a bunch of "obvious" fixes, but they all break a lot of existing (sound) code, which is very unfortunate. The proper fix that would keep that code working requires a large refactor of the trait system, which takes a lot of time.

3 Likes

Unsafe exists for a reason; it's meant to be used when it needs to be used. I get being wary of it, I sure am, as I currently don't have the knowledge to judge whether some unsafe code is fine or not, but we really don't need another Actix situation.

2 Likes

I think Rust 2.0 should be a break with the 1.x series, and should try to propose a new way of doing things instead of finding workarounds or trying to solve problems while avoiding breaking a syntax that has reached its limits.

Of course its reason for being is to get around the limits of the language, but is it inevitably the only way? (It's an open door to all kinds of abuse.)

There is already a proposed "new way" that would fix the issue, which is not a workaround and is compatible with most existing valid code, but it needs to be properly implemented and that takes a lot of time and effort. A new release would likely take the same amount of time, if not more. If your problem is with this bug not being fixed until now, I don't see what benefit you would get from waiting even more.

What do you propose then? This is not the "only way" but AFAIK all the other available options have trade-offs that Rust decided not to make.

Personally I would love to see a language like Rust but with dependent types and where the current usages of unsafe are replaced with user-provided proofs of safety/correctness, but this is just a dream (and likely won't be as ergonomic to use).

5 Likes

The way I see it, Rust's safety doesn't necessarily come from how the tools it provides stop you from making mistakes, but from how they force you to be explicit about them.

Besides, even in the safest language possible, you still need to be careful with your dependencies; there's no way around that. At least this way, as far as unsafe is concerned, you know what to look for: it's better to have to figure out whether a few unsafe blocks are sound than to look for needles in the haystack that is the entire code base.

Of course, if someone came up with a better way to do it, great! But that ain't gonna be me :stuck_out_tongue: so I work with what we have now, which in my opinion is good enough.

Rust can probably only survive 0..=2 major breaks, so any attempt is going to really have to count. Like, include a number of things that cannot be solved without a major breaking change, including big design areas like "what would make async ideal", dyn-safe GATs, and so on.[1]

I have my own Rust 2 wishlist (probably a lot of people do), but I don't know that it will ever happen. How far the teams are already willing to push editions worries me, because if they push too hard they'll make a (hopefully temporary) ecosystem split which will effectively squander the possibility of a survivable major version bump that really counts.

Some feel the number of survivable breaks is 0 and there will never be a 2.0.

I feel Rust-inspired languages are inevitable at this point, but who knows if any will gain traction.

There are other ways. Look at any managed language. They still have unsafe implementations, but don't let programmers of the language get their hands directly on it.

But Rust also wanted to be a system language on the level of programmers doing FFI, directly interacting with the OS, and so on. Your OS/hardware/whatever aren't "safe". So for that, you need programmer accessible unsafe in some form.

The most common form is "everything is unsafe". However with Rust, you can pretty much have both by only allowing unsafe in libraries you trust, and forbidding it above that. Then the language + trusted-low-level-libraries become the "managed language" you build in.
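
Concretely, the "forbidding it above that" part is a one-line lint in the crates that should stay in the managed layer:

    // Crate-level attribute: any unsafe block or unsafe fn anywhere in this crate
    // is now a hard compile error, so the trust boundary sits below this crate,
    // in the dependencies you chose to rely on.
    #![forbid(unsafe_code)]

    fn main() {
        println!("no unsafe allowed in this crate");
    }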

I still don't know how to convey how much of an improvement this is. Here's a decent paper on the topic, discussing safe abstractions and expert review of unsafe code. They give Rust examples, but they also talk about how one can apply the safe/unsafe distinction in different domains and in any language (enforced by things like code review and policy):

To address this issue at scale and with high assurance, Google applied Safe Coding to the domain of injection vulnerabilities. This was unequivocally successful and resulted in a very significant reduction, and in some cases complete elimination of XSS vulnerabilities. For example, before 2012, web frontends like GMail often had a few dozen XSS per year; after refactoring code to conform to Safe Coding requirements, defect rates have dropped to near zero. The Google Photos web frontend (which has been developed from the start on a web application framework that comprehensively applies Safe Coding) has had zero reported XSS vulnerabilities in its entire history.

This should be, and as far as I know is, the actual model of most Rust projects: safe code on top of a trusted foundation. The safe/unsafe boundary being part of the language is a critical reason why. If you don't use unsafe, then on any typical day, you just don't have to even think about an entire class of problems. They much more rarely exist, and when they do, it's not your fault.

Even in crates that allow unsafe, the boundary encodes the "safe coding" concept.

Do you mean a malicious upstream? A purely unsafe-free, soundness-bug-free ecosystem will only help you so much there. You can exfiltrate data, download executables, run a bitcoin mine, and delete the production database with perfectly sound code.

I'm not saying the additional avenues opened by unsafe and soundness bugs are nothing. An attacker could more sneakily craft in an exploitable memory safety bug, say. But I don't think you'll actually be any better off if you remove them from an upstream attacker's toolbox, either; they'll just use different tools.

If you just meant "people using unsafe who don't know what they're doing", I agree that's a thing. I have even been that person. Education and cultural pressure and policies in your own code base are ways of keeping it in check.


  1. As was pointed out, the OP issue has a non-major-breaking fix in the works. ↩ī¸Ž

7 Likes

I think Rust would not have grown/succeeded without allowing everyone who needs them to create useful abstractions, implemented with unsafe. The single ownership model and static ownership checking are so restrictive that it is not practical to provide everything one might need (that must be implemented with unsafe due to this model) in the std library. Unsafe must be available for everyone so these abstractions can be created and evolve.

1 Like

"unsafe" is required no matter what. It is impossible to get any input or output without "unsafe". Without I/O programs are useless.

To perform I/O one needs to access physical hardware, which the compiler knows nothing about and can prove nothing about. Or one needs to make calls into the OS or FFI, which are also uncontrolled by the compiler.

So "unsafe" is not about the limits of the language it's about the fact that the compiler does not know about anything outside of your program. Like all the rest of the universe.

5 Likes

Something to mention for OP is that certain things are considered "outside the model" and will always be unsafe without a good way to prevent them, despite being things we very much want to be able to do. The most notable one is /proc/self/mem on Linux: a special "file" which holds the contents of process memory. You can read/write that "file" with normal file I/O, and it will change your process memory. This breaks any "safe" language on Linux.
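
A hedged sketch of what that looks like (Linux only; the optimizer is free to assume x never changes, hence the black_box calls, so treat this as illustrative rather than guaranteed):

    use std::fs::OpenOptions;
    use std::hint::black_box;
    use std::io::{Seek, SeekFrom, Write};

    fn main() -> std::io::Result<()> {
        let x: u8 = 1;
        let addr = black_box(&x) as *const u8 as u64;

        // Ordinary "safe" file I/O, no unsafe anywhere...
        let mut mem = OpenOptions::new().write(true).open("/proc/self/mem")?;
        mem.seek(SeekFrom::Start(addr))?;
        // ...yet this overwrites x through the kernel, behind the compiler's back.
        mem.write_all(&[42])?;

        println!("x = {}", black_box(&x)); // can print 42 even though x was never reassigned
        Ok(())
    }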

If you want an actually "safe" way to run untrusted or even semi-trusted code, you need some sort of sandbox. Virtualization/containerization is the usual way; wasm (with no or scoped wasi capabilities) is an emerging option.

5 Likes