Are Rust crates secure?

Are these rhetorical questions for which you think there's a good answer in other open source communities but not Rust? Or are they simply open-ended questions about how any open source community can possibly create a trustworthy set of common libraries?

1 Like

It's a question about how Rust itself can be elevated into a highly secure programming environment. The requirements for that are that LLVM vulnerabilities are patched, that the compiler is protected against someone injecting malicious code into compiled binaries, and that the crates are not compromised.

The premise is this: if I or anyone else use Rust as-is to build an IoT network, can I be sure that thread safety by itself will keep my devices and network secure? Given that Rust is likely to make it harder to run remote exploits against the binaries themselves, governments and bad actors around the world would shift their focus to compromising the programming environment, introducing vulnerabilities higher up the chain.

So, is this being mitigated in the Rust environment specifically, and how can it be further improved?

3 Likes

On the issue of trusting crates on crates.io, how is trusting them different from trusting any set of open-source dependencies that come from anywhere else? Say you didn't have crates.io and instead were using libraries downloaded directly from various upstream sources. What if it were all in the standard library?

I see absolutely no difference between these trust scenarios. In either case, the only way I can truly trust all the dependencies is to audit them and then vendor/lock them down into that version of my build. If I upgrade any of those dependencies to a newer version, I must audit them again. Every time. That is the ONLY way to have true trust in ANY scenario, whether the dependencies come from the standard library, a registry like crates.io, or direct third-party downloads from upstream. This even applies to commercial/closed-source software, with the added downside that, in most cases, I don't even have the option to audit the source; I simply have to trust that because I gave them money, they are doing their job properly and have my best interests at heart (which has been proven time and again to not always be the case, unfortunately).
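To be concrete about the mechanics, here is a minimal sketch (the crate name and version are hypothetical placeholders, not a recommendation): pin the exact version you audited in Cargo.toml, commit the Cargo.lock, and use `cargo vendor` to copy the audited sources into your own tree so future diffs are against bytes you control.

```toml
# Cargo.toml -- require exactly the version that was audited,
# so `cargo update` cannot silently pull in an unreviewed release.
[dependencies]
some-audited-crate = "=1.2.3"
```

After that, running `cargo vendor` drops the sources for every locked dependency into a local `vendor/` directory that can be checked in and re-diffed on each upgrade.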

So what I'm saying is that way too much is being made of the whole "I can't trust crates.io" issue. It's pretty much a non-issue if you really consider what "trust" in dependencies means.

That being said, things like adding signing or 2FA to crates.io are worth pursuing. The notion of curated crates.io alternatives is not a bad idea either. In fact, I can imagine a scenario where a company or consortium of companies creates a curated alternative to crates.io that you pay a membership fee to access. This could include certain "guarantees" around testing standards, documentation standards, auditing, and chain of custody, backed by official audits, perhaps adhering to things like FDA or FIPS requirements. My guess is this would be costly (as it should be). Seems like a business opportunity?

5 Likes

So your last paragraph is what this discussion is about. The fact of the matter is, we can say that trusting 95% of the crate authors is fine, based on previous audits of their work. But now money has to be spent to verify that the other 5% didn't muck with some small dependency that my updated crate relies on. So how can some level of open-source security be guaranteed for Rust?

Imagine this scenario:
A vulnerability has been discovered in a security drone controller that allows remote login and spying. The drone is used at a multinational R&D facility to patrol the perimeter and verify everyone on the premises signed in at the gate with their ID.

The news articles report that a security researcher says the LLVM-based Rust compiler was compromised 5 years ago, which is when this vulnerability was introduced.

The security researcher says that, because the Rust website advertises "thread safety", the contracted programmer for that part of the code assumed that remote buffer overflows would be contained. Unfortunately, the Rust community rested on its laurels and let the security of its dependencies and compiler dependencies slip, and today Rust can only be used after a thorough audit of every single dependency. Hence, most companies are using dependencies that are 10 years old and carry some otherwise minor but well-known exploits, simply because of how long those dependencies have been in circulation. That is because half of the newest dependencies were shown by Shor et al. to be heavily infiltrated by a foreign government so that exploits could be run against Rust-compiled code, something that could have been prevented had the community not become complacent.

1 Like

This isn't coming from nowhere either, mind you. I was a security researcher at a reputable firm for a short while, and a colleague was working next to me in the lab on a mitigation procedure for patching up a couple of thousand point-of-sale terminals. A third-party library had several buffer overflows in it; the code had been compiled long ago and the source was long forgotten. It was written in C, of course. Some other R&D guys had managed to run a game of Tetris on the terminal by programming the chip of a credit card to run the exploit when the card was inserted. With the same exploit they could also make the terminal print a completely fake 'payment approved' receipt.

2 Likes

This is exactly it, but some effort on standard guidelines could give at least some level of security when using the open-source tools; call it open-source security, if you will. The classification of security goes something like:
High) safe from most government attacks
Medium) safe from experienced hackers working mostly alone
Low) safe from script kiddies

Rust can, and should, have an open-source Medium level of security. It does not at the moment, given the haphazard crates implementation. Which is why this is also a good idea: Maybe it's time for a "crates-team"?

The point is that a huge proportion of C/C++ vulnerabilities occur because of memory errors in the code. While some operating systems provide buffer overflow mitigations, many low-power devices run without an OS at all. I'm just not going to run an MPC (model predictive control) system on top of an OS, because the feedback loop would become unstable due to the indeterminate latency and my drone would fall out of the sky.
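To make that class of bug concrete, here is a minimal sketch (illustrative only, not real protocol-parsing code): an attacker-controlled length field that would silently corrupt memory in C becomes a checked, handled `None` in safe Rust.

```rust
fn read_len_prefixed(packet: &[u8]) -> Option<&[u8]> {
    // First byte is a payload length claimed by the sender (untrusted input).
    let claimed_len = *packet.first()? as usize;

    // In C, trusting `claimed_len` and copying that many bytes around a
    // fixed-size buffer is the classic remote overflow. Safe Rust forces a
    // bounds check instead: `get` returns None rather than ever touching
    // memory outside `packet`.
    packet.get(1..1 + claimed_len)
}

fn main() {
    // Attacker claims 200 bytes of payload but only sends 3.
    let malicious = [200u8, 1, 2, 3];
    assert_eq!(read_len_prefixed(&malicious), None);

    let honest = [3u8, 1, 2, 3];
    assert_eq!(read_len_prefixed(&honest), Some(&[1u8, 2, 3][..]));
}
```

The overflow class goes away in safe code; what remains is exactly the logical and supply-chain risk this thread is about.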

So now I need some means to ensure nobody targets the logical level of the code by torpedoing the security of cargo/xargo dependencies. I can mitigate buffer overflows by using Rust's super-duper NLL coolness, only to slam into a wall because I can't use most of the dependencies: I'd rather implement my own control-system code than lean on a potentially insecure crate that I'd have to audit. If there were a preferred list of crates that followed the API guidelines, and those guidelines provided for a security level by vetting the contributors to crates at that level, it would be fine. I'd get Medium-level security without having to spend all my startup capital on a security audit.

1 Like

I actually specifically meant Peter Bertok's questions about trusting crates.

Is that something a programming language, especially a low-level one, should aspire to? I don't see how it's possible to make any Turing-complete language "secure", outside of running it in a completely sandboxed environment. (E.g. blockchain smart contracts could theoretically be "secure", but since those tend to use something like Ethereum's "gas" to ensure termination, arguably they're not Turing-complete.)

2 Likes

Depends on what you mean by secure.

I mean the Medium level of security here. Medium is certainly achievable, since it would not take a large amount of effort to ensure. A list of crates would be required, with two levels of security: 1) Normal, 2) Medium. Normal would allow code merges from any GitHub account. Medium security would apply to crates that are more broadly used, as well as to the standard library. There, only a whitelist of long-time contributors could merge automatically, and newer contributors would be more carefully screened. Long-time contributors would also need to watch that a trusted account doesn't get hacked and used to submit malicious code. That, combined with this: https://internals.rust-lang.org/t/requiring-2fa-to-publish-to-crates-io/7931

Don't think of security as "we need to build Fort Knox". That's not how 99.999% of hacks occur; they happen because someone forgot to lock the front door. Right now, for the most part, I don't see Rust crates as even having locks on the doors.

2 Likes

I believe this discussion is mostly redundant with the discussion occurring on the internals forum here: https://internals.rust-lang.org/t/requiring-2fa-to-publish-to-crates-io/7931?u=gbutler.

I believe it would be most productive to end this thread and move all discussion of this issue to that thread. This is covering a lot of the same ground and re-hashing a lot of the same or similar points.

2 Likes

I concur with @gbutler69 that this thread's content would be better placed on the internals forum. However, I do not find the issues raised by @DrizztVD1 on this thread to be generally redundant to the 2FA discussion. Perhaps that is because DrizztVD1 and I have had similar work experiences with respect to determining and protecting attack surfaces :flushed:.

1 Like

Yes, I would agree that "redundant" is a poor choice of wording on my part. I didn't intend to diminish the issues being raised by anyone on this thread; rather, I felt that coalescing the discussion would be useful.

80% of a security consultant's report consists of copy-pasting mitigations from previous reports. It's that old thing of preventing 80% of the damage by finding solutions to 20% of the problems.

Thanks for the heads-up, but that thread is dedicated to 2FA and not the overall discussion of security, so it would have to branch off to a new topic on the broader principles. Perhaps internals is better though.

Please re-read the posts here, though; saying that the things mentioned here are similar suggests you must have missed a number of things.

Here are some other thoughts I have with respect to security and trust in the Rust ecosystem. What matters is liability. Why? Because you can put a monetary value on it, and that is how we economically compare actions and outcomes.

What do I mean by this? Think of it this way: if I am building software to sell or give away, I want to minimize my liability for any unforeseen outcomes in either case. In the software world this is mostly done through licenses and EULAs that disclaim liability, warranties, or guarantees of any sort to the degree permitted by law. If I am actually selling a product (or being contracted to provide work), I generally need to carry some sort of insurance for professional liability, product liability, and so on. That insurance costs me different amounts depending on how much liability I want it to cover and on the risk the insurance company perceives.

Now, insurance companies are really good at assessing risk. It's what they do; it's how they butter their bread, so to speak. If I want to build a solution from open-source, closed-source, or self-written software components, how is the insurance company to determine the risk? They determine it through audits of processes and procedures: standards for testing, documentation, security practices, code auditing, and so on. The only way for an insurance company to get the assurance it wants would be to have an insurer-approved organization responsible for vetting/curating something like crates.io to whatever industry-approved standards would make them comfortable assessing and pricing the risk.

For that reason, I think the discussion has to move beyond specific technical issues and into the realm of what kind of "organization" needs to exist, how it would be funded, how it would be staffed, and how it would coordinate with and comply with industry-approved standards and auditing processes recognized by the insurance and auditing industries. Just talking about specific technical aspects like buffer overflows, unsafe-code guidelines, 2FA, and crate signing, while useful, won't get you anywhere near meaningful "trust" (meaning the insurance companies can assess and price the risk to their satisfaction).

These are just some thoughts I have on the issues.

3 Likes

Or it means we have different ideas of what "similar" means.

1 Like

Well, that would be part of the topic we have going here. The best solution may be to take these discussions, plus the posts not related to 2FA on the internals thread, and use them to argue for a security team for Rust as part of the 2019 roadmap. Since most of their work would be on crates, they would double as a crates team. That way, Patreon and other sponsorship could cover the money side of the discussion.

The consensus we need to reach here, however, is whether the current state of crate security is such that a security workgroup is needed.

I don't agree, though, that buffer overflows are not relevant here. While I have not tested it, it's probable that the borrow checker will severely limit both local and remote code execution in reasonably well-designed code. Right now, C/C++ needs to be excellently designed to reach the same level of security. Hence, memory safety makes crate security that much more important; otherwise we throw away a very useful result flowing, once again, from Rust's borrow checker.

Yes, but I never said anywhere that it wasn't relevant. I specifically said:

That is a long way from saying it isn't relevant. I would kindly ask you to try to avoid mischaracterizing what others have said. It weakens the overall discussion.

1 Like

It is relevant to building trust in Rust's security. Is that a mischaracterization?

The premise is that Rust has an underlying reason why, once crates are shown to be secure, individuals using those crates can maintain the same level of security as the crates themselves. Memory safety, i.e. the borrow checker, is that reason.
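To make that concrete, here is a tiny sketch (illustrative only, not a proof of security): the use-after-free pattern that underlies many C exploits is rejected by the compiler, so that guarantee travels with every crate built from safe Rust.

```rust
fn main() {
    let token_ref;
    {
        let token = String::from("session secret");
        token_ref = &token; // borrow of `token`
        println!("inside scope: {token_ref}"); // fine: `token` is still alive
    } // `token` is dropped (freed) here

    // Using the reference after the owner is gone would be a use-after-free
    // in C. In Rust it is a compile error: uncommenting the next line fails
    // with error[E0597]: `token` does not live long enough.
    // println!("outside scope: {token_ref}");
}
```

None of this protects against a crate that is logically malicious, which is exactly why the trust and auditing questions in this thread still matter.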