Are Rust crates secure?

This is exactly it, but some effort on the standard guidelines could give at least some level of security when using open-source tools; call it open-source security, if you will. The classification of security goes something like this:
High) Safe from most government attacks
Medium) Safe from experienced hackers working mostly alone
Low) Safe from script kiddies

Rust can, and should, have an open-source Medium level of security. It does not at the moment, given the haphazard crates implementation. Hence this is also a good idea: Maybe it's time for a "crates-team"?

The point is that a huge proportion of C/C++ vulnerabilities occur because of memory errors in the code. While some operating-system buffer-overflow mitigations exist, many low-power devices run without an OS. I'm simply not going to run an MPC control system on an OS, because the indeterminate latency would make the feedback loop unstable and my drone would fall out of the sky.

So now I need some means to ensure that nobody targets the logical level of the code by torpedoing the security of cargo/xargo dependencies. I can mitigate buffer overflows by using the super-duper NLL coolness of Rust, only to slam into a wall because I can't use most of the dependencies: I'd rather implement my own control-system code than leverage a potentially insecure crate that I'd have to audit. If there were a preferred list of crates that follow the API guidelines, and those guidelines provided for a security level by vetting contributors to crates at that level, it would be fine: I'd get Medium-level security without having to spend all my startup capital on a security audit.
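
To make the memory-safety point concrete, here is a minimal sketch (mine, not from the original posts) of the kind of bug the language rules out: an out-of-bounds read that silently overflows a buffer in C either returns `None` or panics in Rust.

```rust
// Minimal sketch: checked and direct indexing on a fixed-size buffer.
fn main() {
    let buf = [0u8; 4];
    let index: usize = 7; // imagine this arrived from untrusted input

    // Checked access: no read past the end of the buffer is possible.
    match buf.get(index) {
        Some(byte) => println!("read {}", byte),
        None => println!("index {} rejected: out of bounds", index),
    }

    // Direct indexing would panic at runtime instead of corrupting memory:
    // let byte = buf[index]; // panics with "index out of bounds"
}
```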

1 Like

I actually specifically meant Peter Bertok's questions about trusting crates.

Is that something a programming language, especially a low-level one, should aspire to? I don't see how it's possible to make any Turing-complete language "secure", outside of running it in a completely sandboxed environment. (E.g. blockchain smart contracts could theoretically be "secure", but since those tend to use something like Ethereum's "gas" to ensure termination, arguably they're not Turing-complete.)

2 Likes

Depends on what you mean by secure.

I mean the Medium level of security here. Medium level is certainly possible, since you would not need to spend a large amount of effort to ensure it. A list of crates would be required with two levels of security: 1) Normal, 2) Medium. Normal would allow code merges from any GitHub account. Medium security applies to crates that are more broadly used, as well as the std-lib. Here only a whitelist of long-time coders could contribute automatically, and newer contributors would be more carefully screened. Also, long-time coders would need to check that a trusted account does not get hacked and used to submit malicious code. This, along with this: https://internals.rust-lang.org/t/requiring-2fa-to-publish-to-crates-io/7931
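
As a purely hypothetical illustration of the two-tier idea (nothing like this exists in Cargo or crates.io, and every name below is invented), a curated list maintained by such a group could be modelled roughly like this:

```rust
// Hypothetical sketch only; these types and names do not exist anywhere.
enum TrustLevel {
    Normal, // merges accepted from any GitHub account
    Medium, // publishes only from whitelisted, long-standing contributors
}

struct CuratedCrate {
    name: &'static str,
    level: TrustLevel,
    trusted_publishers: &'static [&'static str],
}

// An imagined curated list that a security workgroup could maintain and audit.
const CURATED: &[CuratedCrate] = &[
    CuratedCrate {
        name: "some-widely-used-crate",
        level: TrustLevel::Medium,
        trusted_publishers: &["long-time-maintainer"],
    },
    CuratedCrate {
        name: "some-new-experiment",
        level: TrustLevel::Normal,
        trusted_publishers: &[],
    },
];

fn main() {
    for c in CURATED {
        let tier = match c.level {
            TrustLevel::Medium => "Medium",
            TrustLevel::Normal => "Normal",
        };
        println!("{} -> {} security, publishers: {:?}", c.name, tier, c.trusted_publishers);
    }
}
```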

Don't see security as "we need to build Fort Knox". That's not how 99.999% of hacks occur. They happen because someone forgot to lock the front door. I don't see Rust crates as even having locks on the doors for the most part right now.

2 Likes

I believe this discussion is mostly redundant to the discussion occurring on the internals forum here: https://internals.rust-lang.org/t/requiring-2fa-to-publish-to-crates-io/7931?u=gbutler.

I believe it would be most productive to end this thread and move all discussion of this issue to that thread. This is covering a lot of the same ground and re-hashing a lot of the same or similar points.

2 Likes

I concur with @gbutler69 that this thread content would be better placed on the internals forum. However I do not find the issues raised by @DrizztVD1 on this thread to be generally redundant to the 2FA discussion. Perhaps that is because DrizztVD1 and I have had similar work experiences relative to determining and protecting attack surfaces :flushed:.

1 Like

Yes, I would agree that "redundant" is a poor choice of wording on my part. I didn't intend to diminish the issues being raised by anyone on this thread, rather, I felt that coalescing the discussion would be useful.

80% of a security consultant's report consists of copy-pasting mitigations from previous reports. It's that old thing of preventing 80% of the damage by finding solutions to 20% of the problems.

Thanks for the heads-up, but that thread is dedicated to 2FA and not the overall discussion of security, so it would have to branch off to a new topic on the broader principles. Perhaps internals is better though.

Please re-read the posts here, though; saying that the things mentioned here are similar suggests you must have missed a number of things.

Here are some other thoughts I have with respect to security/trust of the Rust ecosystem. What matters is liability. Why? Because you can put a monetary value on it. This is how we economically compare actions and outcomes.

What do I mean by this? Well, think of it this way: if I am building software to sell or give away, I want to minimize my liability for any unforeseen outcomes in either case. Part of the way this is done in the software world is through licenses and EULAs that specifically disclaim liability, warranties, or guarantees of any sort to the degree permitted by law. If I am in a situation where I am actually selling a product (or being contracted to provide work), I generally need to provide some sort of insurance for professional liability, product liability, etc. This insurance costs me different amounts depending on the amount of liability I want the insurance to protect me against versus the perceived risk the insurance company sees.

Now, insurance companies are really good at assessing risk. It's what they do; it's how they butter their bread, so to speak. If I want to use open-source, closed-source, or self-written software components to build a solution, how is the insurance company to determine risk? They determine risk through audits of processes and procedures that include certain standards of testing, documentation, security procedures, code auditing, etc. The only way for an insurance company to get the assurance they want would be by having an insurance-approved organization responsible for vetting/curating something like crates.io to whatever industry-approved standards would make them comfortable in assessing and valuing the risk.

For that reason, I think the discussion of these things has to move beyond specific technical issues and into the realm of what kind of "organization" needs to exist, how it would be funded, how it would be staffed, and how it would coordinate and comply with industry-approved standards and auditing processes approved and/or recognized by the insurance and auditing industries. Just talking about specific technical aspects of things like buffer overflows, unsafe code guidelines, 2FA, crate signing, etc., while useful, won't get you anywhere you need to be to have meaningful "trust" (meaning the insurance companies can assess and value the risk to their satisfaction).

These are just some thoughts I have on the issues.

3 Likes

Or it means we have different ideas of what "similar" means.

1 Like

Well, that would be part of the topic we have going here. The best solution may be to take these discussions, and the posts not related to 2FA on the internals thread, and use them to argue for a Security team for Rust as part of the 2019 roadmap. Since most of their work would be on crates, they would also serve as the crates team in one. That way, Patreon and other sponsorship can cover the money part of the discussion.

The consensus that we need to reach here, however, is whether the security of crates at the moment is such that such a security workgroup is needed.

I don't agree, though, that buffer overflows are not relevant here. While I have not tested it, it's probable that the borrow checker will severely limit both local and remote code execution in reasonably well-designed code. Right now, C/C++ needs to be excellently designed to meet the same level of security. Hence, memory safety makes crate security that much more important; otherwise we throw away a very useful result flowing, once again, from Rust's borrow checker.
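
For readers unfamiliar with it, here is a minimal sketch, one that deliberately fails to compile, of the class of bug the borrow checker rejects outright; the equivalent C/C++ compiles cleanly and becomes a use-after-free, the classic precursor to code execution:

```rust
// This is rejected by the compiler, which is exactly the point.
fn main() {
    let reference;
    {
        let data = vec![1, 2, 3];
        reference = &data[0];
    } // `data` is dropped here while `reference` still points into it
    println!("{}", reference); // error: `data` does not live long enough
}
```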

Yes, but I never said anywhere that it wasn't relevant. I specifically said:

That is a long way from saying it isn't relevant. I would kindly ask you to try to avoid mischaracterizing what others have said. It weakens the overall discussion.

1 Like

It is relevant to building trust in Rust's security. Is that mischaracterization?

The premise is that Rust has an underlying reason why, once crates are shown to be secure, individuals using those crates are able to maintain the same level of security as those crates. Memory safety, i.e. the borrow checker, is that reason.

I think so.

I think the state of crates currently is "good enough". It would be nice to have a more systemic approach to security involving vetted crates, code reviews, etc., but I think the people resources aren't there. Vetting even 1% of the current crates, even one version of those 1% of crates, is an absolutely immense effort. It also stifles innovation a bit, by preventing uptake of new crates which may have a much better API but which may not yet have been reviewed.

3 Likes

Perhaps this could start as an informal interest group in which those of us with relevant background could participate. I too am new to Rust, and find myself already devoting too much time to these forums (fora?). I've been attempting to get other former colleagues with Medium and High white-hat security experience to look into Rust. A request from me for them to participate in such a group might provide the needed impetus.

4 Likes

Continuing, mostly, here: Security fence for crates - tools and infrastructure - Rust Internals

1 Like

That seems entirely reasonable. However...

I don't see how this statement could possibly be true, especially in the current context of responding to Peter Bertok's questions.

2FA, signing, etc. would not solve these problems, and I don't see any other way to "ensure" the trustworthiness of crates without "a large amount of effort."

My point about Turing-completeness is that full static analysis of program behavior is not possible in principle, and any form of algorithmically enforced trustworthiness via 2FA, signing, etc. cannot prevent malicious attacks of the kind described by Peter Bertok. In particular, social attacks (e.g. phishing, typo-squatting) would be difficult to prevent, and the hypothetical "crate turns bad" scenarios seem essentially impossible to stop.
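
To make the "crate turns bad" scenario concrete: Cargo compiles and runs a crate's `build.rs` as an ordinary program on the builder's machine, so a maintainer who turns malicious (or whose account is compromised despite 2FA) can ship code that no signing scheme would flag. A deliberately simplified sketch:

```rust
// build.rs -- simplified illustration of the risk; the file read here is only
// a harmless stand-in for exfiltration or worse.
use std::fs;

fn main() {
    // A real attack would hide behind version checks or obfuscation; the point
    // is that 2FA and signing prove *who* published a crate, not *what* its
    // code does once it runs.
    let _ = fs::read_to_string("/etc/hostname");
    println!("cargo:rerun-if-changed=build.rs");
}
```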

5 Likes