What about crate injections?

I see no reason catastrophic injections couldn’t happen in the Rust ecosystem.

Injections like this one seem even easier to pull off in the Rust world than in the Node one.

The problem is well known. I know of the crev attempt at a solution but it’s never in the news and doesn’t seem to mobilize people.

Do you see other solutions? Do you think it’s OK to ignore this problem?


I still think the two most promising are:

  • crates.io 2-factor auth for publishing, or requiring multiple authors to authorize publication of a new version. This should reduce the risk of account hijacking.

  • Shouting louder about cargo-crev and giving more ways to review the code. This seems like the best way to catch malicious actors, without limiting usefulness of Rust and the ecosystem.

I get a feeling that people want something to magically solve the problem for them, but we don’t have easy solutions (or rather, we have easy solutions, but they’re not 100% bulletproof, so they’re dismissed as insufficient).


I’ve just taken a look at cargo-crev, and I don’t see how it plays a role in a practical solution. It seems way over-engineered to solve unimportant problems while neglecting to address important ones. It is written to be language agnostic and as a result is both hard to use and looks ineffective. In general, any security system that is hard to use is ineffective.

Because it is language agnostic, it cannot use an existing identity system such as crates.io users.

Crev goes to great lengths to be very distributed, which makes reviews hard to find. Meanwhile crates.io is centralized, so we don’t get any benefit from the distributed design: we still have a single point of failure.

Crev focuses on a web of trust approach which I believe to be pretty definitively found to be ineffective. Most users don’t understand it, and in most (almost all?) cases don’t have any basis for making a trust judgement. A system that forces its users to do the impossible is not going to provide security.

I agree that a code review system is needed, but I think it needs to be centralized so that it can be accessed from crates.io. It should reuse the same identities that we already rely on. And while I agree that trust is a social construct, in practice we trust people because of what they’ve done, and crates.io already has a lot of that information, which I myself do not.


What do you mean here? The concept is language agnostic, but the implemented tool does take Rust into account and lets you review crates, list the dependencies and their reviews and issues, and it does use crates.io.

There’s a lot of engineering, especially to make it easier to use. It’s far from perfect IMO, but it’s not meant to be used by everybody: it’s for developers who feel able to review code. The really difficult part remains reviewing code (reviewing a whole crate is hard).

As for the distributed aspect… it removes the main problem of forcing everybody to trust the same authority. You may very well choose to trust the same few roots as most people, while being more picky if you want (or when you’re in a private organisation building its own trust tree).

I think that’s part of the problem. It needs to also be usable by developers who use code. That is the whole point.

Oh, you mean the other side. There are efforts on this part (a web site, for example) but it won’t be usable unless there are more reviews than we currently have. Right now you can’t take an application and just casually check that it’s been recursively reviewed down to the deepest crate.

The problem with a system based on a web of trust is that the users still have to decide who to trust. You can say (correctly) that most people will just trust a standard set of reviewers that some other body creates, but that just makes the web of trust useless (because it’s not being used). It’s still being over-engineered, and since, as you say, getting people to review code is hard, it’s not going to work if you make it even harder.


I agree some aspects of cargo-crev could be simpler, and I’d love to see better usability and lower friction.

I do like the WoT of it though. If there were a centralized solution:

  • you would depend on the centralized team to review your crate. What if they were too busy? Or deemed your crate unimportant?
  • everyone would have to trust that central team. What if there was someone on that team who you didn’t trust?

It’s possible to build a centralized solution on top of a WoT solution. For example, crates.io can use reviews from a predefined set of users. But companies can also require their own employees to review crates independently, and trust/distrust reviewers based on the importance of each project.

I’m currently struggling to find time to implement this, but I was thinking about adding an easy web-based front end for crev: log in with GitHub, say whether a piece of code looks OK or not (Tinder for Rust functions), and it would generate all the repos and signatures for you.


A centralized system doesn’t need a centralized team. It could just have a centralized identity system and a centralized database. Those two issues are where crev makes everything very complicated.

An example of a centralized system without a centralized team would be Stack Exchange. All the reviewing is highly decentralized, but all users have accounts on the single system and enter their rankings into the same database.

And actually the trust question is decentralized in Stack Exchange in a way that a web of trust doesn’t achieve. Trust is not transitive; it is accumulated based on actions. A web of trust requires me to evaluate each person, both in terms of their output, but also in terms of their ability to make judgements about other people… and their ability to make judgements about other people’s ability to judge other people.
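To make the contrast concrete, here’s a toy Rust sketch (the data and scoring are entirely made up, not any real system’s API): in a web of trust, trust is transitive out to some distance limit, while in a reputation model it simply accumulates from individual actions.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Web-of-trust model: trust is transitive, up to a maximum distance.
fn wot_trusted(edges: &HashMap<&str, Vec<&str>>, root: &str, max_depth: usize) -> HashSet<String> {
    let mut trusted = HashSet::new();
    let mut queue = VecDeque::from([(root.to_string(), 0)]);
    while let Some((user, depth)) = queue.pop_front() {
        // Skip users we already trust, and stop expanding at the depth limit.
        if !trusted.insert(user.clone()) || depth == max_depth {
            continue;
        }
        for next in edges.get(user.as_str()).into_iter().flatten() {
            queue.push_back((next.to_string(), depth + 1));
        }
    }
    trusted
}

// Reputation model (Stack Exchange style): trust accumulates from actions.
fn reputation(votes: &[(&str, i64)]) -> HashMap<String, i64> {
    let mut rep = HashMap::new();
    for (user, delta) in votes {
        *rep.entry(user.to_string()).or_insert(0) += delta;
    }
    rep
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("me", vec!["alice"]);
    edges.insert("alice", vec!["bob"]);
    edges.insert("bob", vec!["carol"]);

    // With a trust distance of 2 I end up trusting alice and bob, not carol.
    let t = wot_trusted(&edges, "me", 2);
    assert!(t.contains("bob") && !t.contains("carol"));

    // Reputation needs no trust edges at all, only a record of actions.
    let rep = reputation(&[("alice", 10), ("alice", 5), ("bob", -2)]);
    assert_eq!(rep["alice"], 15);
    println!("trusted: {:?}", t);
}
```

The point of the sketch is just that the first model asks each user to pick roots and a distance, while the second asks nothing of them beyond reading a score.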


You can use GitHub/GitLab for hosting your proofs (and as a way to advertise your ID), etc. I’d say it’s even better in certain aspects.

cargo-crev checks the digest of each package’s content, so it doesn’t trust crates.io. For all crev cares, crates.io can be evil. :slight_smile:
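To illustrate the digest idea, here’s a toy sketch; it uses std’s `DefaultHasher` purely as a stand-in, whereas real crev computes a cryptographic hash over the package’s files:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest over a package's file tree (path + contents pairs).
// A real system would use a cryptographic hash, not DefaultHasher.
fn digest(contents: &[(&str, &str)]) -> u64 {
    let mut h = DefaultHasher::new();
    for (path, data) in contents {
        path.hash(&mut h);
        data.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let reviewed = [("src/lib.rs", "pub fn f() {}")];
    let downloaded = [("src/lib.rs", "pub fn f() {}")];
    let tampered = [("src/lib.rs", "pub fn f() { evil() }")];

    // A review proof pins the digest, so the registry that served the
    // package never has to be trusted: any substitution is detectable.
    assert_eq!(digest(&reviewed), digest(&downloaded));
    assert_ne!(digest(&reviewed), digest(&tampered));
    println!("digest check ok");
}
```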

As soon as some reputable group of people (the Rust core team, the RustSec team, some bigger player) establishes their WoT, most people can just ride on the back of their effort. I do plan to introduce some defaults at some point, so that all users get some starting default and don’t have to start “empty-handed with no hope”.

I mean, conceptually … as soon as crates.io publishes their “certified”/“advised” WoT, you can have your centralized trust source just fine. Centralized is a sub-set of distributed.

You can do that already with cargo-crev. If you do cargo crev edit known you can edit a list of users/groups from crates.io that you trust. Then when you do cargo crev verify deps --skip-known-owners it will skip packages from them, so you’re left only with those that you might want to look at manually.

I don’t see how that would work. Users? How will users review the code if they usually don’t understand code?

I’m working on that. Differential reviews are already implemented (in master), and I’m working on static binaries through a CI pipeline (to help people who can’t compile it or something). I have many other ideas for how to increase engagement, etc.

I’d love that. :slight_smile:

The problem with centralized identity systems is that you then have to trust them. For some companies etc. this is a no-go. Many people are actually asking for PGP support. No one asked for a centralized system.

And let’s face it - most developers don’t give a damn. Otherwise this problem would have been solved a long time ago, independently, by many competing teams and projects. It’s not sexy, doesn’t give you lots of recognition, etc. And it just takes quite a bit of effort to review code - there’s no way around it. I am actually thinking and planning to improve on that with built-in ladders and other gamification techniques.

I have higher hopes for bootstrapping by making cargo-crev more friendly for businesses and organizations, especially ones that treat security seriously, than for the general public. Companies might have to do it, so if they just published their reviews, the rest of the community could initially piggy-back on it.

crev’s philosophy here is: redundancy. You should (and actually already can) specify how many reviews you need to consider something trusted. You can also filter out reviews with too little understanding or thoroughness. WoT traversal is also highly configurable (trust-level distances), so you can run verification with different settings back and forth to find the packages that are least trustworthy.
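A rough sketch of such a redundancy policy (the struct, field names, and thresholds are illustrative, not crev’s actual schema or flags):

```rust
// Hypothetical review record, for illustration only.
struct Review {
    reviewer: &'static str,
    thoroughness: u8,  // 0 = none .. 2 = high
    understanding: u8, // 0 = none .. 2 = high
    positive: bool,
}

// A package counts as trusted only if at least `required` positive
// reviews pass the minimum quality bar - that's the redundancy idea.
fn is_trusted(reviews: &[Review], min_quality: u8, required: usize) -> bool {
    reviews
        .iter()
        .filter(|r| r.positive && r.thoroughness >= min_quality && r.understanding >= min_quality)
        .count()
        >= required
}

fn main() {
    let reviews = vec![
        Review { reviewer: "alice", thoroughness: 2, understanding: 2, positive: true },
        Review { reviewer: "bob", thoroughness: 0, understanding: 1, positive: true },
    ];
    for r in &reviews {
        println!("review by {}", r.reviewer);
    }
    // One high-quality review passes a redundancy requirement of one...
    assert!(is_trusted(&reviews, 1, 1));
    // ...but not a requirement of two, since bob's review is too shallow.
    assert!(!is_trusted(&reviews, 1, 2));
}
```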


The biggest problem with being language-specific is that not a lot of real work is done with one language only. A system that supports only one specific language is dead on arrival, IMO. Especially for a relatively new one like Rust. Maybe the NPM or Java ecosystems could pull off something targeting only their own ecosystem, but I can’t see anyone serious investing a lot of effort into a tool/ecosystem that can only review Rust code without at least some hope of extending it to other languages.


What follows is, I believe, the state of the art:

  • Instead of running just one server, put the stuff on something like IPFS. This ensures availability.
  • Each crate in its specific version has one cryptographic hash value. A crate is referred to by its hash. This ensures integrity.
  • Different parties can now review the crates and put the results on a list, say a whitelist, a graylist, a blacklist. This ensures authenticity.
  • The reviews of the different parties may be compared automatically. People might be interested in intersections and unions of such reviews.

All convenience is put on top of such a system. Up to this point, no complex and fragile public-key cryptography is used, but that changes now, as stuff must be signed. Someone needs to publish IPFS addresses, otherwise no one can obtain them. With IPNS one can refer to the newest version of a document. On top of that, one can build a DNS which maps readable names to IPNS addresses.
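The hash-as-identity and list-comparison steps above can be sketched like this (`DefaultHasher` and the placeholder strings stand in for a real cryptographic hash and real crate contents):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Content-addressed identity: a crate version is named by the hash of
// its bytes, so the name itself guarantees integrity.
fn addr(content: &str) -> u64 {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    h.finish()
}

fn main() {
    let id = addr("…crate tarball bytes…");
    let other = addr("…another crate…");

    // Each reviewing party publishes a whitelist keyed by content hash.
    let party_a: HashSet<u64> = HashSet::from([id, other]);
    let party_b: HashSet<u64> = HashSet::from([id]);

    // Intersection of the lists: crates both parties consider fine.
    let agreed: HashSet<u64> = party_a.intersection(&party_b).copied().collect();
    assert!(agreed.contains(&id) && !agreed.contains(&other));

    // Integrity: any change to the content changes its address.
    assert_ne!(id, addr("…tampered bytes…"));
    println!("agreed-on set computed");
}
```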

crev is transport agnostic. You could have crev’s package review proofs exchanged via special servers, TCP, git, email, IPFS, and so on. Right now the implementation is using git repositories everywhere. And even with that, nothing stops people from using IPFS git remotes with something like https://godoc.org/github.com/cryptix/git-remote-ipfs .

Having said that - my opinion is - you’re either trying to do something cool, or something that people will actually use. Piling up stacks of cool bleeding-edge tech is a great way to make sure the solution is totally impractical.

Last time I checked, everybody was still hosting their code on GitHub/GitLab/self-hosted git servers, and availability wasn’t a problem. So I’d wait until everybody does it on IPFS before baking it into anything I’m trying to make popular.

This is, roughly, what crev is doing.

So, yeah. This is what crev is doing, minus IPFS which is a transport layer, which crev is actually agnostic of.


I’d like to make sure everybody understands: the single biggest problem is that reviewing code is laborious and rather thankless. Technical issues are secondary. Try using cargo-crev for a couple of weeks, actually reviewing stuff, and you’ll quickly see what the actual problem is. :slight_smile:

I have almost 200 commits in my proof repo so I really know something about it by now.


Not really. In general security measures make life harder for users and it's fair to say that there's an inherent tension between security and usability.
For concrete examples, just think of authentication measures like passwords and passphrases, 2FA, and security checks at the airport, but also authorization measures like Access Control Lists (ACLs), and more general security measures like encryption, which often use one or more of the authentication examples I gave above.

Who exactly would be performing these code reviews? Can a random developer count on code being reviewed? And if so, who's going to provide the resources to allow that to happen?

Clearly humans would be required to do code reviews, just as humans write Rust code. Using the crates.io centralized system to authenticate these humans would enable any human who publishes a Rust crate to easily share their review of another crate, and the reader of that review would have the same level of confidence in the identity of the reviewers as they have in the identity of the crate’s authors.

Of course not. How would you or anyone else be able to force people to do work?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.