How does crates.io differ from npm?

With the recent (yet another) npm disaster, what does crates.io do differently to prevent such a scenario?
What prevents a crate maintainer from handing their API token to somebody else, who then uploads malicious code?


Nothing, I'm afraid.

This particular attack is very tough to prevent, because:

  • there was no hacking/stealing, the package was given voluntarily. The usual protections like 2FA, code signing, etc. are to keep strangers away, but this was a breach of trust, not a breach of systems.

  • the malicious code was smart enough to activate only in a specific scenario, so it was harder to detect.

Previous discussions on the topic:


The only significant difference is that this attack was only present in the minified JS code, but not in the source code. The equivalent in Rust would be a binary that does not match the source. crates.io only distributes source, so that aspect could not happen here.

Ultimately, you are downloading arbitrary code from the internet, with all that entails.


However, source in crates is not formally connected to the git repos where people look at the code. Downloading and inspecting package tarballs is possible, but it's a rather obscure feature, so I wouldn't count on people actually seeing the code being compiled.

I hope projects like crev, mentioned in the thread earlier, will help surface the code and give more clarity about what code is being looked at and verified.


Looking at code created by a malicious programmer may not be as helpful as many believe. Ka-Ping Yee did an interesting exercise as part of his thesis (see page 148). He planted 3 bugs in a 100-line region of a single file and told the reviewers where to look. Some of the reviewers found the "easy" and "medium" bugs, but nobody found the "hard" one. That's after spending an aggregate of 20 reviewer-hours on those 100 lines. And, no, this wasn't some obscure C++ feature; the program was written in a subset of Python.


Vulnerability obfuscation is an orthogonal issue, and using it as an argument against improving the transparency of package contents is security nihilism. An obvious, accessible correspondence between repositories and published crate versions still greatly improves the transparency of packaging, and provides the least-surprise behavior: the code most likely to be looked at is also the code one is downloading.

I've been bitten several times (not from a security standpoint, but a debugging standpoint) by the published version on crates.io not quite matching up with the expected branch/tag/release on GitHub. For good reason, people are much more likely to inspect a package's code on a familiar repository host than in the tarball.

I recognize it's a difficult problem, since crates.io doesn't enforce anything about repository location (even though it uses GitHub accounts) or the existence of published versions in the repo. But at least something like linking from the crates.io version page to the corresponding commit-hash page, when (a) cargo publish is run in a repo with a clean state and (b) the provider has a known format (e.g., GitHub, GitLab), would be a big ergonomic improvement.


Also remember that git repositories can be force-pushed, so there's no long-term guarantee that published code will match the repo, unless this is continuously audited. You might still want a simple sanity check for normal developers, but bad actors could just hide their actions.


No attempt at security nihilism from me. Even a malicious programmer can get caught via code inspection. My point is simply that detecting purposely planted vulnerabilities is harder than many people think.


You can re-push a particular branch or tag, but it is not feasible to forge a commit hash from arbitrary contents.


Does GitHub guarantee that it will keep that dangling commit hash around? Or do you think that it's enough that you could tell that the published commit disappeared?

No, it will not keep the dangling commit hash around. But if you clicked a link to a particular commit hash, you would either get the original contents of that hash, or a not-found error. You would not be shown something different at different times. Also, the GitHub UI and API let you see who signed a particular commit, which GitHub does automatically for you if you squash-merge through the UI.

crates.io isn't substantially technically different from npm in this case. That makes it all the more important to keep crates.io different in terms of social norms.

What I find the most distressing about the latest npm incident is how many people in that community (I believe Gary Bernhardt when he says some of the commenters are other npm module authors) respond that giving update control to a previously unknown person was either OK per se or that it was OK because the recipients of the malware hadn't paid the original module author anything.

Debian has a package manager that's different in the sense that the upstream authors aren't (generally) the ones who upload packages to apt repos. However, as a matter of social norms, at least in my experience of talking with Debian developers and listening to Debian developers talk with each other in social settings, Debian is also very different. There seems to be appropriate awareness of being able to publish software in a manner that results in other people's computers automatically running it being a serious thing.

So if something like this happens with crates.io, the positioning in terms of social norms matters, and we need clear signals from the Rust community at large that cavalier treatment of the power to publish updates is not OK (i.e. we should align the social norms closer to Debian than npm when it comes to taking seriously the power to publish updates; to be clear, I'm not advocating for Debian frequency of updates).

In particular, I think as a community, we need to make it clear that:

  • Whether or not the software you provided to someone required a fee in exchange (likely not, in the case of crates.io), if you no longer wish to provide new versions, that doesn't mean it's OK to give away the power to provide automated (as in cargo update) updates in a cavalier manner.
  • If someone shows up and offers you money to buy your crate and the power to upload new versions to crates.io under the same name, you should decline. (Even if you could use the money.)
  • If you no longer wish to maintain a crate you wrote but someone else still needs the crate and would like to take over maintainership, you should accept only if you have previously seen the person participate in the Rust community in a way that suggests they have legitimate goals and aren't seeking the update power of your crate for malware purposes.
  • If in doubt, transfer the crate update control to rust-unofficial.
  • If you can't be bothered to do anything at all, literally do nothing (i.e. it's better not to transfer update control at all than to transfer it in a cavalier manner).

Now, for the third point, nihilists will say that anyone, even if they've published useful crates on their own previously and otherwise participated in the community, could be a secret agent playing a long game, so it's hard to articulate exact criteria for when you should believe that the person you are transferring update control to is wishing to take on the maintenance of a crate for legitimate reasons, but clearly it should not be OK to transfer update control to someone whose track record you know nothing about.

On the side of publishing crates, be considerate about which transitive dependencies you expose to the people who depend on your crate: try to make a common-sense assessment of your dependencies based on effort indicators (code and documentation quality), to rule out low-effort bad actors, or on the community reputation of the authors. (Obviously, this wouldn't have protected against the npm case at hand, since the original module author, as I understand it, had a track record of publishing a ton of legitimate modules; that just didn't imply an appropriate attitude to update-control changes.)


I think there should be an easy way to see the exact source code of your dependencies. The easier it is to do, the more people will spend their time looking at the code.

For example, if I run cargo update on my project and I see that one of my dependencies has bumped a minor version, I'd like to quickly look at the diff between the current version and the new one. It would take me 5 minutes, but would probably prevent the most obvious exploits. Unfortunately there is no such tool today, so I often don't look at the source code at all, which is obviously bad.

docs.rs has a feature to see the source code as well, e.g. (first crate it showed me) rua 0.9.11 -
which is the source the docs were built from, and because docs.rs pulls the crates from crates.io, it should be good.


Not sure I agree; the way I read these comments is that people are saying this is OK because the original module author was giving this away for free, and that if you were using the code and wanted additional guarantees, you should've paid the original module author.

All your points about community culture still stand of course.

The repository link on crates.io still goes straight to GitHub/GitLab, so it can be a bit misleading. Popular libraries and browser extensions seem to be targeted more frequently now; it's fair to say this attack vector isn't going away.
A simple diff viewer for updates, and a notice when packages change ownership, would be a nice start.
Yet another reason to avoid wildcard dependencies too.


That's a great paper. I'm hopeful: perhaps the result was so bad because assessing whether votes are counted accurately is a harder task than spotting rogue crypto-wallet-stealing code.

It'd be nice to see more underhanded Rust, to learn the best ways bugs can be hidden in Rust. For example, Path::new("root/").join(file_name) is gold for someone trying to steal credentials: if file_name is an absolute path, join silently discards the base directory.


I think that line of reasoning is bad, and I hope the Rust community strongly rejects the notion that it's acceptable to let something malicious be sent to people who accepted a gift previously.

This is substantially different from an entitlement to updates after receiving a gift, even updates addressing unintentional but security-relevant bugs. Saying that you need to pay if you want updates the author doesn't otherwise feel like publishing is totally different from saying that you should have paid for something offered free of charge if you wanted not to be supplied with malware later. (It really bothers me that there is even a debate about this.)


I think the Rust response to this event should be focused on two main areas:

  1. Develop a "blessed" way to sandbox builds, ideally incorporated into rustup. Frankly, when you think about the ease with which dependencies get pulled in, and the size of the dependency tree for many relatively small projects, it's a miracle we don't (yet?) have a huge fiasco on our hands with a malicious build script somewhere in an upstream crate, or something even more clever and better hidden. Such sandboxing will not protect application users, but at the very least it will protect developers.
  2. Infrastructure for code review and cargo integration, i.e. a build system that will use only crates vetted by a reviewing body (or bodies) selected by you. There have been several discussions and proposals here, and I think this event just highlights the importance of this feature. After all, it's just a matter of time before we get a similar event. And if Rust does not have some solution to this problem (at least in development) by then, I'm afraid it could be a painful blow to Rust's reputation.

I guess we should appreciate a certain underappreciated component of our programming ecosystem.


I just want to verbally agree with @hsivonen's post - this issue is 98% a social problem and 2% a technology problem. As long as crates.io is a fairly democratic platform (which it should be - that's what makes the Rust ecosystem great!), we need to establish proper behavior by convention and socialization.