What makes a trustworthy crate maintainer?

Quite recently a paper was published discussing various threats to the npm package manager. One thing it mentioned was the trustworthiness of package maintainers: maintainers who are trusted generally increase the overall security of their own packages and of those that depend on them.

I’m interested in what the Rust community thinks has a positive impact on the trustworthiness of a maintainer. For example, I don’t use my name when publishing crates, due to privacy concerns. I can see, though, how that could be interpreted as me not wanting to take full ownership of and responsibility for the code I publish, and thereby make me seem less trustworthy.

What do you think are good/bad signs of trustworthy package maintainers?


Here are a few metrics I think are useful:

  1. Reputation in larger community (do other reputable people also consider this person reputable; chain of trust)
  2. Real World Identity (corporate or personal; maintainer has a stake in how their persona is viewed)
  3. Project Management (does maintainer respond to issues, publish patch notes, follow semver)

I’ve tried to compute “trust” automatically for the crates ecosystem based on assumptions that:

  • crate owners trust crate co-owners,
  • crate authors trust authors of their crate’s dependencies,
  • authors that belong to certain groups, like Mozilla or rust-lang developers, are more trusted,
  • and that the more popular a crate is, the more trusted it is in general by the community.

and then spreading that trust PageRank-style throughout the graph. The results are here:
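For readers curious what “spreading trust PageRank-style” can look like, here’s a minimal sketch of the idea (my own toy version, not the actual analysis; the `spread_trust` function, damping factor, and the alice/bob/carol graph are all made up for illustration): every node keeps a damped share of its seed trust, and repeatedly passes the rest along its outgoing trust edges.

```rust
use std::collections::HashMap;

/// One PageRank-style pass: every node keeps `1 - damping` of its seed
/// trust, and each node passes `damping` times its current score to the
/// people it trusts, split evenly among them.
fn spread_trust<'a>(
    base: &HashMap<&'a str, f64>,            // seed trust (org membership, popularity, ...)
    trusts: &HashMap<&'a str, Vec<&'a str>>, // edges: who trusts whom
    iterations: usize,
    damping: f64,
) -> HashMap<String, f64> {
    let mut score: HashMap<&str, f64> = base.clone();
    for _ in 0..iterations {
        // Start each round from the damped seed trust...
        let mut next: HashMap<&str, f64> = base
            .iter()
            .map(|(k, v)| (*k, v * (1.0 - damping)))
            .collect();
        // ...then redistribute the current scores along the trust edges.
        for (truster, trustees) in trusts {
            let share =
                score.get(truster).copied().unwrap_or(0.0) * damping / trustees.len() as f64;
            for trustee in trustees {
                *next.entry(*trustee).or_insert(0.0) += share;
            }
        }
        score = next;
    }
    score.into_iter().map(|(k, v)| (k.to_string(), v)).collect()
}

fn main() {
    // Toy graph: alice is seeded with trust (say, org membership) and
    // depends on bob's crate; bob depends on carol's crate.
    let base: HashMap<&str, f64> =
        [("alice", 1.0), ("bob", 0.0), ("carol", 0.0)].into_iter().collect();
    let trusts: HashMap<&str, Vec<&str>> =
        [("alice", vec!["bob"]), ("bob", vec!["carol"])].into_iter().collect();
    for (who, s) in spread_trust(&base, &trusts, 20, 0.85) {
        println!("{who}: {s:.3}");
    }
}
```

After a few iterations trust decays along the dependency chain (alice > bob > carol), which matches the intuition that you implicitly extend some, but not all, of your trust to your dependencies’ dependencies.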

Some observations:

  • there’s likely to be a significant difference between what people say/think about trust, and what they do. You might not know authors of Rust’s most popular crates, but you can’t use Rust much without implicitly trusting them.

  • My “trust” analysis above is rather naive. Actual trust is contextual — you’ll think about trust very differently when you write a hobby project vs when you write firmware for a jet fighter.


Trust is indeed contextual. I trust my bank with my money but not with my children. I trust my sister with my children, but not with my money.


> Actual trust is contextual — you’ll think about trust very differently when you write a hobby project vs when you write firmware for a jet fighter.

> Trust is indeed contextual. I trust my bank with my money but not with my children. I trust my sister with my children, but not with my money.

I do certainly agree that trust is contextual. Maybe I didn’t scope the question correctly: what I meant was things that indicate to you a trustworthy crate maintainer, assuming you wish to use or implicitly depend on that maintainer’s code. I wasn’t thinking in terms as specific as safety-critical systems, since I expected some things would apply to most, if not all, contexts.

These sound a lot more like “reliance”. Which is important but not quite the same thing as explicit trust.


I'd like to bump this. I'm designing author (crate maintainer) profile pages for https://lib.rs.

When users are evaluating whether to use a crate or not, I assume they will want to check who the author of the crate is, and decide whether they can "trust" the author (I mean trust here in a very broad sense).

So when deciding "would I run and depend on executable code written by this person", what information do you look for?


Trust, but verify.

I'll bite.

In no particular order:

  • What does their activity look like? (e.g., Are they somewhat responsive to issues/PRs?)
  • What is their policy on including new dependencies? (e.g., Are they vetted beforehand? Is the weight of a new dependency balanced against its value?)
  • Do they have 2FA enabled on GitHub?
  • What is their policy with respect to bringing in new maintainers, or otherwise transferring ownership of the repository to someone else?
  • What is their policy on the use of unsafe?
  • How many other people trust them? And of them, how many are those that I already trust?

I think that's all I've got for now. Some of these may be difficult to incorporate on lib.rs. And in particular, when I say "policy" above, I probably mean "implicit policy" or "de facto policy," since almost nobody explicitly states these things. Instead, you kind of have to discern them by looking at the code, commits and issue tracker. Eventually you get a feel for it.


If you actually verify, don't forget to share your findings via cargo-crev :slight_smile:


Also, is there any existing system (online, app/tool, or offline?) that does this sort of thing well?

I mostly look for organizations that they're connected to. If they're connected with an org that I recognize, whether something small-time like Transmission or big-time like Microsoft, it means they've got something to lose if they go rogue.


This seems like something that could be incorporated into @dpc’s crev, and in turn used by the different crate directories.

As a member of the Amethyst project and an active participant in Rust gamedev, I’ve closely followed hundreds of game crates and talked directly to most of their respective maintainers. In other words, we’ve established a modicum of mutual trust.

crev as I understand it is focused on reviews, but I wonder if it could also support a more simplified ledger of trust. All I really want to do is add a bunch of repo links to a file that contains all Amethyst-trusted crates. This file would of course link all our dependencies, but it’d also contain other projects we don’t depend on but interact with on a regular basis, such as ggez, iced, Veloren etc.
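For what it’s worth, a ledger like that could start out as nothing more than one repo URL per line. Here’s a minimal sketch, assuming a hypothetical plain-text format (the file layout, the example URLs, and the `parse_ledger` function are made up for illustration; this is not a format crev defines):

```rust
use std::collections::HashSet;

/// Parse a hypothetical plain-text trust ledger: one repository URL per
/// line; blank lines and `#` comment lines are ignored.
fn parse_ledger(contents: &str) -> HashSet<&str> {
    contents
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .collect()
}

fn main() {
    // Made-up ledger in the spirit described above.
    let ledger = "\
# crates we depend on
https://github.com/serde-rs/serde

# projects we don't depend on but interact with and trust
https://github.com/ggez/ggez
https://github.com/hecrj/iced
";
    let trusted = parse_ledger(ledger);
    println!("{} trusted repos", trusted.len());
}
```

Even something this simple would let other projects consume the set with a few lines of code, and it could later be upgraded to signed crev proofs without changing the basic idea.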

To me, some big factors when adding a dependency are

  • Do they have a reputation in the community? I'm more inclined to trust you if I've seen you comment on the user forums or write blog posts because I get a feel for your level of professionalism and the quality of your work
  • Has the crate owner already worked on something I believe to be of a high quality (regex, serde, etc.)?
  • If I skim through their code (sometimes I want to know how it works, not just the public API), does the code look like a big tangle of spaghetti?
  • What is their attitude towards soundness and use of unsafe? For an exaggerated example of this, see the drama around actix-web last month.

I guess a big part of my decision is based on perceived code quality. Even my avoiding authors who reach for unsafe unnecessarily comes down to not wanting my code to crash unnecessarily.

Unfortunately, as "How to Build an NPM Worm" demonstrated, detecting dodgy maintainers isn't as easy as grepping their code for unsafe.


I think it has that? Example: https://github.com/BurntSushi/crev-proofs/blob/master/VylyTuk8CMGqIxgHixWaqfiUn3xZyzOA1wFrQ0sR1As/trust/2019-08.proof.crev

You can also do a thoroughness: none review of something if you trust it despite not actually reviewing it.


Here's my attempt at this:


  • It highlights when a user is a member of the rust-lang org.

  • It shows how long a user has been registered. It's relatively easy to register a fake profile, but you can't fake time.

  • It shows GitHub org memberships. These usually show who the person works for, and which orgs trust them.

  • More web of trust from co-ownership of crates.

  • Created crates are sorted by most recently updated, so you can see whether they are still maintained or abandoned.

  • Co-owned crates are sorted by popularity, so you can see the biggest crate the user is responsible for.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.