Supply chain attack scenarios

I'm trying to enumerate all ways in which using Rust and Cargo dependencies could be dangerous, i.e. lead to execution of malicious code.

We often have discussions about improving security of dependencies, but different people have different risks in mind. The topic is very broad, and no single security solution can solve all the problems.

I think for the sake of such discussions it would be good to have a shared list of all known dangers, so we don't talk past each other: "We need X! But it doesn't prevent Y!". If someone proposes a new solution like "developers should stop viruses by washing their hands!" or "crates-io should be a blockchain!" we could systematically evaluate which attacks that would stop, and which attacks would be better served by a different solution.

My first draft of possible horror scenarios is below. Can you think of other situations that aren't on the list?

I'm interested in a very broad view: deliberate attacks as well as accidental vulnerabilities, from attacks by bored kids to well-motivated criminal enterprises or state agencies.

I don't want to discuss any particular solutions here. If you have a fantastic solution in mind, please don't propose it here. Only list the security issues it solves.

  • crates.io infrastructure hacked
    • main API servers breached — attacker may bypass authentication and publish any crate as any user
    • XSS vulnerability on crates.io — attacker may automatically accept invitations or steal login tokens of the site's visitors
    • SQL injection — might leak GitHub auth tokens (fortunately, the API tokens are hashed quite well)
    • index repository hacked — attacker can change checksums, and replace config.json to redirect tarballs to an attacker-controlled mirror
  • rustup infrastructure hacked — attacker can distribute a malicious version of Rust itself
  • developer's own machine compromised
    • in case it's leaking files, but without arbitrary code execution — e.g. a path traversal vulnerability may allow an attacker to read sensitive user-owned files
    • arbitrary code execution — attacker can do whatever they want, including hijacking cargo publish or git push commands
  • developer's own auth token leaked — attacker gets hold of an auth token that allows publishing crates
  • developer's GitHub account hacked — attacker takes control of a dev's account, e.g. by guessing the password, using a leaked token or cookie, a vulnerability in GitHub itself, etc.
    • when the developer can manage GitHub teams that own crates — attacker can add themselves to those teams
    • when the hack enables logging in to crates.io — attacker may make new API tokens, send ownership invites to themselves, kick other owners out
  • attacker taking control of CI that is set up to publish crates — attacker may exploit the CI configuration (e.g. if it gives secrets to code in pull requests, or runs code of a malicious GitHub Action) or hijack the CI account
  • reputable crate that is a dependency of other crates comes under new management
    • legitimate crate given/sold to someone who turns out to be malicious
    • dispute between co-owners ends in an ugly fight — e.g. a disgruntled co-owner may decide to sabotage or destroy the project
  • bad actor publishing an intentionally malicious crate — no exploit, just hoping someone will find it and install it voluntarily
    • with obvious malware — in case the code attacks immediately and directly
    • with an obfuscated backdoor/hidden malware — uses obfuscated code to hide its intentions; the public repo may contain different code
    • delayed attack — start by publishing a clean, useful crate, and add malware only after victims start using it
  • typosquatting, bitsquatting, homographs, confusable names — malicious crate published under a name that is confusingly similar to other, legitimate crates
  • malicious dependency attacks via:
    • proc-macro
    • regular macro
    • library function
    • static constructors or unmangled symbol overrides that run automatically, possibly before main
    • embedding secrets via include_str!("/etc/passwd") or env!("SECRET_TOKEN") and relying on the executable being published by its owner
  • exploitable vulnerability in a legitimate crate
    • by accident, written by crate authors
      • actually-unsafe use of unsafe features
      • other mistake, such as weak crypto, trusting unchecked user inputs, logic errors, default passwords
      • build script downloads source code insecurely (e.g. over an unencrypted connection, or the remote server is hacked)
    • intentional bug that slipped through a pull request
    • due to miscompilation — e.g. a soundness bug in Rust, or an invalid optimization in LLVM
    • due to vulnerabilities in the operating system or non-Rust dependencies
  • machine on which you fetch dependencies has broken TLS (e.g. because of a buggy corporate MITM proxy or because you're the #1 enemy of a state spy agency)
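To make the env!/include_str! vector in the list above concrete, here is a hedged, minimal sketch: a malicious dependency can read the *build machine's* environment or files at compile time and bake the result into the produced binary, relying on the owner later publishing that binary. `SECRET_TOKEN` is a made-up variable name, and `option_env!` is used here only so the sketch compiles whether or not the variable exists.

```rust
// The read happens when the crate is COMPILED, not when the program runs:
// option_env! (like env! and include_str!) is expanded at compile time.
const EMBEDDED: Option<&str> = option_env!("SECRET_TOKEN");

// What hypothetical attacker code might do with the embedded value at run time.
fn payload(embedded: Option<&str>) -> String {
    match embedded {
        Some(token) => format!("exfiltrate:{token}"),
        None => "nothing was present on this build machine".to_string(),
    }
}

fn main() {
    println!("{}", payload(EMBEDDED));
}
```

Note that no code in the victim's binary ever "reads" the token at run time; it is already a string constant inside the executable, so no sandboxing of the running program helps.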

Under "typosquatting":

  • bitsquatting: choosing names that are likely to result from random bit flips in memory or in transmission
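The bitsquatting idea can be made concrete: enumerate every name that is a single bit flip away from a popular crate's name and still consists of valid crate-name characters. This is only an illustrative sketch; "serde" is used purely as an example of a popular name.

```rust
// Enumerate single-bit-flip neighbors of a crate name, keeping only those
// made of plausible crate-name characters (ASCII alphanumeric, '-', '_').
fn bitflip_neighbors(name: &str) -> Vec<String> {
    let bytes = name.as_bytes();
    let mut out = Vec::new();
    for i in 0..bytes.len() {
        for bit in 0..8 {
            let mut flipped = bytes.to_vec();
            flipped[i] ^= 1u8 << bit;
            if let Ok(s) = String::from_utf8(flipped) {
                if s.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_') {
                    out.push(s);
                }
            }
        }
    }
    out
}

fn main() {
    let squats = bitflip_neighbors("serde");
    // 's' (0x73) ^ 0x02 == 'q', so "qerde" is one bit flip away from "serde".
    assert!(squats.contains(&"qerde".to_string()));
    println!("{} valid-looking bit-flip neighbors of \"serde\"", squats.len());
}
```

A registry or auditing tool could run this kind of enumeration against its most-downloaded names and flag registrations that collide with the output.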
  • GitHub is hacked and the developer's code is targeted

What if a malicious actor somehow replaces the various sources from which one can download rustc/cargo/rustup with another program?


I guess this is just confusable names, but there are some implications for alternate registries that are documented in the Registries chapter of the Cargo Book.

The Cargo Book also lists some crate naming conventions that registries need to enforce in order to thwart attacks such as the IDN homograph attack. Registries need to do this themselves, because cargo itself is perfectly happy with such names:

$ cargo new а
Created binary (application) `а` package
$ cargo new a
Created binary (application) `a` package
$ ls
a а

If a registry fails in this regard, an attacker could gain some amount of cover for a malicious dependency, e.g. regeх using the Cyrillic letter Kha (х) in place of the Latin x.
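The point is easy to demonstrate in code: the Latin and Cyrillic letters render (near-)identically but are different Unicode code points, so to cargo they are simply different names. A short self-contained check:

```rust
// Latin "a" (U+0061) vs Cyrillic "а" (U+0430): visually confusable,
// byte-wise distinct. A registry must reject or normalize these itself.
const LATIN_A: &str = "a";
const CYRILLIC_A: &str = "а";

fn code_points(s: &str) -> Vec<u32> {
    s.chars().map(|c| c as u32).collect()
}

fn main() {
    assert_ne!(LATIN_A, CYRILLIC_A); // different names as far as cargo cares
    assert_eq!(code_points(LATIN_A), vec![0x61]);
    assert_eq!(code_points(CYRILLIC_A), vec![0x430]);
    // "regeх" with Cyrillic kha (U+0445) is likewise distinct from "regex":
    assert_ne!("regeх", "regex");
    println!("visually identical, byte-wise different");
}
```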


I read a post about an exploit in one of the other dependency tools.

  • When multiple registries are in use and crates are resolved by searching each registry in a priority order, an attacker can register a crate in a higher-priority registry under a name that is currently only used in a lower-priority registry. On the victim's next build, the resolver notices that the higher-priority registry now has a crate of that name and silently switches to the attacker's crate.
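Cargo's defense against this class of confusion is that dependencies from alternate registries must name their registry explicitly rather than being resolved by search order (see the Registries chapter of the Cargo Book). A sketch of what that looks like in a Cargo.toml; the crate and registry names here are made up:

```toml
[dependencies]
# The `registry` key pins where this dependency may come from; a same-named
# crate appearing on a different registry cannot silently take its place.
internal-utils = { version = "1.0", registry = "my-company-registry" }
```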
  • MITM/DNS attack, when you think you're downloading from crates.io but in fact you aren't
  • using compromised e.g. GitHub Actions in your CI
  • compromised compiler generating compromised code — I haven't heard of this one recently, but AFAIK big companies (and the ones that do government work) build their own toolchain from audited sources (using compilers that were audited the same way; enter the recursion).
  • (not really a supply chain attack) crates that use insecure defaults, e.g. default passwords, old/naive crypto, or an HTTP client on the server side which does automatic redirects

Similar to several above, but I don't think it's been mentioned specifically:

  • Build script that accesses the network (e.g. to download additional source code):
    • Attacker can MITM the connection if it is unencrypted or unauthenticated.
    • Attacker can hack and take over the server side.
    • Attacker can take over the server if the domain name expires.
    • Attacker can DoS the server.
    • The build might leak information to anyone who can observe the network traffic.
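One mitigation for the tampered-download cases above is pinning a known digest of the artifact, so a MITM'd or attacker-served file is rejected even if the transport is compromised. A minimal std-only sketch follows; the hash is FNV-1a (64-bit) purely because it fits in a few lines, and a real build script should use a cryptographic hash (e.g. via the sha2 crate) instead.

```rust
// FNV-1a, 64-bit: NOT cryptographic, used here only to keep the sketch
// dependency-free. The pinning pattern is the point, not the hash.
fn fnv1a(data: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf2_9ce4_8422_2325; // FNV offset basis
    for &byte in data {
        hash ^= byte as u64;
        hash = hash.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
    }
    hash
}

// Compare the downloaded bytes against a digest recorded at release time.
fn verify(artifact: &[u8], pinned: u64) -> bool {
    fnv1a(artifact) == pinned
}

fn main() {
    let genuine = b"pretend this is the downloaded source tarball";
    let pinned = fnv1a(genuine); // the author ships this constant with the crate

    assert!(verify(genuine, pinned));              // clean download passes
    assert!(!verify(b"tampered tarball", pinned)); // MITM'd download fails
    println!("ok");
}
```

Pinning does not help with the DoS or traffic-observation items, but it turns the first three server-side takeover scenarios into build failures instead of code execution.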

It's already happening: php.internals: Changes to Git commit workflow

