Announcing the `complexity` crate to calculate cognitive complexity of Rust code

Hey Rustaceans :wave:,

I would like to share a crate I have been working on recently that calculates the cognitive complexity of Rust code. It leverages the syn crate for syntax parsing and implements a scoring algorithm based on G. Ann Campbell's Cognitive Complexity white paper. Suggestions for adapting this algorithm specifically to Rust code are welcome!

Example usage

use complexity::Complexity;
use syn::{Expr, ItemFn, parse_quote};

Complexity of an expression

let expr: Expr = parse_quote! {
    for element in iterable { // +1
        if something {        // +2 (nesting = 1)
            do_something();
        }
    }
};
assert_eq!(expr.complexity(), 3);

Complexity of a function

let func: ItemFn = parse_quote! {
    fn sum_of_primes(max: u64) -> u64 {
        let mut total = 0;
        'outer: for i in 1..=max {   // +1
            for j in 2..i {          // +2 (nesting = 1)
                if i % j == 0 {      // +3 (nesting = 2)
                    continue 'outer; // +1
                }
            }
            total += i;
        }
        total
    }
};
assert_eq!(func.complexity(), 7);

See the README for more information:

Docs: complexity - Rust
Repository: GitHub - rossmacarthur/complexity: Calculate cognitive complexity of Rust code


Is this related to this clippy lint or this open issue?

I have seen the clippy lint and the open issue, as well as this comment, which explains why cognitive complexity is really hard to implement generally for all cases (and covers a lot more besides).

I created this crate for multiple reasons:

  • I was unhappy with clippy's implementation of cognitive complexity and with how useful it is in practice. Its algorithm is related to, but clearly different from, this one.
  • I still wanted an implementation of cognitive complexity, and felt that even though G. Ann Campbell's implementation wasn't very academic it could still be useful.
  • I wanted to be able to calculate cognitive complexity arbitrarily, not just assert that functions don't exceed a maximum, which on its own is not very useful tbh. I have a custom tool that checks cognitive complexity and makes sure there are unit / integration tests for functions that measure high on this crate's complexity measurement.

What are your thoughts on adding a function's generics to the complexity calculation? For example, fn foo<T: Debug>() might get a +1, whereas fn bar<I, S>(items: I) where I: IntoIterator<Item = S>, S: AsRef<str> might get +4 (one each for I, S, I: IntoIterator<Item = S>, and S: AsRef<str>).

In Rust, more than in any other language I've used, making code too generic can have a big impact on cognitive complexity. For example, have a look at the gfx::Resource trait or the std::ops::Add impl for adding two ndarrays. If I wanted to add two different ndarray::Arrays and hit a compile error, it'd be hard to figure out what the problem is due to the many levels of generics involved.


I would add 10 points (or whatever the scoring is) for every use of lifetime ticks.

Add 20 points for any use of async.

Macros go off the scale!


