Speed up crypto in Rust by contributing to and using crypto-bench


[tl;dr: This is an invitation to contribute to improving crypto in Rust by improving benchmarks.]

crypto-bench is a framework for measuring the performance of key crypto algorithms across a variety of implementations. The goal of the project is to accelerate the development of crypto libraries in Rust by helping library users and library developers understand crypto performance issues.

As a crypto library developer, I use crypto-bench frequently to make decisions about improvements to the implementation of ring. For example, I use it to measure whether a commit improves or regresses performance, which is particularly helpful when I’m replacing C or assembly language code with Rust code. (The ring project’s policy is that, while we favor Rust code over C and assembly language code, we won’t let performance-critical features regress in performance “just” to get the safety benefits of Rust; we generally require new Rust code to be as fast as, or faster than, the code it replaces.)

As a library developer, I also use it to discover which other libraries have done something better than ring does, so that I can use the same techniques to improve ring. I know the developers of Octavo and other crypto libraries have also used crypto-bench to guide performance improvements.

crypto-bench is also designed to help users of crypto libraries. It can help users decide which library to use because, by design, all the benchmarks are as close to apples-to-apples comparisons as possible. It can also help users measure the performance of their crypto libraries of choice over time. For example, it makes it easy to verify that an update to a library hasn’t regressed performance.

crypto-bench can also be useful to the rustc and libstd developers, by helping them see areas where the compiler and the standard library need to improve in order for Rust code to be competitive with (or better than) equivalent C and/or assembly-language code. (My ultimate goal is for code in safer languages like Rust to completely replace hand-written assembly and C code.)

The crypto-bench README.md has a table showing which benchmarks have been written and which still need to be written. Note that the table’s scope is limited to the things I think are most urgent, but I’m also very open to adding benchmarks of other generally-useful, immediately-useful, and not-obsolete crypto primitives (BLAKE2 and Argon2 come to mind).

It is pretty easy to add new benchmarks to crypto-bench by copying the form of the existing benchmarks. I think it would be great for this project to get more contributors, particularly contributors writing benchmarks for crypto libraries other than ring. (I usually write the ring benchmarks soon after adding the feature to benchmark, although I’ve skipped some like HMAC and random byte generation.)
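The existing benchmarks use Rust’s built-in (nightly-only) `#[bench]`/`Bencher` harness, so each one mostly boils down to timing one crypto operation over a fixed-size input. As a rough, stable-Rust sketch of what such a benchmark measures (the `operation` function here is a hypothetical stand-in for a real crypto call such as a digest, and the input sizes are illustrative):

```rust
use std::time::Instant;

// Hypothetical stand-in for the operation under test; a real crypto-bench
// benchmark would call into a crypto crate instead.
fn operation(input: &[u8]) -> u64 {
    input.iter().map(|&b| b as u64).sum()
}

// Time `operation` over one input size and report (ns/iter, MB/s),
// roughly the two numbers `cargo bench` prints for a `#[bench]` function.
fn bench(input_len: usize, iters: u32) -> (f64, f64) {
    let input = vec![0u8; input_len];
    let mut sink = 0u64;
    let start = Instant::now();
    for _ in 0..iters {
        sink = sink.wrapping_add(operation(&input));
    }
    let ns_per_iter = start.elapsed().as_nanos() as f64 / iters as f64;
    let mb_per_sec = (input_len as f64 / 1_000_000.0) / (ns_per_iter / 1_000_000_000.0);
    // Keep `sink` observable so the optimizer can't discard the work.
    assert!(sink < u64::MAX);
    (ns_per_iter, mb_per_sec)
}

fn main() {
    for &len in &[16usize, 256, 1000, 8192] {
        let (ns, mbs) = bench(len, 10_000);
        println!("{:>6} bytes: {:>10.1} ns/iter = {:>8.1} MB/s", len, ns, mbs);
    }
}
```

In the real benchmarks, `Bencher::iter` handles the timing loop and `b.bytes` makes `cargo bench` compute the MB/s figure itself; this sketch just shows the shape of the measurement.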

Also, more generally, it would be great to see some tools for scraping the textual output of cargo bench and transforming it into a format that can be charted and/or tabulated in useful ways.

I would love to see more contributions to this project. Let me know if you have any questions.


This may sound stupid, but having some sample results somewhere would definitely attract more people.


No, that’s not stupid at all! I definitely want to publish results. Here are some of the things blocking that:

  • We need to write more code to do the actual benchmarks, so we can start gathering more (and more interesting) data.

  • cargo bench, and more specifically ./cargo_all bench in crypto-bench, spits out results in a format that looks like this: https://gist.github.com/briansmith/9b28c6033f3f16626e8bed097da3fbd1. This format is OK for some purposes, but really we need the results to be displayed in tables and charts. For example, for SHA-256 we should have a multi-bar graph where each group is a different input size and each bar in a group is an implementation (ring, rust-crypto, sodiumoxide, etc.). Similarly, if we are running benchmarks over time, it would be great to plot them over time like http://arewefastyet.com/ does. But we don’t have any tools at all for converting cargo bench output into tables and charts. This would be a huge first step toward making things easy to publish, and that’s exactly the kind of thing I’m hoping somebody will make.

  • The specifics of the hardware you pick for the benchmarks matter a lot. The operating system version, whether you are running in a VM, and many other factors also matter a lot for these kinds of benchmarks, more than you might expect. This is why I’ve focused on trying to make it easy to run the benchmarks yourself on your own hardware. I am planning to publish some results for some “standard” hardware like Raspberry Pi 2, Raspberry Pi 3, and some other platforms where I can guarantee a stable benchmarking platform. Also, maybe a Skylake Xeon server.


We should probably make cargo bench optionally output the data in JSON.


There’s actually an issue in the rust-lang/rust or rust-lang/cargo issue tracker for that already.

But I don’t think that’s a good solution for us. The Rust team seems to want to minimize the effort they spend on the cargo test and cargo bench features, which is why I don’t think cargo bench will be stabilized any time soon.

I think, instead, we should just make a tool that scrapes the cargo bench output and converts it into whatever JSON-based format the various open source JavaScript charting libraries can understand. Actually, it would be cool if the conversion from text -> JSON could be done in JavaScript too, because then we could make a single-page web app that takes the output as text and creates graphs from it.
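As a sketch of what the core of such a scraper could look like in Rust, using only the standard library: the line format assumed here is the usual libtest output (`test <name> ... bench: <n> ns/iter (+/- <d>) = <m> MB/s`), and the hand-rolled JSON record in `main` is just for illustration; a real tool would use a JSON library.

```rust
#[derive(Debug, PartialEq)]
struct BenchResult {
    name: String,
    ns_per_iter: u64,
    mb_per_sec: Option<u64>, // only present when the benchmark sets b.bytes
}

/// Parse one line of `cargo bench` output, e.g.:
///   test sha256::_1000 ... bench:       1,234 ns/iter (+/- 56) = 810 MB/s
/// Returns None for non-benchmark lines (headers, summaries, blank lines).
fn parse_bench_line(line: &str) -> Option<BenchResult> {
    let rest = line.strip_prefix("test ")?;
    let (name, rest) = rest.split_once(" ... bench:")?;
    let (ns, rest) = rest.trim_start().split_once(" ns/iter")?;
    let ns_per_iter = ns.replace(',', "").parse().ok()?;
    let mb_per_sec = rest
        .split_once("= ")
        .and_then(|(_, m)| m.trim().trim_end_matches(" MB/s").replace(',', "").parse().ok());
    Some(BenchResult { name: name.trim().to_string(), ns_per_iter, mb_per_sec })
}

fn main() {
    let line = "test sha256::_1000 ... bench:       1,234 ns/iter (+/- 56) = 810 MB/s";
    if let Some(r) = parse_bench_line(line) {
        // Emit a minimal JSON record per benchmark line.
        println!(
            "{{\"name\":\"{}\",\"ns_per_iter\":{},\"mb_per_sec\":{}}}",
            r.name,
            r.ns_per_iter,
            r.mb_per_sec.map_or("null".to_string(), |m| m.to_string())
        );
    }
}
```

A charting front end could then consume one such JSON record per benchmark line.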


Another approach could be to fork https://crates.io/crates/rustc-test and change the output formatting there.


Nice! I would definitely be open to using that. Do I understand correctly that it also works with Rust Stable and Rust Beta? I just need to add a dev-dependency on rustc-test to my Cargo.toml and then everything will just work when I run “cargo bench”? (This would be good to add to the rustc-test documentation.)

Still, if you look at the example input, one has to use regular expressions or similar anyway to filter out irrelevant lines and to extract the library names. So I’m not sure switching to JSON output buys us much, and it might actually make things worse. (Keep in mind that not all of these crypto libraries can be linked into one executable, because they sometimes define clashing symbols.)


Yes, it works on Beta and Stable, and yes, a dev-dependency will make Cargo “hide” the test crate distributed with rustc (since the crate name in the rustc-test package is also test).
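In other words, the only change needed should be something like this (the version number here is illustrative; check crates.io for the current one):

```toml
# Cargo.toml: pull in rustc-test for benchmarking on stable/beta.
# The package is named `rustc-test`, but the crate it exports is named
# `test`, so it shadows the built-in crate and `extern crate test;`
# in the benchmark files keeps working unchanged.
[dev-dependencies]
rustc-test = "0.1"
```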



Since I posted the initial invitation, I got a pull request adding sodiumoxide to crypto-bench from John Heitmann, which got merged as https://github.com/briansmith/crypto-bench/commit/2ba56bc5b2262bb2f07ec38905b3244357f39a3d. Thanks. Awesome work!

Also since then, I’ve added X25519 and Ed25519 signing benchmarks for ring and rust-crypto, which may be useful for people who are interested in writing X25519 and Ed25519 benchmarks for other crypto libraries.


You might be interested in jq for extracting/transforming JSON data from the CLI: https://stedolan.github.io/jq/.
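For example, once the scraped results are in JSON (the record below is hypothetical, just to show the shape), jq can slice out whatever a charting or tabulating tool needs:

```shell
# Hypothetical JSON records produced by a cargo-bench scraper;
# jq pulls out the benchmark name and throughput for each entry.
echo '[{"name":"sha256::_1000","ns_per_iter":1234,"mb_per_sec":810}]' |
  jq -r '.[] | "\(.name) \(.mb_per_sec) MB/s"'
```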