RF signal processing in Rust

Hi everyone!

I wonder if you have any recommendations for crates or frameworks for signal processing in Rust (audio and radio frequency), in particular for complex I/Q streams. This would include tasks like:

And possibly things that build on top of it, like:

And/or coding:

I'm willing to do many things on my own, but at least a good FFT library (and maybe a resampling library) might be helpful as a starting point.

Try rustfft.


For interfacing RF hardware, I will likely use the soapysdr crate.

My experiences with SoapySDR have been mixed in the past (weird behavior when initializing certain hardware, unclear semantics of certain API calls, etc.), but I don't think there are many alternatives if you want to be somewhat hardware independent.

Speaking of hardware, what's a good way to access audio in/out hardware (microphone, line-out) from Rust? My focus is Linux/BSD, but a platform independent way would be nice too.

I think you're looking for this:

I've seen this used in my dependencies before, and it makes all the right noises (pun intended) in the readme, but I only found it through a quick search for audio, so feel free to look further!


I haven't tried it myself yet so I can't attest to its completeness, but FutureSDR is aiming to be a general framework like GNU Radio but in Rust.


Thank you, I will take a look. I wrote a short stub, and it seems to compile on FreeBSD too. :smiley:

That looks promising, and it might be a project to contribute to if/when I implement some blocks myself. I also appreciate that it's licensed under the Apache license rather than following GNU's model.

H₂CO₃ mentioned rustfft, and its author has posted here about it before, which you might find interesting:

I haven't looked deeper into FutureSDR yet, but I was able to successfully receive radio transmissions (commercial FM broadcast and amateur radio) using these components:

I used tokio to make the overall processing asynchronous. I had to use tokio::task::spawn_blocking to use the blocking interface of soapysdr with tokio's async runtime. It looks something like this:

let dev = soapysdr::Device::new("").unwrap();
dev.set_frequency(Rx, 0, 433.5e6, "").unwrap();
dev.set_sample_rate(Rx, 0, 1024000.0).unwrap();
dev.set_bandwidth(Rx, 0, 1024000.0).unwrap();

let mut rx = dev.rx_stream::<Sample>(&[0]).unwrap();
let mtu = rx.mtu().unwrap();

let (rx_rf_send, rx_rf_recv) = channel::<Sample>(queue);
let join_handle = spawn_blocking(move || {
    let mut buf_pool = ChunkBufPool::<Sample>::new();
    loop {
        let mut buffer = buf_pool.get();
        buffer.resize_with(mtu, Default::default);
        let count = rx.read(&[&mut buffer], 1000000).unwrap();
        buffer.truncate(count);
        /* … hand the filled chunk over through rx_rf_send … */
    }
});

I then connect several futures (which get spawned) with some asynchronous channels:

let (rx_base_send, rx_base_recv) = channel::<Sample>(queue);
spawn(blocks::freq_shift(rx_rf_recv, rx_base_send, -75, 2 * 1024));

let (rx_down_send, rx_down_recv) = channel::<Sample>(queue);
spawn(blocks::downsample( // block name is a guess; the call was elided in the original
    rx_base_recv,
    rx_down_send,
    blocks::DownsampleOpts {
        chunk_size: 4096,
        /* … */
    },
));

/* … */

cpal requires me to provide a callback which writes the audio data into a buffer. Since that callback is invoked by a thread that I don't control, I used tokio::runtime::Handle::block_on to be able to await new data:

let rt = tokio::runtime::Handle::current();
/* … */
let host = cpal::default_host();
let device = host
    .default_output_device()
    .expect("no output device available");
let supported_ranges = device
    .supported_output_configs()
    .expect("no supported audio config");
let range = supported_ranges
    .filter(|range| {
        range.channels() == 1
            && range.min_sample_rate().0 <= 48000
            && range.max_sample_rate().0 >= 48000
            && range.sample_format() == cpal::SampleFormat::F32
    })
    .next()
    .expect("no suitable audio config found");
let supported_config = range.with_sample_rate(cpal::SampleRate(48000));
let mut buffer_size = 2 * 4096;
match supported_config.buffer_size() {
    &cpal::SupportedBufferSize::Range { min, max } => {
        buffer_size = buffer_size.min(max).max(min)
    }
    &cpal::SupportedBufferSize::Unknown => (),
}
let config = cpal::StreamConfig {
    channels: 1,
    sample_rate: cpal::SampleRate(48000),
    buffer_size: cpal::BufferSize::Fixed(buffer_size),
};
let err_fn = |err| eprintln!("an error occurred on the output audio stream: {}", err);
/* … */
let write_audio = move |data: &mut [f32], _: &cpal::OutputCallbackInfo| {
    for sample in data.iter_mut() {
        // method `recv_realtime` is async, i.e. it returns a future
        /* … */ rt.block_on(rx_audio_down_recv.recv_realtime(0)) /* … */
        *sample = /* … */;
        /* … */
    }
};
let stream = device
    .build_output_stream(&config, write_audio, err_fn)
    .unwrap();

It's still work in progress and a bit ugly, but I'm happy it works. I'm especially happy that the audio delay is low, which matters for realtime radio applications. To keep it low, I manually set a small buffer (it might be good to test automatically how small it can be without causing underflows), and I monitor the length of the last tokio::sync::broadcast::Receiver's backlog and discard chunks if there is congestion (which can happen because the time bases of the receiver and the audio device are not exactly synchronous).

So cpal and rustfft do fine!

The only thing I'm missing in rustfft is some specialized transforms, e.g. for real-valued signals or for chunks where half of the data is zero. I currently work almost entirely in the complex domain, which makes the code a bit easier to keep an overview of, but it might come with some unnecessary overhead when the imaginary part is known to be zero.

P.S.: I tested this workflow on FreeBSD and with this SDR stick, but also want to try out the LimeSDR Mini to be able to transmit (the RTL SDR can only receive). Haven't tested this on Windows or Mac yet.

microfft might work for optimizing on real-valued signals.


I wrote a small benchmark to compare rustfft (6.0.1) and microfft (0.5.0).

rustfft = "6.0.1"
microfft = "0.5.0"
rand = "0.8.5"
num-complex = "0.4.2"

use num_complex::Complex32;
use rand::{
    distributions::{self, Distribution as _},
    thread_rng, Rng,
};

use std::f32::consts::TAU;
use std::time::Instant;

/// Standard normal complex sample (Box–Muller transform)
fn random_complex<R>(rng: &mut R) -> Complex32
where
    R: ?Sized + Rng,
{
    let u1: f32 = distributions::OpenClosed01.sample(rng);
    let u2: f32 = distributions::Standard.sample(rng);
    let abs = (-2.0 * u1.ln()).sqrt();
    let (b, a) = (u2 * TAU).sin_cos();
    Complex32 {
        re: abs * a,
        im: abs * b,
    }
}

fn new_test_vector(len: usize) -> Vec<Complex32> {
    let mut rng = thread_rng();
    (0..len).map(|_| random_complex(&mut rng)).collect()
}

fn new_test_vectors(len: usize, count: usize) -> Vec<Vec<Complex32>> {
    (0..count).map(|_| new_test_vector(len)).collect()
}

fn benchmark<F>(name: &str, func: F)
where
    F: FnOnce(),
{
    let start = Instant::now();
    func();
    let duration = Instant::now().duration_since(start);
    println!("{}: {} ms", name, duration.as_millis());
}

fn main() {
    let nvecs = 10000;
    let rounds = 10;
    {
        let mut vecs = new_test_vectors(4096, nvecs);
        let fft = rustfft::FftPlanner::new().plan_fft_forward(4096);
        benchmark("rustfft(4096)", || {
            for _ in 0..rounds {
                for vec in vecs.iter_mut() {
                    fft.process(vec);
                }
            }
        });
    }
    {
        let mut vecs = new_test_vectors(4096, nvecs);
        benchmark("microfft(4096)", || {
            for _ in 0..rounds {
                for vec in vecs.iter_mut() {
                    let _ = microfft::complex::cfft_4096(
                        (&mut **vec).try_into().unwrap(),
                    );
                }
            }
        });
    }
}

On my machine, I get (with cargo run --release):

rustfft(4096): 693 ms
microfft(4096): 3219 ms

And with a blocksize of 256:

rustfft(256): 28 ms
microfft(256): 144 ms

So microfft is about 4-5 times slower. (Not sure how well my benchmark is done.)

microfft may be nice for embedded systems, but for my use case I think I will stick with rustfft and just fill the imaginary parts with zeroes. It might still be faster than other options. Most processing will be complex-valued anyway, and the real-valued samples usually come at a lower sample rate, so it's not that big of an issue.


You should in general use something like criterion to verify you're seeing a real effect, but that's a big enough gap that there's little doubt!


If you're still looking for FFTs optimized for real inputs, check out realfft. It uses RustFFT under the hood, but packs the data into a complex-to-complex FFT of half size, so the FFT is theoretically computed twice as fast.


Thanks for that hint.

Right now, I decided to process everything as Complex, including the audio output blocks, so I don't have to do unnecessary copies at least:

Module radiorust::blocks

Signal processing blocks that can be connected with each other

This module and its submodules contain signal processing blocks, which will produce or consume data of type Samples<Complex<Flt>>, where Samples are chunks of data with a specified sample_rate. Complex<Flt> is a complex number where real and imaginary part are of type Flt. Blocks will require that Flt implements Float, i.e. Flt is either f32 or f64, depending on desired precision.

Note: For real valued samples, use Complex with an imaginary part of zero. This allows using blocks which are implemented with a complex fourier transform.

So, for example, the audio playback block just extracts the real part of the signal (source in radiorust).

This works well with signals that are:

  • "real" signals encoded as Complex, i.e. where their negative frequency components are the complex conjugate of their respective positive frequency component,
  • signals with only positive frequencies,
  • signals with only negative frequencies.

The only confusing part is a 3 dB (50% energy) mismatch in the audio level (contained energy per time) of signals, depending on whether positive and negative frequencies are included or not.

However, even if the copying is avoided, this bloats up the data stream by a factor of 2 in any of the above cases where only the real part is used.

So I may be looking into realfft to improve this. I think ultimately, I will also have to cover cases such as multiple channels (e.g. stereo). So when I get to that, I may revisit optimizing the real-valued case for signal processing.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.