Using async-bincode with async-std

Hi everyone!

I'm having trouble integrating the async-bincode crate (v0.5.1) with async-std (v0.6.2). Specifically, I'm trying to read from an async_std::net::TcpStream in a task, deserialize the data with bincode, and send it through a synchronous crossbeam channel.

The original implementation used a std::net::TcpStream with no problems, but migrating to async_std has raised a number of issues, chiefly that bincode::deserialize_from takes a reader implementing std::io::Read, not async_std::io::Read. To solve this, I reached for async-bincode.
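For reference, the original blocking version was roughly this shape (simplified, not my exact code):

use std::net::TcpStream;
use crossbeam_channel::Sender;

// Blocking version: this works because std::net::TcpStream implements
// std::io::Read, which is what bincode::deserialize_from expects.
fn read_from_stream<T>(input: TcpStream, output: Sender<T>)
where
    T: for<'de> serde::de::Deserialize<'de>,
{
    loop {
        match bincode::deserialize_from(&input) {
            Ok(data) => {
                if output.send(data).is_err() {
                    return; // receiver hung up
                }
            }
            Err(_) => return, // connection closed or corrupt data
        }
    }
}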

Unfortunately, I struggled to find usage examples that fit my use case, and the crate seems to depend on tokio under the hood. Here is a simplified view of my code:

async fn read_from_stream<T>(mut input: AsyncBincodeReader<TcpStream, T>, output: Sender<T>)
where
    T: for<'de> serde::de::Deserialize<'de>,
{
    const MAX_REDUCTIONS: usize = 2000;
    let mut reductions = 0;

    loop {
        input.for_each(|data| {
            if let Ok(_) = output.try_send(data) {
                reductions += 1;
            } else {
                return
            }
        });

        if reductions == MAX_REDUCTIONS {
            reductions = 0;
            task::yield_now().await
        }
    }
}

Here is the error:

I've already imported the Stream trait from async_std like so: use async_std::prelude::*;. I assume the problem is due to trait bounds, but as I'm somewhat new to the language, I'm struggling to diagnose and resolve the issue.

Ideally, I would like to avoid Streams altogether because I want the data to be ordered, but for now I just want the code to work as an MVP.

Thanks for your help in advance!

I recommend reading into a buffer and using the slice based methods in the bincode library.

Thanks for the suggestion. I think I might have to do that and forgo the async-bincode crate.

The async-bincode crate allocates a buffer of 8192 bytes per reader, as seen here: AsyncBincodeReader buffer alloc.

I do want to be able to scale to large blobs eventually, but do you think this is a reasonable buffer size as a default?

It's probably fine. If you include a length field before each bincode-encoded packet, you can pre-reserve a large enough buffer.
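For example, something along these lines (an untested sketch; it assumes the sender writes a little-endian u32 byte length before each encoded value):

use std::error::Error;
use std::io::Read;
use std::net::TcpStream;

// Read one length-prefixed, bincode-encoded message from the stream,
// reusing `buffer` across calls so it only grows when a message is
// larger than anything seen so far.
fn read_message<T>(stream: &mut TcpStream, buffer: &mut Vec<u8>) -> Result<T, Box<dyn Error>>
where
    T: serde::de::DeserializeOwned,
{
    // Read the 4-byte length prefix.
    let mut len_bytes = [0u8; 4];
    stream.read_exact(&mut len_bytes)?;
    let len = u32::from_le_bytes(len_bytes) as usize;

    // Make the buffer exactly `len` bytes long, growing it if needed.
    buffer.resize(len, 0);
    stream.read_exact(&mut buffer[..len])?;

    Ok(bincode::deserialize(&buffer[..len])?)
}

The write side then sends (encoded.len() as u32).to_le_bytes() followed by the encoded bytes.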


I'm running into a silly problem I can't seem to crack, and I was wondering if you could help me.

I'm creating an 8192-byte buffer and passing it to the read function, but it never reads anything from the TcpStream. This version reads synchronously, using std::net::TcpStream as well.

async fn read_from_stream<T>(mut input: std::net::TcpStream, output: Sender<T>)
where
    T: for<'de> serde::de::Deserialize<'de>,
{
    use std::io::Read;

    const MAX_REDUCTIONS: usize = 2000;
    let mut reductions: usize = 0;

    let mut buffer = Vec::with_capacity(8192);

    println!("reading!");

    loop {
        if let Ok(_) = input.read(buffer.as_mut_slice()) {
            if let Ok(data) = bincode::deserialize(buffer.as_slice()) {
                println!("got some data!");
                if let Ok(_) = output.try_send(data) {
                    continue;
                }
            }
        }

        // println!("yielding read");
        task::yield_now().await
    }
}

If I print the contents of the read Result, it's always 0 bytes. I'm not sure what I'm doing wrong. What do you think?

The vector is empty, so the slice has length zero. Vec::with_capacity only reserves memory; the vector's length is still 0, and read writes at most slice-length bytes, so it always reads nothing.

- let mut buffer = Vec::with_capacity(8192);
+ let mut buffer = vec![0_u8; 8192];
  • (and you can even drop the vec! part altogether and use a plain stack array, [0_u8; 8192]).
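One more thing to watch out for once that compiles: read returns the number of bytes it actually read, so you should only deserialize that prefix of the buffer. Roughly, inside the loop:

if let Ok(n) = input.read(&mut buffer) {
    if n == 0 {
        return; // EOF: the peer closed the connection
    }
    // Only the first n bytes are valid. A single read can also return
    // a partial message, which is where a length prefix (as discussed
    // above) becomes useful.
    if let Ok(data) = bincode::deserialize(&buffer[..n]) {
        let _ = output.try_send(data);
    }
}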

Thanks for the answers! This worked for me :grin:
