Tokio - asynchronous read from multiple streams


I'm fairly new to Rust, and am attempting to implement an asynchronous read loop over two tokio streams. Specifically, I'm creating one tokio::net::TcpStream which is connected to a remote service, plus a handle from tokio::io::stdin(). I've disabled ICANON on the stdin file descriptor to disable line-buffering.

Basically, this is just supposed to be acting kind of like netcat. It's not my end-goal, but was intended as an exercise for myself to learn more about how to handle multiple async streams. The main function looks like this:

use std::os::unix::io::AsRawFd;
use termios::{tcsetattr, Termios, ECHO, ICANON, TCSANOW};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

#[tokio::main]
pub async fn main() -> Result<(), std::io::Error> {

    // Local buffers
    let mut input_buffer: [u8; 64] = [0; 64];
    let mut output_buffer: [u8; 64] = [0; 64];

    // Connect to the client shell
    let mut client = TcpStream::connect("").await?;

    // Get a stdin object
    let mut stdin = tokio::io::stdin();

    // Disable echo and canonical mode on stdin
    let stdin_fild = stdin.as_raw_fd();
    let mut termios = Termios::from_fd(stdin_fild)?;
    termios.c_lflag &= !(ECHO | ICANON);
    tcsetattr(stdin_fild, TCSANOW, &mut termios)?;

    // Get stdout object
    let mut stdout = tokio::io::stdout();

    loop {
        let result: (usize, i32) = tokio::select! {
            r = stdin.read(&mut input_buffer) => (r.unwrap(), 0),
            r = client.read(&mut output_buffer) => (r.unwrap(), 1),
        };

        if result.0 == 0 {
            // Zero-length read: one side closed, so stop the loop
            break;
        } else if result.1 == 0 {
            client.write(&input_buffer[..result.0]).await?;
        } else {
            stdout.write(&output_buffer[..result.0]).await?;
        }
    }

    Ok(())
}

It actually works rather well in one direction. If I start a service with nc -lnvp 4444 and then connect to it with my test application, whatever I type in my test application is sent correctly to the client (without line-buffering). However, any input to netcat is only sent after a line-feed. I can't find anything in the tokio docs that mentions any sort of line-buffering being implemented so I'm not sure why this is happening at all.

Any help or pointers toward the relevant documentation would be appreciated. Thanks!

Try flushing stdout? Also, remember to use write_all.
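For context on why both suggestions matter: write may perform a partial write and return how many bytes it actually accepted, while write_all retries until the whole slice is written; flush then pushes anything sitting in a userspace buffer down to the fd. A minimal blocking-std sketch of the pattern (the tokio AsyncWriteExt methods have the same semantics, just awaited):

```rust
use std::io::{self, Write};

// Write the whole buffer and flush; works for any Write impl
// (stdout, a TcpStream, or a Vec<u8> in tests).
fn send_all<W: Write>(out: &mut W, data: &[u8]) -> io::Result<()> {
    out.write_all(data)?; // retries until every byte is written
    out.flush()           // push any userspace buffer through to the fd
}

fn main() -> io::Result<()> {
    send_all(&mut io::stdout(), b"hello")
}
```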

That's actually a great point, and I feel kind of dumb for not trying it before, but sadly it didn't solve the issue. I also thought it might just be my terminal on the netcat side buffering before sending, but I tried stty raw -echo && nc -lnvp 4444 && stty sane, echo -n "hello" | nc -lnvp 4444, and stdbuf -i 0 -o 0 nc -lnvp 4444, all with the same effect: the data isn't displayed by my Rust application until a newline arrives.

I also tried explicitly writing a new line after whatever data I read from the TcpStream to make sure it wasn't being line-buffered on the rust-side. Specifically, it looks like this now:

        } else {
            stdout.write_all(&output_buffer[..result.0]).await?;
            stdout.write_all(b"\n").await?; // explicit newline after each chunk
        }

I'll keep tinkering.

For anyone else who may come across this in the future, Rust's stdout object is apparently wrapped in a line-buffered writer by default, and there is no cross-platform way to change this from what I can find. I tried flushing the buffer after every write, but it didn't seem to help (upon further testing, including a newline after each write did cause the output to be displayed immediately). I ended up constructing a std::fs::File object from the raw stdout file descriptor (1) and then creating a tokio::fs::File object from that standard file object. This is specific to Unix and won't work on Windows. It looks like this:

    let stdout_file;
    unsafe {
        stdout_file = File::from_raw_fd(1);
    }
    let mut stdout = tokio::fs::File::from_std(stdout_file);

This new stdout object is now not line-buffered.


Be aware of the destructor of that file. It will close stdout.
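One way to sidestep that (a sketch, not from the original code): duplicate the descriptor first, so the File you hand to tokio owns a fresh fd and the real fd 1 stays open when it is dropped. File::try_clone performs a dup(2) under the hood:

```rust
use std::fs::File;
use std::io::Write;
use std::os::unix::io::FromRawFd;

// Return a File backed by a *duplicate* of fd 1, so dropping it
// closes only the duplicate and the process's stdout stays open.
fn duplicated_stdout() -> std::io::Result<File> {
    // SAFETY: fd 1 is open for the lifetime of the process.
    let borrowed = unsafe { File::from_raw_fd(1) };
    let owned = borrowed.try_clone(); // dup(2): a brand-new descriptor
    std::mem::forget(borrowed);       // never run the destructor on fd 1
    owned
}

fn main() -> std::io::Result<()> {
    let mut out = duplicated_stdout()?;
    // Dropping `out` later closes only the duplicate, not fd 1.
    out.write_all(b"still works\n")
}
```

The duplicated File can then be passed to tokio::fs::File::from_std as in the snippet above, without the close-on-drop hazard.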


I'll keep that in mind. Thanks for the heads up!

Also, for the sake of keeping track of this in case someone else stumbles on the post, there is currently work being done upstream to add support for enabling/disabling line-buffering on stdio. The relevant issue is here.

It seems like a big lift, and the current developer mentioned he'd been working on it all summer. I don't expect anything in the near future, but if someone is reading this in the future, the feature may already be available natively.
