Stdin to Stdout echo app - slow


I was tasked with finding the fastest possible solution for one of our projects, which involves reading lines one by one from stdin, where there may be hours of delay between lines, but results are needed immediately after receiving each line.

Being a Rust and Go fan, I decided to benchmark these two languages. Here is my test code in Rust:

use std::io::{self, Read, Write};

fn main() {
    let mut dt: [u8; 512] = [0; 512]; // more than 512 is not possible
    let stdin = io::stdin();
    let mut stdin = stdin.lock();
    let stdout = io::stdout();
    let mut stdout = stdout.lock();
    loop {
        let n = stdin.read(&mut dt).unwrap();
        if n == 0 { break; } // EOF
        stdout.write_all(&dt[..n]).unwrap();
        // TODO: something else, which will be developed later
    }
}

I've written similar code in Go and compiled both programs (Rust with "--release" and "-C target-cpu=native").

Then I ran a simple loop passing 3000 lines to stdin and reading 3000 times from stdout, on the same machine, a MacBook Pro (i7).

The Rust application needs 120 ms for this.
The Go application needs 70 ms for the same.

Could you tell me what I am doing wrong?

One problem might be that io::stdout in Rust uses line buffering.
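As a sketch of what that reply is getting at (not code from the thread): you can wrap the locked stdout in a BufWriter and flush explicitly after each response, so output still goes out immediately but writes between flushes are batched. The `respond` helper name here is hypothetical:

```rust
use std::io::{self, BufWriter, Write};

// Write one response followed by a newline, then flush explicitly.
// The explicit flush keeps latency low while BufWriter batches the
// individual write calls in between.
fn respond<W: Write>(out: &mut BufWriter<W>, line: &[u8]) -> io::Result<()> {
    out.write_all(line)?;
    out.write_all(b"\n")?;
    out.flush() // results are needed immediately, so flush per line
}

fn main() -> io::Result<()> {
    let stdout = io::stdout();
    let mut out = BufWriter::new(stdout.lock());
    respond(&mut out, b"example response")
}
```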

Thank you very much for the fast reply!

Since my application needs to send the “response” immediately after receiving the input, line buffering is something I need, I suppose.

What is the performance like if you pipe stdin directly to stdout using std::io::copy()?

Also, have you tried reading each line by wrapping stdin.lock() with a std::io::BufReader and calling the read_line() method? It may be that the way you are reading leaves half a line read, and because line buffering is enabled you need to wait for the next input before the rest of that line can be read and processed.
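A minimal sketch of that approach (the `echo_lines` helper name is mine, not from the thread): read_line only returns once it has a complete '\n'-terminated line or hits EOF, so no half-read line is left waiting.

```rust
use std::io::{self, BufRead, BufReader, Write};

// Echo complete lines from input to output. read_line blocks until a
// full '\n'-terminated line (or EOF) is available, so each iteration
// handles exactly one whole line.
fn echo_lines<R: BufRead, W: Write>(mut input: R, mut output: W) -> io::Result<()> {
    let mut line = String::new();
    loop {
        line.clear();
        if input.read_line(&mut line)? == 0 {
            break; // EOF
        }
        output.write_all(line.as_bytes())?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let stdout = io::stdout();
    echo_lines(BufReader::new(stdin.lock()), stdout.lock())
}
```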

Neither std::io::copy, nor wrapping stdin in a BufReader, nor wrapping stdout in a BufWriter improves performance.

What BufReader and BufWriter add is more stability: with the code above, times vary between 120 ms and 140 ms; with Buf*, they vary only between 120 ms and 127 ms.
