Magical TcpStream Reads

I was looking at this example on how to read Proxy Protocol headers. There's some code that magically reads the exact number of bytes required every time. I built the examples, and they work without fail. To me it seems like a naive approach, yet it works fine, and it has blown a hole in my understanding of how reading from streams works. Here is the code in question:

use ppp::{HeaderResult, PartialResult};
use std::io::{self, prelude::*};
use std::net::{Ipv4Addr, SocketAddr, TcpListener, TcpStream};

const RESPONSE: &str = "HTTP/1.1 200 OK\r\n\r\n";

fn handle_connection(mut client: TcpStream) -> io::Result<()> {
    let mut buffer = [0; 512];
    let mut read = 0;
    let header = loop {
        // A single read may return fewer bytes than requested.
        read += client.read(&mut buffer[read..])?;

        let header = HeaderResult::parse(&buffer[..read]);
        if header.is_complete() {
            break header;
        }

        println!("Incomplete header. Read {} bytes so far.", read);
    };

    match header {
        HeaderResult::V1(Ok(header)) => println!("V1 Header: {}", header),
        HeaderResult::V2(Ok(header)) => println!("V2 Header: {}", header),
        HeaderResult::V1(Err(error)) => {
            eprintln!("[ERROR] V1 {:?} {}", buffer, error);
        }
        HeaderResult::V2(Err(error)) => {
            eprintln!("[ERROR] V2 {:?} {}", buffer, error);
        }
    }

    client.write_all(RESPONSE.as_bytes())?;
    Ok(())
}


The read inside the loop always ends up pulling exactly the number of bytes required. How?

Do you mean that it doesn't go around the loop, or that it doesn't read too many bytes?

It never reads too many bytes.

I think it's because the HeaderResult::parse method still works even if there are extra bytes in the buffer.
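To illustrate why trailing bytes don't break parsing, here is a minimal, hand-rolled sketch (not the ppp crate's actual implementation): a v1-style PROXY protocol header is a single line ending in `\r\n`, so a parser only needs to find its own terminator and can simply ignore whatever follows.

```rust
// Hypothetical, simplified v1-style parser for illustration only.
// It looks for the "\r\n" terminator and returns the header line plus
// any leftover bytes; extra bytes after the terminator are harmless.
fn parse_v1_header(buffer: &[u8]) -> Option<(&[u8], &[u8])> {
    // Find the "\r\n" terminator; None means the header is incomplete.
    let end = buffer.windows(2).position(|w| w == b"\r\n")?;
    // Split into (header line, remaining bytes).
    Some((&buffer[..end], &buffer[end + 2..]))
}

fn main() {
    // The buffer holds the header *and* the start of an HTTP request.
    let buffer = b"PROXY TCP4 192.168.0.1 192.168.0.2 56324 443\r\nGET / HTTP/1.1\r\n";
    let (header, rest) = parse_v1_header(buffer).unwrap();
    assert_eq!(header, b"PROXY TCP4 192.168.0.1 192.168.0.2 56324 443");
    assert_eq!(rest, b"GET / HTTP/1.1\r\n");
    println!("header: {}", String::from_utf8_lossy(header));
}
```

So a complete parse only tells you the header is fully present in the buffer, not that the buffer contains nothing else.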

But if you then parse the remainder of the stream as HTTP, it's always Ok. The reads in that loop never overshoot into the beginning of the HTTP request.

Edit: I added some http parsing when I was trying this out, but didn’t show it above.

The loop can definitely end up reading too many bytes. Can you give more info as to how you are observing it not reading too far? Are you sure that the client is sending any bytes beyond the ones parsed by your loop?

If you have a look here, it shows how to set up a server and a proxy. Using curl, you send a request to the proxy, which prepends the Proxy Protocol header and forwards it to the server.

If you add some http parsing logic to the handle_connection function above, it will successfully parse the request bytes every time.

I agree that it could and should read too many bytes when done this way.
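The over-read needn't be fatal, though: the usual fix is to treat any buffered bytes past the header as the start of the HTTP request and replay them before reading more from the socket. A sketch using std's `Read::chain` (the `http_source` name and the leftover bookkeeping are mine, not from the example):

```rust
use std::io::{self, Read};

// Hypothetical helper: replay the bytes the header loop over-read,
// then continue reading from the live stream. Works for any `Read`,
// including a TcpStream.
fn http_source<R: Read>(leftover: Vec<u8>, upstream: R) -> impl Read {
    io::Cursor::new(leftover).chain(upstream)
}

fn main() -> io::Result<()> {
    // Stand-in for a TcpStream: any `Read` behaves the same way here.
    let upstream = io::Cursor::new(b"HTTP/1.1\r\n".to_vec());
    let mut request = String::new();
    // The leftover bytes come out first, then the rest of the stream.
    http_source(b"GET / ".to_vec(), upstream).read_to_string(&mut request)?;
    assert_eq!(request, "GET / HTTP/1.1\r\n");
    Ok(())
}
```

In `handle_connection` that would look something like `http_source(buffer[header_len..read].to_vec(), client)`, where `header_len` is however many bytes the parsed header actually consumed.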

If the sender uses multiple calls to write, they will probably end up in different packets. If you send the header and body in a single write, then you may over-read when getting the header. Get a dump of what packets are transmitted to see (e.g. use Wireshark).
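You can observe this on loopback without Wireshark. The sketch below (my own setup, not from the thread) makes two separate writes with a pause between them; the reader then usually sees one write per read. The comment in the read loop is the important caveat: TCP gives no such guarantee, so the chunks could legally be split or coalesced.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

const HEADER: &[u8] = b"PROXY TCP4 192.168.0.1 192.168.0.2 56324 443\r\n";
const REQUEST: &[u8] = b"GET / HTTP/1.1\r\n\r\n";

fn main() -> std::io::Result<()> {
    // Bind to an ephemeral localhost port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Client side: two separate writes with a pause between them,
    // mimicking a proxy that sends the header before the request.
    let writer = thread::spawn(move || -> std::io::Result<()> {
        let mut stream = TcpStream::connect(addr)?;
        stream.write_all(HEADER)?;
        thread::sleep(Duration::from_millis(100));
        stream.write_all(REQUEST)?;
        Ok(())
    });

    let (mut client, _) = listener.accept()?;
    let mut buffer = [0u8; 512];
    let expected = HEADER.len() + REQUEST.len();
    let mut total = 0;

    while total < expected {
        let n = client.read(&mut buffer[total..])?;
        if n == 0 {
            break;
        }
        // On loopback with a pause between writes, each read usually
        // returns exactly one write's worth of bytes -- but TCP makes
        // no such promise: the data could arrive split or coalesced.
        println!("read returned {} bytes", n);
        total += n;
    }

    // Either way, the same total number of bytes arrives.
    assert_eq!(total, expected);
    writer.join().unwrap()
}
```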


Yeah, in this example setup, there are a few separate writes on the client side. Each read would get its own packet.

I see why the above would be error-prone.
