Data Loss on TcpStream

Hi

I'm currently trying to build a simple static file server as part of an ongoing research project. The only real complexity (I thought) was going to be setting an arbitrary TCP socket option I've introduced into a custom Linux kernel based on 4.18. Until the need to set the socket option became apparent, I had been using a simple NodeJS file server built on Node's built-in http module.

The problem I have with my Rust server is that the browser does not receive all of the data written to the TcpStream - if I request the same file multiple times, Firefox's network tab shows a different amount of data transferred each time. This is despite the Content-Length header being set to the correct value.

My server is largely based on the single-threaded web server example in the Rust book and I've included representative code below. For brevity I've omitted the various use statements and a simple function (parse_path) that converts the received request into a relative filesystem path.

At this point I can't work out why I'm not seeing the expected results, so help is incredibly welcome.

Edit: I forgot to mention - this only happens when triggering a download of a file of at least a few tens of kilobytes with a browser. Smaller files (on the order of 3-5 kilobytes) are fine in a browser, and with a command-line tool like curl even the larger files download successfully.

const CUSTOM_TCP_SOCKOPT: c_int = 37;

fn main() -> io::Result<()> {
    let server = TcpListener::bind("0.0.0.0:30080")?;

    for stream in server.incoming() {
        handle_stream(stream?);
    }

    Ok(())
}

fn handle_stream(mut stream: TcpStream) {
    let request_path = parse_path(&mut stream);

    let socket = stream.as_raw_fd();
    let sock_opt_val: c_int = 1;

    // setsockopt returns a plain c_int, so check the return value manually
    // rather than unwrapping it.
    let ret = unsafe {
        libc::setsockopt(
            socket,
            IPPROTO_TCP,
            CUSTOM_TCP_SOCKOPT,
            &sock_opt_val as *const _ as *const c_void,
            mem::size_of_val(&sock_opt_val).try_into().unwrap(),
        )
    };
    assert_eq!(ret, 0, "setsockopt failed: {}", io::Error::last_os_error());

    let mut response_body = fs::read(&request_path).unwrap_or(vec![]);
    let response_head = format!(
        "HTTP/1.1 200 OK\r\nContent-Type: application/octet-stream\r\nContent-Length: {}\r\n\r\n",
        response_body.len()
    );

    let mut response = response_head.into_bytes();
    response.append(&mut response_body);

    stream.write_all(&response).unwrap();
    stream.flush().unwrap();
}

Try something like this:

let file_length = fs::metadata(&request_path)?.len();
let response_head = format!(
    "HTTP/1.1 200 OK\r\nContent-Type: application/octet-stream\r\nContent-Length: {}\r\n\r\n",
    file_length
);

stream.write_all(response_head.as_bytes())?;

let mut file = fs::File::open(&request_path)?;
let mut buf = vec![0u8; 8192];
loop {
    let len = file.read(&mut buf[..])?;
    if len == 0 { break; }
    stream.write_all(&buf[0..len])?;
}

stream.flush()?;
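
If it helps, I think you could also replace the manual read/write loop above with io::copy(&mut file, &mut stream)?, which does the same buffered copying from the file into the socket for you.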

Are you actually reading the entire request? The client might be waiting until it can send all the request data.


Thank you both for the suggestions - in this instance @troplin was correct: I was only reading enough of the request data to parse the URL path. After changing the size of the buffer I was using in parse_path to ensure the whole request was parsed, I saw no more issues.

Thanks again 🙂

This is still not completely correct:
read is not guaranteed to fill the buffer; it's possible that you only get half of the request, even if the buffer is big enough to hold the entire request.
On the other hand, if you call read multiple times (or call read_to_end), the call will block once there's no more data to read.

Therefore you have to scan the request for an empty line ("\r\n\r\n") and call read until you've found it.
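
Something like this rough, untested sketch (the helper name read_request_head is mine; it assumes a headers-only request such as a plain GET with no body):

use std::io::{self, Read};
use std::net::TcpStream;

// Keep reading from the stream until the blank line ("\r\n\r\n") that
// terminates the request headers has been seen, or the peer closes.
fn read_request_head(stream: &mut TcpStream) -> io::Result<Vec<u8>> {
    let mut request = Vec::new();
    let mut buf = [0u8; 1024];
    loop {
        let len = stream.read(&mut buf)?;
        if len == 0 {
            break; // connection closed before the headers were complete
        }
        request.extend_from_slice(&buf[..len]);
        if request.windows(4).any(|w| w == b"\r\n\r\n") {
            break; // found the empty line ending the headers
        }
    }
    Ok(request)
}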
