Why does TcpStream recognize a dropped client connection when reading but not when writing?

Please see the code example below. You can see that after the client connection is dropped, a read on the server side immediately returns EOF, but a write still reports success.

./examples/file.rs

use std::io::Error;
use std::io::{Read, Write};

use std::net::{TcpListener, TcpStream};
const EOF: usize = 0;
fn read(stream: &mut TcpStream) -> Result<usize, Error> {
    let mut buf = [1; 10];
    let n = stream.read(&mut buf)?;
    println!("recv: {:x?}", &buf[..n]);
    Ok(n)
}

fn write(stream: &mut TcpStream) -> Result<usize, Error> {
    let buf = [1; 10];
    let n = stream.write(&buf)?;
    println!("send: {:x?}", &buf[..n]);
    Ok(n)
}

fn main() -> Result<(), Error> {
    let addr = "0.0.0.0:8080";
    let acp = TcpListener::bind(addr)?;
    let mut clt = TcpStream::connect(addr)?;
    let (mut svc, _addr) = acp.accept()?;
    println!("clt: {:?}, svc: {:?}", clt, svc);

    assert_ne!(write(&mut clt)?, EOF);
    assert_ne!(read(&mut svc)?, EOF);
    
    drop(clt);
    
    // Why does read immediately recognize that the client reset the connection?
    assert_eq!(read(&mut svc)?, EOF);  // passes - as expected - client disconnected
    assert_eq!(write(&mut svc)?, EOF); // fails - NOT as expected - does not realize the client disconnected
    Ok(())
}

The error I get, for reference:

thread 'main' panicked at experimentation/examples/close_stream.rs:35:5:
assertion `left == right` failed
  left: 10
 right: 0
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2f5df8a94bb3c5fae4e3fcbfc8ef20f1f976cb19/library/std/src/panicking.rs:619:5
   1: core::panicking::panic_fmt
             at /rustc/2f5df8a94bb3c5fae4e3fcbfc8ef20f1f976cb19/library/core/src/panicking.rs:72:14
   2: core::panicking::assert_failed_inner
   3: core::panicking::assert_failed
             at /rustc/2f5df8a94bb3c5fae4e3fcbfc8ef20f1f976cb19/library/core/src/panicking.rs:269:5
   4: close_stream::main
             at ./experimentation/examples/close_stream.rs:35:5
   5: core::ops::function::FnOnce::call_once
             at /rustc/2f5df8a94bb3c5fae4e3fcbfc8ef20f1f976cb19/library/core/src/ops/func

It's because shutting down your write direction is a normal part of socket operation (the peer's reads see it as an orderly EOF), but shutting down the read direction only happens during abnormal socket shutdown. Because it is abnormal, it isn't delivered to the other side in the same in-band way that a normal write-EOF is.

It will be detected if you write twice.
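Roughly, a self-contained sketch of that behaviour (the loopback listener, ephemeral port, and small sleep are my assumptions; the exact error kind and the timing can vary by OS):

use std::io::{ErrorKind, Write};
use std::net::{TcpListener, TcpStream};
use std::thread::sleep;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let acp = TcpListener::bind("127.0.0.1:0")?; // port 0: let the OS pick a free port
    let clt = TcpStream::connect(acp.local_addr()?)?;
    let (mut svc, _addr) = acp.accept()?;

    drop(clt); // client closes; svc has not yet "seen" this on its write side

    // First write usually succeeds: the bytes only reach the local kernel send buffer.
    println!("first write:  {:?}", svc.write(&[1u8; 10]));

    // Give the peer's RST a moment to come back (generous for loopback).
    sleep(Duration::from_millis(100));

    // A later write reports the dead connection (BrokenPipe or ConnectionReset).
    match svc.write(&[1u8; 10]) {
        Err(e) if e.kind() == ErrorKind::BrokenPipe || e.kind() == ErrorKind::ConnectionReset => {
            println!("second write failed as expected: {:?}", e);
        }
        other => println!("second write: {:?}", other),
    }
    Ok(())
}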

The normal way to shut down a socket is for both the client and server to close their write direction separately. (Via the shutdown method.)
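For example, a sketch of that graceful pattern over a single loopback connection (addresses and payloads here are made up for illustration):

use std::io::{Read, Write};
use std::net::{Shutdown, TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    let acp = TcpListener::bind("127.0.0.1:0")?;
    let mut clt = TcpStream::connect(acp.local_addr()?)?;
    let (mut svc, _addr) = acp.accept()?;

    clt.write_all(b"request")?;
    clt.shutdown(Shutdown::Write)?; // client: "no more data from me" (sends FIN)

    // Server reads until EOF (read returns 0 once the client's FIN arrives) ...
    let mut req = Vec::new();
    svc.read_to_end(&mut req)?;
    println!("svc got {:x?}", req);

    // ... sends its reply, then closes its own write direction.
    svc.write_all(b"response")?;
    svc.shutdown(Shutdown::Write)?;

    // Client reads the reply until EOF; now both directions are closed.
    let mut resp = Vec::new();
    clt.read_to_end(&mut resp)?;
    println!("clt got {:x?}", resp);
    Ok(())
}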


I don’t entirely follow this explanation, but here are my two cents:

  1. I assume that when I call drop(clt), it effectively calls shutdown in both directions, meaning the clt socket can neither send nor receive data; it is also moved, so I can’t even access the clt variable anymore.
  2. I assume the clt shutdown sends some information over the network to indicate to svc that no more data can be sent or received.
  3. #2 is confirmed, as any read on the svc socket instantly recognizes that clt reset the connection.
  4. Why does it take multiple writes, but only a single read, for svc to recognize that clt is no longer there?
  5. I am not sure how I can call shutdown on both clt and svc separately, as the two can be on different machines, and clt can choose to drop the session while svc seems to have no way of detecting it; svc thinks a number of writes were successful when they were not, since I assume they just got buffered in the kernel queue and then discarded (see the sketch after this list).
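For now, the only thing I can see svc doing is treating certain write errors as "peer is gone", since a successful write only means the bytes reached the local kernel buffer. A rough sketch of what I mean (the helper name and the set of error kinds are my assumptions):

use std::io::{ErrorKind, Write};
use std::net::TcpStream;

// Hypothetical helper: returns Ok(true) while the connection still looks alive,
// Ok(false) once the OS reports that the peer is gone, and any other error as-is.
fn write_or_detect_disconnect(svc: &mut TcpStream, payload: &[u8]) -> std::io::Result<bool> {
    match svc.write_all(payload) {
        // Success only means the bytes reached the local kernel send buffer;
        // it is not an acknowledgement from the peer.
        Ok(()) => Ok(true),
        Err(e) if matches!(
            e.kind(),
            ErrorKind::BrokenPipe | ErrorKind::ConnectionReset | ErrorKind::ConnectionAborted
        ) => Ok(false),
        Err(e) => Err(e),
    }
}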

It is best not to assume things with software. We tend to go with how we would like something to work rather than how it actually works.

The partial* definitive answer

*but this deals with what the OS is doing, rather than with how applications are also limited by the API the OS provides.