How to work with network sockets?

I have an assignment where I need to create a bot that plays a game. My bot has to connect to a server supervising the game through network sockets.

The server will send plaintext data, like

PLAY { "json": "containing state of the game" }


I have been experimenting with TcpStream and read(), but I stumbled upon an issue where I would need some assistance.

When using a small buffer ([u8; 512] or [u8; 1024]) it works: read returns as soon as the server sends some data. But the JSON is too big to fit in the buffer, so I guess I should loop until there is no more data, but how can I know that? read() seems to return Ok(0) only on EOF or when the socket is closed.

I tried to use larger buffers too ([u8; 4096]) but then read() does not return when the server sends only START and my program blocks until the server decides my time is up...

I have two questions:

  1. What is the exact behavior of read() for sockets? When does it return and when does it block?
  2. How should I use the sockets to make sure my program will not block and I can parse all the data?

I think the problem here is more about TCP than about Rust. I'm not a networking guru, but here is my take on it:

The core of the problem is that the server protocol is message based, and messages (frames) are separated from each other by \n. TCP, on the other hand, is a stream protocol. It has no concept of a message and "just" delivers a flat stream of bytes. These bytes arrive in chunks, of course, and these chunks will in general correspond to calls to send on the server side, but this is not guaranteed.

So the task is to decode the chunked stream into a sequence of messages. Note that there may be several chunks per message or several messages per chunk.

I think the usual tool for handling this is a state machine. I would use the following interface here:

// This we got from the network
struct Buffer(Vec<u8>);

// This is a frame of our protocol.
// The protocol is text based, but I think it would be easier to
// first divide the binary stream into frames, and then decode each frame
struct Frame(Vec<u8>);

trait FrameDecoder {
    fn new() -> Self;
    // self is mut, because the decoder needs to save some state between
    // calls to `next_chunk`: if a message is split across several chunks,
    // the leftovers must be remembered
    fn next_chunk(&mut self, buffer: Buffer) -> Vec<Frame>;
}

fn main_loop() {
    let mut frame_decoder = MyProtocolFrameDecoder::new();
    loop {
        let buffer = sock.recv().unwrap();
        for frame in frame_decoder.next_chunk(buffer) {
            let message = Message::from_raw_bytes(frame);
            // handle the message here
        }
    }
}
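As a concrete sketch of such a decoder, here is a minimal newline-delimited version. It is standalone, with illustrative names, and uses plain owned byte vectors instead of the Buffer/Frame wrappers above:

```rust
// A minimal newline-delimited frame decoder (sketch). It keeps leftover
// bytes between calls so a message split across chunks is reassembled.
struct LineFrameDecoder {
    leftover: Vec<u8>,
}

impl LineFrameDecoder {
    fn new() -> LineFrameDecoder {
        LineFrameDecoder { leftover: Vec::new() }
    }

    // Feed one chunk received from the socket; get back zero or more
    // complete frames (without their trailing '\n').
    fn next_chunk(&mut self, chunk: &[u8]) -> Vec<Vec<u8>> {
        self.leftover.extend_from_slice(chunk);
        let mut frames = Vec::new();
        while let Some(pos) = self.leftover.iter().position(|&b| b == b'\n') {
            let mut frame: Vec<u8> = self.leftover.drain(..=pos).collect();
            frame.pop(); // drop the trailing '\n'
            frames.push(frame);
        }
        frames
    }
}
```

Note that a single call can return several frames (several messages in one chunk) or none (a message still waiting for its terminator), which is exactly the "several chunks per message or several messages per chunk" situation described above.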

I would recommend reading the UDP vs. TCP article from the Gaffer On Games networking series, it's wonderful! Well, at least I wish I had read it before I discovered (during a night-long strace-enabled debugging session) the Nagle algorithm and its devastating effects on TCP performance :slight_smile:


You have to call read until you have enough data to process the request. Design the format of your request such that you can reliably determine the length.
Either explicitly send the length as part of the request or use some marker for the end.

Some examples:

  • HTTP uses:
    • Marker: Empty line for the end of the HTTP header
    • Implicit on EOF (deprecated)
    • Explicit Content-Length header or
    • Chunking (split into parts with each an explicit length)
  • SMTP uses:
    • Marker: A line with only a single dot (.)
    • Chunking (BINARYMIME extension)
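For the explicit-length variant, reading one message looks like this. The 4-byte big-endian length prefix here is hypothetical, not the format this game server uses:

```rust
use std::io::{self, Read};

// Read one length-prefixed message: a 4-byte big-endian length,
// followed by exactly that many payload bytes.
fn read_prefixed<R: Read>(reader: &mut R) -> io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    reader.read_exact(&mut len_buf)?; // blocks until all 4 bytes arrive
    let len = u32::from_be_bytes(len_buf) as usize;
    let mut payload = vec![0u8; len];
    reader.read_exact(&mut payload)?; // blocks until the full payload arrives
    Ok(payload)
}
```

read_exact is the key ingredient: unlike a single read, it loops internally until the buffer is completely filled, so short reads from the kernel are handled for you.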

If your format is JSON and you have an incremental parser, you could also stop reading when parsing was successful (at the last closing brace).
For single line commands, the line ending is a good marker.
You can use functions like std::io::BufRead::read_line or std::io::BufRead::read_until. Those will work as expected.
My own crate netio also contains useful functions for that kind of problems.
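As a sketch, a small helper built on std::io::BufRead::read_line; it is generic over any BufRead, so in the real program you would wrap the TcpStream in a BufReader and pass that in:

```rust
use std::io::BufRead;

// Read one newline-terminated message. Ok(None) signals EOF,
// i.e. the peer closed the connection.
fn read_message<R: BufRead>(reader: &mut R) -> std::io::Result<Option<String>> {
    let mut line = String::new();
    if reader.read_line(&mut line)? == 0 {
        return Ok(None);
    }
    Ok(Some(line.trim_end().to_string()))
}
```

With a real socket you would call it in a loop as `read_message(&mut BufReader::new(stream))`; each call blocks only until the next '\n' arrives, regardless of how the bytes were chunked on the wire.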

But be careful: server and client have to agree on exactly the same length, otherwise things get out of sync and the server or client will block indefinitely.

read should only block if there is no data to read, regardless of the buffer size. Maybe there's a very short delay for better buffering, but it shouldn't be noticeable. If it blocks, you have probably already read too far.


The problem with that is that I don't implement the server. This part is provided by the teachers. It's open source and on GitHub so I can submit PRs, but it's not like I have total freedom.

What do you mean by that?

Of course, I was not implying Rust was at fault here. :slight_smile: The core problem is probably that I have only very little experience with network programming and I might be expecting things that are not possible or not "built-in".

I am not sure I understand what a Frame represents, but I will read the link you gave. It will probably make a lot of things more clear.


I am not sure I understand what a Frame represents,

I am not 100% sure that I am using correct words here, but the frame is basically a single message in the application protocol. TCP is just a stream of bytes without any markers, and it is the job of the application to split this stream into separate parts (frames) which represent messages.

If a text is a stream of letters, then words are frames and you are able to distinguish separate words in text because they are separated by a special marker letter -- white space symbol.

If your protocol is indeed line oriented, then using std::io::BufRead::read_line as suggested by @troplin is probably the simplest solution.

Ah yes, I get it now!

I am not sure about that, I have no idea if Python adds newlines after a send (I don't think I saw any encoded in the server code). I will do a little more testing to see how I can split the data.

This is the easy part. Reads from a TCP socket will block if and only if there are zero bytes in the kernel receive buffer; when a byte is received, the read will return from the kernel. You may have problems with higher-level buffering but I don't think Rust's TcpStream does any.

Ordinarily I'd point you at POSIX but it's rather inexplicit here; read says only "If fildes refers to a socket, read() shall be equivalent to recv() with no flags set." and the recv() page only talks about the message-oriented socket case.

I would recommend using netcat or one of the many functionally equivalent programs to manually send a command and verify how the server delimits its responses.

If your teachers provide the server implementation, they have to document the message format. I don't think that you have to reverse engineer the source code.

I mean that if read blocks, there is no data to read. If you are expecting data to be available and it isn't, then either you have misunderstood the protocol or you have already read (and discarded) said data.

For example, the server could send 2 messages at once without waiting for your confirmation. This is usually called pipelining. So when reading the first message, you will probably also read (parts of) the second message into the same buffer. You have to make sure that you don't discard that data. If you do discard it, then you will block afterwards when you try to read the second message.

My advice is to always use a BufReader, which handles this for you. Make sure that you construct the BufReader only once at the beginning and use it for the entire program, because otherwise its internal buffer (and any data already read into it) is discarded.

It is not immediately obvious how to do this, though, because once you have wrapped the TcpStream in a BufReader you cannot use it for sending responses anymore.
You have to try_clone the TcpStream and use one instance for reading (with the BufReader) and the other one for writing.
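A minimal sketch of that split (the helper name is mine, not a std API):

```rust
use std::io::BufReader;
use std::net::TcpStream;

// Split one TcpStream into a buffered reading half and a plain writing
// half. try_clone creates a second handle to the same underlying socket,
// so the BufReader can own one handle while you keep writing on the other.
fn split(stream: TcpStream) -> std::io::Result<(BufReader<TcpStream>, TcpStream)> {
    let reader = BufReader::new(stream.try_clone()?);
    Ok((reader, stream))
}
```

The main loop can then call read_line on the reader half for incoming messages and write_all on the writer half for outgoing ones, without the two interfering.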