Best way to send a frame over a TCP stream?

I have two programs: the client program sends data to the server program over TCP. Since TCP is stream-based and my data is defined as frames, it can happen that two frames arrive in the same TCP packet.

I am trying to find a solution for this. Here are some ideas, but I'm not sure if they fit my situation.

  1. Implement encoding and decoding with a crate. tokio_util::codec seems like the best solution, but the problem is that the server program is async (implemented with tokio) while the client is sync (it just uses std::net). tokio_util can't be used in sync code; is there another crate that would work?

  2. Similar to length_delimited: send the data length before the data. The server reads the length first, then reads that many bytes from the stream.

    let len = stream.read_u64().await.unwrap() as usize;
    // with_capacity would leave the Vec empty, so read_exact would read 0 bytes;
    // the buffer must actually be `len` bytes long before reading into it
    let mut buf = vec![0; len];
    stream.read_exact(&mut buf).await.unwrap();

If so, buf needs to be dynamically allocated every time data arrives. I'm worried this will cause performance issues.
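For completeness, the matching client side of this protocol is straightforward with std::net since writing doesn't need any framing state. A minimal sketch (the `send_frame` helper and its names are illustrative, not from any crate); the length is written big-endian to match tokio's `read_u64`:

```rust
use std::io::Write;

// Write a length-prefixed frame: an 8-byte big-endian length, then the payload.
// `w` can be a std::net::TcpStream on the sync client side.
fn send_frame<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<()> {
    w.write_all(&(payload.len() as u64).to_be_bytes())?;
    w.write_all(payload)?;
    Ok(())
}
```

Because each frame carries its own length, it doesn't matter how the kernel splits or coalesces the writes into packets.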

  3. Similar to #2: copy all data into a buffer as it is received, then read the length and data from that buffer.
    But if the frame data is split across two TCP packets, I need some extra code to handle it (maybe a state machine is needed).

This seems like a simple problem, but I can't find a graceful and simple solution. I'd really appreciate some advice.

There are a couple of things that are true regardless of packet splitting:

  • If you know the size of the data to be expected, then you can use an array
  • If you have an upper limit on the data size, you can still use an array. This may be inefficient if there are outliers in the data size distribution.
  • If the data size can be anything, you have to use a Vec.

This is fairly clean.

The code for this isn't too bad either.

It would not be especially difficult to write a synchronous version of tokio_util::codec::Framed. You can still use the Encoder/Decoder traits from tokio-util.
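To illustrate what that synchronous version would do internally, here is a hand-rolled sketch of the buffering state (no tokio-util involved; `FrameDecoder`, `feed`, and `next_frame` are made-up names). It accumulates incoming bytes and only yields a frame once the full length-prefixed payload has arrived, which also covers option #3's "frame split across two packets" case:

```rust
// Minimal sync length-prefix decoder: feed bytes in as they arrive from
// the socket, pull out complete frames. This mirrors the buffering that
// a synchronous Framed driving a Decoder would perform.
struct FrameDecoder {
    buf: Vec<u8>, // accumulated, not-yet-decoded bytes
}

impl FrameDecoder {
    fn new() -> Self {
        FrameDecoder { buf: Vec::new() }
    }

    // Append newly received bytes (e.g. from TcpStream::read).
    fn feed(&mut self, data: &[u8]) {
        self.buf.extend_from_slice(data);
    }

    // Return the next complete frame, or None if more bytes are needed.
    fn next_frame(&mut self) -> Option<Vec<u8>> {
        if self.buf.len() < 8 {
            return None; // length prefix not complete yet
        }
        let len = u64::from_be_bytes(self.buf[..8].try_into().unwrap()) as usize;
        if self.buf.len() < 8 + len {
            return None; // payload not complete yet
        }
        let frame = self.buf[8..8 + len].to_vec();
        self.buf.drain(..8 + len); // drop the consumed bytes
        Some(frame)
    }
}
```

The read loop then just calls `feed` with whatever the socket returns and drains `next_frame` until it yields None, so partial frames and multiple frames per read both fall out naturally.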

As for your second option, this is pretty common. The trick is to reuse the vector so that you don't have to allocate every time.
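A sketch of that reuse pattern, assuming the same length-prefixed protocol (the `read_frame_into` name is illustrative; shown with std's blocking `Read`, but the tokio version is the same shape):

```rust
use std::io::Read;

// Reuse one Vec across reads: `resize` only reallocates when the new
// frame is larger than any frame seen before, so steady-state reads
// are allocation-free.
fn read_frame_into<R: Read>(r: &mut R, buf: &mut Vec<u8>) -> std::io::Result<()> {
    let mut len_bytes = [0u8; 8];
    r.read_exact(&mut len_bytes)?;
    let len = u64::from_be_bytes(len_bytes) as usize;
    buf.resize(len, 0); // keeps existing capacity when possible
    r.read_exact(buf)?;
    Ok(())
}
```

The caller holds one `Vec` for the lifetime of the connection and passes it into `read_frame_into` on every iteration of the receive loop.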

Thanks, you two! I hadn't considered using an array as the buffer.

    let mut data_buffer = [0; 256];
    let len = stream.read_u64().await.unwrap() as usize;
    // guard against an out-of-range length before slicing, or this will panic
    assert!(len <= data_buffer.len());
    stream.read_exact(&mut data_buffer[0..len]).await.unwrap();

This code works now, though I'm not sure it's robust enough. I'll use tokio_util::codec::Framed to improve it!