Best approach to make a client that receives messages via UDP

I'm working on a NatNet client and I don't know what the best approach is for handling received messages.

My first thought is to use a get_message method that handles the next message.

impl Client {
    pub async fn get_message(&self) -> Result<Message> {
        let mut buf = [0u8; 32768];
        let (count, _addr) = self.socket.recv_from(&mut buf).await?;
        // decode the first `count` bytes of `buf` into a `Message`
    }
}

Using the client like this:

loop {
    if let Ok(message) = client.get_message().await {
        /* ... */
    }
}

Here are my questions:

  1. Is it worth storing the buf in the client struct, or should I just create a new one on each call to get_message?
  2. Is it better to spawn a thread to handle messages? In that case, how do I notify the user that a new message has been received? Do you have an example crate that works like this?

One thing to keep in mind is that your buffer is stored on the stack. In terms of runtime performance this means "allocating" the buffer is effectively free (the function prologue already adjusts the stack pointer to make space for all local variables) as opposed to going through the global allocator, but you'll still need to zero out that 32k buffer on every call.

That overhead may or may not be a concern for you.
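
If that zeroing ever does show up in a profile, one alternative is to keep a single buffer inside the Client and reuse it across calls. A minimal sketch, assuming a tokio::net::UdpSocket and returning the raw bytes (the Message decoding from your snippet is left out); note that get_message now needs &mut self:

use tokio::net::UdpSocket;

pub struct Client {
    socket: UdpSocket,
    // one receive buffer, allocated and zeroed once instead of on every call
    buf: Vec<u8>,
}

impl Client {
    pub fn new(socket: UdpSocket) -> Self {
        Client { socket, buf: vec![0u8; 32768] }
    }

    // `&mut self` because we write into the shared buffer
    pub async fn get_message(&mut self) -> std::io::Result<Vec<u8>> {
        let (count, _addr) = self.socket.recv_from(&mut self.buf).await?;
        // only the first `count` bytes are valid; a real client would
        // decode them into a Message here instead of copying them out
        Ok(self.buf[..count].to_vec())
    }
}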

It looks like you are using async, so you won't want to spawn threads per se, but it's quite feasible for users to spawn new tasks like this:

loop {
    if let Ok(message) = client.get_message().await {
        tokio::task::spawn(async move {
            handle_message(message).await;
        });
    }
}

In this case, the user is writing the loop {} and calling client.get_message().await themselves, so they know exactly when a message is received.


You might also want to look at implementing the futures::stream::Stream trait (think of it like an "async iterator") for your Client.
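
If you don't want to hand-write poll_next, one way to build such a stream is futures::stream::unfold, which repeatedly drives an async closure and yields its results. A minimal sketch, assuming a Client with an async get_message returning Result<Message> (message_stream is just an illustrative name):

use futures::stream::Stream;

// wrap the client's get_message loop in a Stream of messages
fn message_stream(client: Client) -> impl Stream<Item = Message> {
    futures::stream::unfold(client, |mut client| async move {
        match client.get_message().await {
            // yield the message and hand the client back as the next state
            Ok(message) => Some((message, client)),
            // ending the stream on error keeps the sketch short; a real
            // client might want to log the error and keep going instead
            Err(_) => None,
        }
    })
}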

By implementing a common trait, your code will compose nicely with other people's and your users get some useful helpers for free. For example, the for_each() method is quite similar to the explicit loop {} way of using a client: it runs a closure for each item in the stream, waiting for the previous item to be processed before moving on to the next.

The for_each_concurrent() method does something similar, except items are processed concurrently. This gives your users more throughput (they don't need to wait for the previous item to be processed before reading the next), but it means messages might finish being handled out of order (e.g. the second message could finish being handled before the first).
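
As a sketch of how that might look for users, assuming the message_stream helper above and a handle_message function of your own (run inside an async context):

use futures::stream::StreamExt;

// process messages one at a time, in order
message_stream(client)
    .for_each(|message| async move { handle_message(message).await })
    .await;

// or process up to 8 messages at a time, trading ordering for throughput
message_stream(client)
    .for_each_concurrent(8, |message| async move { handle_message(message).await })
    .await;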
