Suppose we have a highly asynchronous system that sends and receives a huge number of datagrams and should be optimized for speed. The system is based on tokio (more precisely, on the actix actor model).
Now the (maybe) naive way to implement this is to use a single tokio UDP socket for both sending and receiving. For every incoming datagram we could spawn a new actor that processes the datagram. Outgoing datagrams from actors are also sent through the same socket.
However, I'm not sure whether this is the most effective approach.
Maybe it's better to have two sockets (say, bound to different ports): one that handles only incoming datagrams and another solely for outgoing datagrams.
The reasoning is that transmission might be faster this way, but I don't know whether that really makes sense. Is there even a rational answer to this? By the same reasoning one could argue that $n$ sockets handling outgoing datagrams are better than one, maybe even one per actor. However, actors come in and out of existence, and creating a new socket each time might be slow.
I think my problem is that I don't know how to answer this question purely theoretically. Maybe the only way to find out is to implement all the variants and benchmark them.