There is no one idiomatic approach, because there are two separate issues embedded in your question:
Whether the queue is bounded or not depends on the consequences of that bound being reached, and more generally, on how you want to handle backpressure within your system. If the client suddenly throws an enormous amount of data at you (which you can't trust it not to do, because in general you can't trust the client), do you want to attempt to buffer all that data? Is it acceptable for (parts of) the system to block until the data can be dealt with?
Whether you have one or multiple channels depends largely on what you're doing with the messages and how parallelizable that work is. If your game state is a monolithic blob, having multiple queues accepting the messages won't buy you anything, because they'll all have to wait for exclusive access to the state. OTOH, if the game state can be divided into lots of independent pieces, you might have N channels applying changes to those pieces in parallel.
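For the divisible-state case, here's a minimal sketch of what N channels over independent shards could look like, using std::sync::mpsc (ShardUpdate and the modulo routing rule are just made up for illustration):

```rust
// One worker thread per shard, each owning one independent piece of state
// and one receiver, so no shared lock is needed.
use std::sync::mpsc;
use std::thread;

struct ShardUpdate {
    entity_id: u64,
    delta: i64,
}

fn main() {
    const N_SHARDS: usize = 4;

    // One bounded channel per shard; each worker has exclusive access to its piece.
    let mut senders = Vec::new();
    for shard in 0..N_SHARDS {
        let (tx, rx) = mpsc::sync_channel::<ShardUpdate>(64);
        senders.push(tx);
        thread::spawn(move || {
            let mut local_state: i64 = 0; // stand-in for this shard's slice of game state
            for update in rx {
                local_state += update.delta;
                println!("shard {shard}: entity {} -> {local_state}", update.entity_id);
            }
        });
    }

    // Route each message to the shard that owns the affected entity.
    for entity_id in 0..8u64 {
        let shard = (entity_id as usize) % N_SHARDS;
        senders[shard]
            .send(ShardUpdate { entity_id, delta: 1 })
            .expect("shard worker has exited");
    }
}
```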
This depends on a few factors, but what you should be thinking about is the desired behavior when processing can't keep up and the queues begin to fill with messages:
Do you want "fairness" between the clients? Then maybe N channels served in a round-robin fashion would be better than 1 channel.
Do you absolutely need to process every message, or could you drop messages when the queue is full? In the latter case, you could use try_send on a bounded queue instead of send, which waits for a space in the queue (see the sketch after this list).
Is it not OK either for the senders to wait or for messages to be dropped? Then use unbounded queue(s).
Note that bounded queues using the blocking send can cause deadlocks when there are interdependent queues that fill up, so care should be taken to avoid that.
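Here's a minimal sketch of the drop-vs-block choice on a bounded std::sync::mpsc channel (the capacity and message type are arbitrary):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    let (tx, rx) = sync_channel::<String>(32);

    let msg = String::from("player moved");

    // Blocking behaviour: waits until the receiver frees a slot (this is the
    // variant that can deadlock when interdependent queues fill up).
    // tx.send(msg.clone()).unwrap();

    // Dropping behaviour: give up immediately if the queue is full.
    match tx.try_send(msg) {
        Ok(()) => {}
        Err(TrySendError::Full(dropped)) => {
            eprintln!("queue full, dropping message: {dropped}");
        }
        Err(TrySendError::Disconnected(_)) => {
            eprintln!("receiver is gone");
        }
    }

    println!("received: {:?}", rx.try_recv());
}
```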
Unbounded is a recipe for running out of memory. Nothing is really infinite, so whether you like it or not, every channel will have a depth limit. Unbounded just won't handle it gracefully.
Depth 0/1 may unnecessarily block senders when the receiver is busy or just waiting on the OS to run the thread.
If you can block on the sender side without problems, pick something high enough to handle typical bursts of sends, but still reasonably low, e.g. number of players, or double that.
If blocking is undesirable, pick something very high, and treat a full queue as an error.
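For example (a sketch only; the capacity and the disconnect policy are assumptions), you can treat a failed try_send as a reason to drop the client rather than blocking:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

const CAPACITY: usize = 1024; // "very high" relative to normal bursts

fn forward_to_game(tx: &SyncSender<Vec<u8>>, packet: Vec<u8>) -> Result<(), &'static str> {
    match tx.try_send(packet) {
        Ok(()) => Ok(()),
        // The queue filling up means processing can't keep up: report an error
        // so the caller can drop this client's connection.
        Err(TrySendError::Full(_)) => Err("client flooded the queue, disconnecting"),
        Err(TrySendError::Disconnected(_)) => Err("game loop has shut down"),
    }
}

fn main() {
    let (tx, _rx) = sync_channel::<Vec<u8>>(CAPACITY);
    if let Err(e) = forward_to_game(&tx, b"hello".to_vec()) {
        eprintln!("{e}");
    }
}
```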
Yes. Most elements of your program are inherently "bounded" (they will not allocate arbitrary memory) simply by being sequential. When you introduce a channel, meaning the sender and receiver are concurrent, you start needing to introduce explicit back-pressure via mechanisms such as bounded channels.
The back-pressure must be able to propagate within your program all the way back to the part where you read data from the network, so that incoming messages (whether an accident or a DoS) cannot cause any part of your program to over-allocate. The network will then propagate this back-pressure to the computer sending the data.
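As a sketch of that propagation (assuming one blocking thread per connection and a bounded std channel), the read loop only pulls the next packet once the previous one has been accepted by the queue, so a full queue eventually throttles the socket:

```rust
use std::io::Read;
use std::net::TcpStream;
use std::sync::mpsc::SyncSender;

fn read_loop(mut stream: TcpStream, tx: SyncSender<Vec<u8>>) -> std::io::Result<()> {
    let mut buf = [0u8; 4096];
    loop {
        let n = stream.read(&mut buf)?;
        if n == 0 {
            return Ok(()); // connection closed
        }
        // Blocking here is the point: while the queue is full we stop reading,
        // the socket's receive buffer fills, and TCP flow control slows the peer down.
        if tx.send(buf[..n].to_vec()).is_err() {
            return Ok(()); // game loop has shut down
        }
    }
}
```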
You can also choose to close connections; this might be an appropriate solution if all individual connections are behaving reasonably but you have too many of them to serve at full speed. It depends on whether, for your game, it is better to provide slow service to many players, or normal service to a limited number of players; this will depend on game mechanics (e.g. turn-based games have much lower latency requirements than FPSes) and player expectations.