Really, how were you able to saturate your CPUs if the bottleneck is in a single thread handling I/O events?
Sorry for the confusion. I was also using TCP and HTTP keep-alives with h2load. I brought up multiple listen sockets because this load-balancing model is commonly configured so that each thread gets its own I/O handler. The load-balancing topic is discussed in depth in this article: https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
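For illustration, here's a minimal C sketch of that model, not my actual server code: each worker thread binds its own SO_REUSEPORT listening socket on a made-up port (8443), so the kernel keeps a separate accept queue per thread and spreads new connections across them. A real server would feed each accepted fd into that thread's own epoll loop instead of closing it.

```
#define _GNU_SOURCE
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define PORT    8443   /* placeholder port, not from the original post */
#define THREADS 4

static void *worker(void *arg)
{
    (void)arg;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return NULL; }

    int one = 1;
    /* Each thread binds its own socket to the same port; the kernel
     * load-balances new connections across all SO_REUSEPORT listeners. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {
        perror("bind/listen");
        close(fd);
        return NULL;
    }

    for (;;) {
        int conn = accept(fd, NULL, NULL);
        if (conn < 0) continue;
        /* In a real server this fd would go into the thread's own
         * epoll loop; close it here to keep the sketch short. */
        close(conn);
    }
}

int main(void)
{
    pthread_t tids[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```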
Aside: The article points out that multiple accept queues give you higher worst-case latency for new connections, but better load distribution. You don't want multiple listen sockets for their ability to handle more incoming connections, but for the better load balancing and therefore higher throughput.