[SOLVED] Testing a TCP server gives "cannot assign address" error


#1

Hi there,

I’m having trouble writing a test program for a small server. Both the test program and the server are written in Rust. The test creates dozens of connections and sends random data to each of them. The problem is that I seem to be creating too many connections, and I run out of ephemeral (random) ports.

I get this error ("cannot assign requested address"): http://stackoverflow.com/questions/7640619/cannot-assign-requested-address-possible-causes

The connections are closed (they are created and closed in each iteration of a loop), but it seems that SO_REUSEADDR is not set. How can I set it on Rust’s std::net::TcpStream?


#2

It’s not super clear from your description where the error is coming from. SO_REUSEADDR is generally only necessary for servers. If you’re getting that error on the client, you’re likely just opening too many connections.

If you do want to set the socket option, you can call libc::setsockopt on the fd, which you can get through std::os::unix::io::AsRawFd.


#3

You’re right, I probably didn’t explain my situation clearly. I get the error on the client because I’m fuzzing: I create many connections to see where the server could fail. It’s essentially a basic TCP echo server that I would like to test against all possible input.

The server works simply: it receives some data, returns a response, and then closes the socket. (Maybe this could be improved?) The problem is that the client runs out of addresses once I’ve made about 30k connections.

I’m trying to use libc::setsockopt, and I came up with something like this:

    let fd = stream.as_raw_fd();
    unsafe {
        libc::setsockopt(
            fd,
            libc::SOL_SOCKET,
            libc::SO_REUSEADDR,
            &(1 as c_int) as _ as *const c_void,
            mem::size_of::<c_int>() as socklen_t,
        );
    }

But I cannot figure out how to express the C code &(int){ 1 } here. How can I do it?


#4

The main problem with your situation is that TCP simply does not support opening that many connections to the same destination address. A TCP connection is uniquely identified by the tuple (src addr, src port, dst addr, dst port). Even once a connection is closed, duplicate packets (taking a slow path) might still be in flight on the network. TCP specifies that you need to wait a while (minutes) before reusing the tuple, which gives those straggler packets time to die off instead of interfering with a new connection. Normally this is not a problem, because you can just switch to a different source port if you immediately need to talk to the same destination. However, when opening so many connections you run out of source ports.

Possible solutions:

  1. Which end of the connection ends up in the TIME_WAIT state depends on which side closes the connection first. You might be able to change which side closes it. http://blog.davidvassallo.me/2010/07/13/time_wait-and-port-reuse/
  2. There might be OS-specific ways to change the behavior of the TCP TIME_WAIT state in violation of the TCP specification.
  3. Use UNIX sockets instead.
  4. Use multiple destination IP addresses and/or ports.
  5. SO_REUSEADDR normally has no effect on clients: according to the socket man page, it only affects the bind call, which is not normally used for client connections. You can bind manually, but this is not supported by Rust’s TcpStream (see the sketch after this list).
  6. Use multiple source IP addresses. Again, this requires binding on the client, which Rust’s TcpStream does not support.
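
For options 5 and 6, a client-side bind is possible with the third-party socket2 crate. A minimal sketch, assuming socket2’s current API; the addresses are placeholders and connect_from is a made-up helper name:

    use socket2::{Domain, Protocol, Socket, Type};
    use std::net::{SocketAddr, TcpStream};

    fn connect_from(src: SocketAddr, dst: SocketAddr) -> std::io::Result<TcpStream> {
        let socket = Socket::new(Domain::IPV4, Type::STREAM, Some(Protocol::TCP))?;
        socket.set_reuse_address(true)?; // SO_REUSEADDR, applied before bind
        socket.bind(&src.into())?;       // choose the source address/port yourself
        socket.connect(&dst.into())?;
        Ok(socket.into())                // convert into a plain std TcpStream
    }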

Just write 1usize as _
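
Spelled out, a version of the call that compiles might look like this; the two-step pointer cast is one way to express C’s &(int){ 1 }:

    use std::mem;
    use std::os::unix::io::AsRawFd;
    use libc::{c_int, c_void, socklen_t};

    let fd = stream.as_raw_fd();
    let optval: c_int = 1; // the value that C’s &(int){ 1 } points at
    unsafe {
        libc::setsockopt(
            fd,
            libc::SOL_SOCKET,
            libc::SO_REUSEADDR,
            &optval as *const c_int as *const c_void,
            mem::size_of::<c_int>() as socklen_t,
        );
    }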


#5

Could it be possible to reduce the number of connections?
I’m using the shutdown() method to signal the end of the data (it seems that after I write() in the client, the read() in the server does not complete otherwise), but maybe there is a way to send a packet that ends the current data stream and starts a new one. Is there?
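
What I’m doing now looks roughly like this (the address is a placeholder):

    use std::io::{Read, Write};
    use std::net::{Shutdown, TcpStream};

    let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();
    stream.write_all(b"random fuzz input").unwrap();
    // Half-close the write side: the server’s read sees EOF, so it replies.
    stream.shutdown(Shutdown::Write).unwrap();
    let mut response = Vec::new();
    stream.read_to_end(&mut response).unwrap();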


#6

I would advise you to make your data packets self-delimiting so you do not rely on TCP-level end-of-stream notification. While TCP distinguishes RST from FIN at the protocol level, this signal is not very useful to applications, because a FIN can be generated erroneously; for instance, if the client code crashes with a signal (POSIX), the OS closes all of its file descriptors, which causes FIN packets to be sent even though the stream is not logically over.

Consider why Content-Length was added to HTTP, and the implications for your own protocol.
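
Concretely, one option is a length prefix before every message. A minimal sketch, assuming a 4-byte big-endian header (the function names are made up):

    use std::io::{self, Read, Write};
    use std::net::TcpStream;

    // Write one message: 4-byte big-endian length, then the payload.
    fn send_msg(stream: &mut TcpStream, payload: &[u8]) -> io::Result<()> {
        stream.write_all(&(payload.len() as u32).to_be_bytes())?;
        stream.write_all(payload)
    }

    // Read one message delimited the same way.
    fn recv_msg(stream: &mut TcpStream) -> io::Result<Vec<u8>> {
        let mut len_buf = [0u8; 4];
        stream.read_exact(&mut len_buf)?;
        let mut payload = vec![0u8; u32::from_be_bytes(len_buf) as usize];
        stream.read_exact(&mut payload)?;
        Ok(payload)
    }

With framing like this, many requests can share one connection, because the receiver always knows where each message ends without waiting for a FIN.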

You’re probably going to need to use a connection pool of some kind for the testing process.


#7

If you know the time that a connection remains in TIME_WAIT state, you could limit the rate of new connections in the client such that the number of connections in TIME_WAIT state never exceeds a certain amount.

E.g. if you know that connections stay in the TIME_WAIT state for 5 minutes and the maximum number of usable source ports is 30k:

rate = 30000 / 300 s = 100 connections/second
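
A hypothetical pacing loop under those numbers (the address and the per-connection work are placeholders):

    use std::net::TcpStream;
    use std::thread;
    use std::time::Duration;

    // 100 connections/second, i.e. one new connection every 10 ms.
    let delay = Duration::from_millis(10);
    loop {
        let stream = TcpStream::connect("127.0.0.1:8080").unwrap();
        // ... send one fuzz input and read the response ...
        drop(stream); // the closed socket lingers in TIME_WAIT for ~300 s
        thread::sleep(delay);
    }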

#8

OK, I added a content-length field to the protocol, and I can now send all the requests I want over the same stream, so everything is working perfectly, thanks!!