How to properly close a TcpListener in a multi-threaded server?

Not necessarily; you can just start a new write loop. Your very first write loop could equally start at a moment when the socket happens to be not writable, so a later write loop has the same precondition as your very first one, and this does not cause any problem.

Yes, the very first write() loop has to wait for the initial "writable" event. Then we write until either all data has been written (for now!) or we get a WouldBlock error. In the latter case, we have to wait for the next "writable" event, and, once that happens, we again enter the write() loop, trying to write the remainder of the data. Eventually, we leave the write() loop after all data has been written and we are done for now. When this happens, the last write() clearly did not finish with a WouldBlock error, but wrote the final piece of our data. TTBOMK, there is no way to "enforce" a final WouldBlock error.

If, at a later time, we want to write another chunk of data to the same stream, we have to be careful! Because, the previous time, we did not leave with a WouldBlock error, the event has not been re-armed. If, now, we waited for a "writable" event – as we normally do before each write() attempt – it would result in a deadlock: the event is never triggered! So we have to "remember" in which state we left the last time. If we left without a WouldBlock error, we must not wait for a "writable" event initially :hushed:

No, you actually only wait if you received a WouldBlock. You always start the loop by just trying to write directly. Then you have a solution that works in both cases.

You could implement this by having some sort of "write buffer" which is consumed as much as possible each time the writable state occurs. The function that adds to this buffer must then always trigger the writing function (which calls write() in a loop, consuming the write buffer) as if a writable state had occurred. That is how you can treat all the cases easily.
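
Here is a minimal sketch of that write-buffer approach (assuming the mio crate for the non-blocking stream; the `Connection` type and its method names are made up for illustration):

```rust
use std::collections::VecDeque;
use std::io::{self, Write};

use mio::net::TcpStream;

// Hypothetical per-connection state: outgoing bytes are queued in `write_buf`
// and drained both when new data is queued and when a "writable" event arrives.
struct Connection {
    stream: TcpStream,
    write_buf: VecDeque<u8>,
}

impl Connection {
    // Always try to write first; only stop on WouldBlock. Because the same
    // function runs in both situations, no extra bookkeeping about "did the
    // last loop end with WouldBlock?" is needed.
    fn drain_write_buf(&mut self) -> io::Result<()> {
        while !self.write_buf.is_empty() {
            // Write the contiguous front part of the ring buffer.
            let written = {
                let (front, _) = self.write_buf.as_slices();
                self.stream.write(front)
            };
            match written {
                Ok(0) => return Err(io::ErrorKind::WriteZero.into()),
                Ok(n) => { self.write_buf.drain(..n); }
                // Socket not writable right now: keep the rest buffered and
                // wait for the next "writable" event before calling again.
                Err(e) if e.kind() == io::ErrorKind::WouldBlock => break,
                Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
                Err(e) => return Err(e),
            }
        }
        Ok(())
    }

    // Queuing new data behaves as if a "writable" event had just occurred.
    fn queue_and_send(&mut self, data: &[u8]) -> io::Result<()> {
        self.write_buf.extend(data);
        self.drain_write_buf()
    }
}
```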

No, you actually only wait if you received a WouldBlock. You always start the loop by just trying to write directly. Then you have a solution that works in both cases.

Thought about this myself :thinking:

But, if we always try to write() immediately, without waiting for an initial "writable" event, and if we only wait for the next "writable" event after a WouldBlock error was encountered, then it could happen that we begin write()ing while there is a pending (not yet handled) "writable" event in the queue, right? If so, the pending "writable" event is never consumed! Then, once we actually encounter the WouldBlock error and wait for the next "writable" event, we'll immediately consume the old/pending "writable" event. It would seem that we can write again, but in fact we only saw an old event that we had "missed" before...

Or do I worry too much and this case can safely be ignored?

Ultimately there are only a few options:

  1. Your project can use an existing async/nonblocking IO crate instead of doing blocking IO using std::net APIs.
  2. Your project can implement accept_timeout itself using select/poll/epoll, as Java does on platforms where SO_TIMEOUT isn't supported.
  3. std::net can implement accept_timeout for you, using the same select/poll/epoll based implementation as 2.

If you don't want to do 1 for your own reasons, then obviously having 3 implemented in std is less work for you, but the implementation is going to be basically the same either way, and the bar to get something added to std is, understandably, higher than putting it into your own project. If you think having this functionality in std is desirable then the best first step is likely to implement it in a portable way in a crate, both to prototype an implementation for std and to see how widely it's used.
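
For what it's worth, a rough sketch of what option 2 could look like on Unix (assuming the libc crate; `accept_timeout` here is a hypothetical helper, not an existing std API):

```rust
use std::io;
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::os::fd::AsRawFd;
use std::time::Duration;

/// Wait up to `timeout` for a pending connection on a blocking listener,
/// then accept it. Returns Ok(None) if the timeout expired first.
fn accept_timeout(
    listener: &TcpListener,
    timeout: Duration,
) -> io::Result<Option<(TcpStream, SocketAddr)>> {
    let mut pfd = libc::pollfd {
        fd: listener.as_raw_fd(),
        events: libc::POLLIN,
        revents: 0,
    };
    let millis = timeout.as_millis().min(i32::MAX as u128) as libc::c_int;
    match unsafe { libc::poll(&mut pfd, 1, millis) } {
        -1 => Err(io::Error::last_os_error()),
        0 => Ok(None), // timed out, no connection pending
        // Readable: accept() should not block now. (There is still a small
        // race if the client resets the connection in between; a production
        // version would put the listener into non-blocking mode as well.)
        _ => listener.accept().map(Some),
    }
}
```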


That's not a problem; you will just get a spurious writable event, you will then try to write, and the write will immediately return WouldBlock.

That's actually how you should already treat all events. From the mio docs:

It is important to never assume that, just because a readiness event was received, that the associated operation will succeed as well.

(Spurious Events)

It seems there's an example solution to exactly this question on the man page for epoll.

The example there uses sockets in non-blocking mode, and epoll notifies when there might be new connections to accept. epoll has the timeout argument you're looking for, as noted elsewhere in this thread.

Does this work for your use case? It seems mio::Poll is a fairly thin wrapper around epoll, so it should be straightforward to implement the same technique.
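
Roughly like this, I think (a sketch assuming mio 0.8; error handling and the shutdown check are kept minimal):

```rust
use std::io;
use std::time::Duration;

use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};

const LISTENER: Token = Token(0);

fn main() -> io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(16);

    let mut listener = TcpListener::bind("127.0.0.1:4000".parse().unwrap())?;
    poll.registry()
        .register(&mut listener, LISTENER, Interest::READABLE)?;

    loop {
        // The timeout is what std's blocking accept() lacks: poll() returns
        // after at most one second even if no connection arrived, so the
        // loop could check a shutdown flag (not shown) and exit cleanly.
        poll.poll(&mut events, Some(Duration::from_secs(1)))?;

        for event in events.iter() {
            if event.token() == LISTENER {
                // Accept until WouldBlock; the mio listener is non-blocking.
                loop {
                    match listener.accept() {
                        Ok((stream, addr)) => {
                            println!("accepted {addr}");
                            drop(stream); // hand off to a worker in real code
                        }
                        Err(e) if e.kind() == io::ErrorKind::WouldBlock => break,
                        Err(e) => return Err(e),
                    }
                }
            }
        }
    }
}
```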

That's actually what I have reverted to now :slight_smile:

Though, if we use mio::TcpListener to accept() the connection, then we'll also have to use mio::TcpStream (rather than std::net::TcpStream) for the rest. It means that we have to deal with mio events and polling all along, which has caused me some headache of its own...

If you don't want to do 1 for your own reasons, then obviously having 3 implemented in std is less work for you, but the implementation is going to be basically the same either way, and the bar to get something added to std is, understandably, higher than putting it into your own project. If you think having this functionality in std is desirable then the best first step is likely to implement it in a portable way in a crate, both to prototype an implementation for std and to see how widely it's used.

I really have no idea how to approach this :sweat_smile:

The std::net::TcpListener hides all the implementation details, e.g. the "native" file descriptor of the socket, as "private", so I cannot access it and build on top of it, e.g. by using something like libc.

It means I'd pretty much have to re-implement TcpListener from the ground up?

Probably a bit above my head :weary:

Note: Even though I participated occasionally, I haven't read the full thread.

Likely not just TcpListener but also TcpStream, etc.

I concluded on IRLO a while ago:

Concluding, I have to say that I can work with Rust's standard library to write a network application – but it is a hassle.

I believe for real use-cases, the Rust std library is unsuitable.

After I learned that Rust has made some other mistakes in std's design, particularly this one with AsRef, and struggles with fixing those mistakes, I understand that things aren't added to std lightly. I think we have to see std's I/O interfaces as a bare-minimum interface to the OS, which behaves in a platform-dependent way in many cases (even though it ideally shouldn't!).

So the only option really is to do something outside std.

To my knowledge, tokio is one of the most advanced I/O libraries. The "problem" is that it requires you to deal with async Rust, which is complex to understand. But does that mean you have to make your whole program async? I'm unsure. Maybe it's possible to provide a more or less thin wrapper around the exhaustive work put into tokio, which allows using the I/O interface from synchronous code. (Maybe such a wrapper would be not so thin if you want to get timeouts and closing right.) After all, there are block_on methods on the runtime and on the runtime's handles, which allow invoking async code from blocking, non-async threads. Maybe that would be the easiest approach for you?
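
A rough sketch of that block_on idea (assuming tokio with the "full" feature set; this is not a full synchronous wrapper, just the bare mechanism):

```rust
use std::io;
use std::time::Duration;

use tokio::net::TcpListener;
use tokio::runtime::Runtime;
use tokio::time::timeout;

fn main() -> io::Result<()> {
    // A plain, non-async main; all async work is driven via block_on.
    let rt = Runtime::new()?;

    let listener = rt.block_on(TcpListener::bind("127.0.0.1:4000"))?;

    // Synchronous "accept with timeout": block the current thread until
    // either a connection arrives or five seconds have passed.
    match rt.block_on(timeout(Duration::from_secs(5), listener.accept())) {
        Ok(Ok((_stream, addr))) => println!("accepted {addr}"),
        Ok(Err(e)) => return Err(e),
        Err(_elapsed) => println!("no connection within 5 seconds"),
    }

    Ok(())
}
```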

That said, I would appreciate if there was a better synchronous library for I/O because as said above: "it is a hassle [to use Rust's standard library to write a network application]".

That is not true: Rust Playground


That is not true: Rust Playground

use std::os::fd::AsRawFd;

I see!
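
For the record, a minimal sketch of the point the playground presumably makes (Unix only):

```rust
use std::net::TcpListener;
use std::os::fd::AsRawFd; // on older toolchains: std::os::unix::io::AsRawFd

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    // The "native" file descriptor is accessible after all, so one can build
    // on top of it with libc (poll, epoll, dup2, ...).
    let fd = listener.as_raw_fd();
    println!("listener fd = {fd}");
    Ok(())
}
```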

One janky workaround would be to swap it out for an incompatible file descriptor type - devfull = open("/dev/full"); dup2(devfull, socket); threads.join(); close(devfull);

That does seem like it should work since dup2 is atomic, though the errors you might get out of the thread might be surprising depending on exactly where the other threads are at the time, and obviously it puts the TcpListener into an unexpected and broken state where all subsequent usage is.. questionable.
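
A hedged Rust sketch of that dup2 trick (assuming the libc crate; `unblock_accept` is a made-up helper name):

```rust
use std::fs::File;
use std::io;
use std::net::TcpListener;
use std::os::fd::AsRawFd;

// Atomically replace the listener's fd with one referring to /dev/full.
// The idea from the post above: after this, accept() on the listener should
// fail (e.g. with ENOTSOCK) instead of blocking forever.
fn unblock_accept(listener: &TcpListener) -> io::Result<()> {
    let devfull = File::open("/dev/full")?;
    let ret = unsafe { libc::dup2(devfull.as_raw_fd(), listener.as_raw_fd()) };
    if ret == -1 {
        return Err(io::Error::last_os_error());
    }
    // `devfull` is closed when it goes out of scope. The listener now refers
    // to /dev/full and is in a broken state: it must only be dropped, never
    // used again.
    Ok(())
}
```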

