I'd like to be able to cleanly cancel any outstanding HTTP requests in threads at program exit. I have to call something that uses long polling, so I'll have requests outstanding with a one-minute timeout.
My options seem to be:

1. Just exit the main thread and let the stuck thread die. Other threads are not allowed to outlive the main thread; they get killed, with no clean shutdown. Right now, I get all threads to shut down, join them, log some statistics, and exit, so I'd prefer not to take that route.

2. Pull in the whole Reqwest/Hyper/Tokio/async monster. I've been trying to avoid that. I have many threads, running at different priorities, some of which are compute-bound. This is not really compatible with the Tokio/async model.

3. Modify ureq, which I use now, to have a "shutdown" function. See this issue for a proposal.
I have many threads, running at different priorities, some of which are compute-bound. This is not really compatible with the Tokio/async model.
You don't have to run all of your computation on the Tokio threads. Everything that you can do to communicate between non-async threads, you can also do with async (except for blocking on a lock that is held for significant periods, but that's usually undesirable anyway).
(And if you're not trying for maximum throughput on your HTTP, then you don't need a large Tokio thread pool.)
Or if you want to stick with threads:
Right now, I get all threads to shut down, join them, log some statistics, and exit.
You could arrange for all of the data required to create the statistics to be delivered continuously to some shared storage or management thread. Then it doesn't matter that you won't get a last report from the stuck thread: it won't have done anything since the last time it reported.
Yeah, I've found that sampling-based metrics tend to be superior for long-running processes. On a shutdown request, you'd send out the cancellation signal and set a deadline, ask the metrics library to do a final sample and flush to the collection server (or file), wait until the deadline, then exit the process, killing anything that hadn't cancelled yet.
The poller reads from a remote site, parses, and sends results over a crossbeam-channel. The only real problem is getting the send end of the crossbeam channel closed, so that the receive end sees an end of file and closes down properly.
Usually, I'd just pass a clone of the Sender end of the channel to the sub-thread. But here, I can pass a clone of an Arc<Mutex<Option<Sender>>> instead.
That way, another thread can do a "take" on the Option and drop the result. That closes the channel. The thread at the receive end of the channel then reads the closed channel, detects it, and does all the finish-up work. The poller thread may still be stuck in ureq until program exit, but nothing is waiting for it, so that doesn't matter.
Hacky approach that'll need platform-specific code: poke the open sockets (via /proc/self/fd on Linux; I think Windows has a way to enumerate sockets too) and shut down the TCP connections, which (I think) should cause pending reads to return an error.
Another option is to implement the polling yourself instead of delegating it to a library, so you can control the sleep and interrupt it early, e.g. via std::thread::park_timeout and Thread::unpark, or some abstraction built on that.
The classic Unix approach to wake up a read is to set a flag, adjust signal handlers if necessary, and then send a signal to the thread (e.g. SIGINT). The read will fail with EINTR. Many readers ignore that and just retry, but they could check the flag before retrying.
Also note that one doesn't need async to do cancelable IO. E.g. threads + O_NONBLOCK + (e)poll is a way to do that synchronously at the language level.