Hyper requests timing out when host is reachable

Thank you for this. So nothing suitable for what I'm doing then! I hadn't considered the first, but I don't have a need for it yet.

I had wondered about the maximum connections from the OS. I've checked it by:

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768	60999
$ cat /proc/sys/fs/file-max
1592665

So I think the maximum number is 318,533, as per: Is there a hard limit of 65536 open TCP connections per IP address on linux? - Super User

What's throwing me off with this issue is that while the application is running and connections are timing out, I can visit the sites it says are failing and they load fine in a browser. But I don't know whether that's because the OS is allowing another application to establish a connection through some kind of balancing mechanism. I have tried a new Hyper Client instance for each request and that fails too.

I'm mostly out of ideas then. What happens if you run the application twice simultaneously, each with 500 connections at the same time?

Good idea! I'll give that a go.

I tried it with two applications and it was fine. I think I might have misread the maximum number of connections that my PC can have at one time: that figure is global, and there's still a maximum number of connections/files that a single process can have open at once, which on my computer is 1024. So I think I need to find a better way of managing the maximum number of connections opened at any one time, to detect when that limit is about to be breached, and to apply some kind of back pressure to the application.
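
The kind of back pressure I have in mind would look roughly like the sketch below, using a tokio::sync::Semaphore: the loop can't start another request until a permit is free. The limit value and the request call here are placeholders, not my actual code.

use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // Placeholder cap; the right value depends on the per-process open-file limit.
    let limit = 500;
    let semaphore = Arc::new(Semaphore::new(limit));

    for i in 0..10_000u32 {
        // Waits here once `limit` permits are taken, so the loop itself
        // applies the back pressure.
        let permit = semaphore.clone().acquire_owned().await.unwrap();
        tokio::spawn(async move {
            // The actual request future would be awaited here.
            let _ = i;
            drop(permit); // permit released when the request finishes
        });
    }
}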

What happens if you use multiple Tokio runtimes inside a single application?

I'll give that a go tonight 🙂
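
For reference, building a second runtime by hand would look something like this sketch; the worker-thread counts and the work being spawned are placeholders:

use tokio::runtime::Builder;

fn main() {
    // Two independent multi-threaded runtimes, each with its own worker pool.
    let rt_a = Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()
        .unwrap();
    let rt_b = Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()
        .unwrap();

    // Work can then be spawned onto either runtime independently.
    let a = rt_a.spawn(async { /* requests for one half of the hosts */ });
    let b = rt_b.spawn(async { /* requests for the other half */ });

    rt_a.block_on(a).unwrap();
    rt_b.block_on(b).unwrap();
}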

What would be the best way of limiting the maximum number of futures active here?

        while let Some(req) = rx.recv().await {
            // Spawns a task per request with no upper bound on how many are in flight.
            let fut = HostResolver::do_request(req, client.clone(), request_timeout);
            let _ = tokio::spawn(fut);
        }

I feel like I could use FuturesUnordered but I'm not sure how to structure it exactly.

Here is one way:

use futures::stream::{FuturesUnordered, StreamExt};

let mut futs = FuturesUnordered::new();
let max = 50;
while let Some(req) = rx.recv().await {
    // Don't start another request until we're back under the limit.
    while futs.len() >= max {
        // or if you want to store the output, do so here
        let _ = futs.next().await.expect("Panic in spawned task.");
    }
    let fut = HostResolver::do_request(req, client.clone(), request_timeout);
    futs.push(tokio::spawn(fut));
}

// Drain whatever is still in flight once the channel closes.
while let Some(res) = futs.next().await {
    res.expect("Panic in spawned task.");
}

Thank you!

I've added that in and it's most definitely helped to limit the number of futures. I'm still ending up with more open files than I'd like, but it's under the per-process maximum, which I've checked using:

$ ls /proc/8005/fd/ | wc -l
932

And I've set the maximum number of futures to 900, so it looks like there's some investigating to do. This is also creating a new client per request.
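
For comparison, sharing a single client across tasks would look roughly like this sketch; reqwest's Client keeps an internal connection pool and is cheap to clone, so the clones all reuse the same pool. The URLs and the timeout value here are placeholders.

use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // One client, built once; clones share the same connection pool.
    let client = reqwest::Client::builder()
        .timeout(Duration::from_secs(10))
        .build()?;

    let mut handles = Vec::new();
    for url in ["https://example.com", "https://example.org"] {
        let client = client.clone();
        handles.push(tokio::spawn(async move {
            client.get(url).send().await.map(|r| r.status())
        }));
    }
    for handle in handles {
        println!("{:?}", handle.await.unwrap());
    }
    Ok(())
}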

I've tried a few more things:

  • Tried using the reqwest crate and applying a connect timeout, so the timeout only applies to the connection phase, and removing my Tokio timeout (a rough sketch of this follows the list).
  • Creating a new client every X requests.
  • Creating a new client for every request.
  • Monitoring the active number of open files. This stays well under the OS limit.
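
The connect-timeout setup from the first item was roughly this; the URL and the five-second value are placeholders:

use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // connect_timeout only bounds the connection phase,
    // unlike .timeout(), which covers the whole request.
    let client = reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(5))
        .build()?;

    let status = client.get("https://example.com").send().await?.status();
    println!("{status}");
    Ok(())
}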

But no joy with any of them. The application seems to reach a point at which nearly every request times out and fails. Some do go through, and I'm guessing those are ones that were already scheduled. The errors returned from reqwest are:

error sending request for url (URL): operation timed out

So I'm a bit lost on this one.

I've opened an issue on the Reqwest repository to see if anyone there can provide any advice as to what's going on. I've added a more minimal example of the issue that I'm having: Requests timeout when making a large number · Issue #915 · seanmonstar/reqwest · GitHub

Where are you running this? Could the ISP be blocking the requests if there are too many?

I'm running this locally. I've wondered about this too, which is why I've tried accessing the sites that the program says are timing out, and they're fine in a browser.

You could try spinning up a cloud VM somewhere and trying it there. I'm not sure what sort of limitations are put in place for this kind of thing, but it's at least another data point.

I've got a couple of droplet servers that I can try this on later on. I'll try it on them tonight and report back if the issue is still present!

It appears to work on one of my servers. However, that server isn't the fastest, so I'm not sure whether that's just because it's quite slow.

I would have thought that if an ISP was going to perform blocking, it would block the IP that the requests were coming from, not just the port, if that's even possible.

They may treat your program as some sort of malware since it’s making so many requests at once
