DNS error when sending >1024 parallel requests with reqwest

I'm trying to stress test a service I'm running locally on my PC, and for this I tried using tokio + reqwest.

I tried limiting concurrent accesses to the client:

struct SpamClient {
    client: reqwest::Client,
    semaphore: Arc<tokio::sync::Semaphore>,
}

I'm sending requests like this:

    let spam_client = SpamClient {
        semaphore: Arc::new(tokio::sync::Semaphore::new(1000)),
        client: reqwest::Client::builder().build()?,
    };

    let tasks = (0..cli.count)
        .map(|i| async {
            let _permit = spam_client.semaphore.acquire().await.unwrap();
            spam_client.client.get(URL).send().await
        });
    let results = futures::future::join_all(tasks).await;

However, when I send more than 1024 requests I get an error like this:

Error: reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("localhost")), port: Some(8983), path: "/solr/radio/select", query: Some("q=utt:hello"), fragment: None }, source: hyper::Error(Connect, ConnectError("dns error", Os { code: 16, kind: ResourceBusy, message: "Device or resource busy" })) }

Is there any workaround I could use to stop this error from happening? Generally, I'd like to send as many requests as possible, as quickly as possible.

Try replacing localhost in your URL with 127.0.0.1. That way you shouldn't make any DNS requests at all.

It helped with the DNS error, but now I'm getting a different kind of an error instead:

Error: reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(8983), path: "/solr/radio/select", query: Some("q=utt:hello"), fragment: None }, source: hyper::Error(Connect, ConnectError("tcp open error", Os { code: 24, kind: Uncategorized, message: "Too many open files" })) }

Probably related to the ulimit setting of your system. It usually defaults to 1024 max file handles per process.


That means you are trying to open too many sockets, which your OS prevents. You can see the limits imposed by Linux by running ulimit -Sa and increase the number of file descriptors you can open with ulimit -n <number>.


Would it be possible to add a cap on open sockets inside the Rust program, so that it never reaches the limit? For example by making it wait and release the old connections once it reaches 800 open sockets.

I think I could do it by creating a completely new reqwest::Client for each batch of ~800 requests, but I'm not sure if there's a more idiomatic way to achieve the same thing?

I have no idea how reqwest/hyper handles connection pooling and socket allocation internally.
But I think you should be able to avoid creating a new connection pool with 800 new sockets if you split your cli.count into epochs of 800 requests, join the responses with futures::future::join_all, and spawn the next 800 requests after awaiting the responses:

for _epoch in 0..cli.count / 800 {
    let tasks = (0..800).map(|_| spam_client.client.get(URL).send());
    let responses = futures::future::join_all(tasks).await;
}

Whether you deem this a good way to stress-test your solr server is up to you.

This makes the entire batch wait for the slowest request. A semaphore controls this better, because it ensures there's the right number of tasks in flight at any time.

None that I know of. You could check the Error object to identify whether it's the ResourceBusy or "Too many open files" error, wait some arbitrary amount of time, and retry.


That's true, but as I understood the OP, the semaphore solution does not prevent too many sockets being spawned by the connection pool. I was hoping that waiting for batches of requests would cause the connection pool to re-use previously spawned connections better, but now that I'm writing this, I actually think my reasoning there may be faulty and there's some other reason why hyper keeps creating new sockets instead of re-using the old ones.

I was able to get around that file handles error by writing it like this:

    let ids: Vec<_> = (0..cli.count).collect();
    let mut tasks = futures::stream::iter(ids.chunks(200))
        .flat_map(|chunk| {
            let client = Client::new(); // fresh client (and connection pool) per chunk
            futures::stream::iter(chunk)
                .map(move |i| client.get(format!("{URL} OR utt:{i}")).send())
                .buffer_unordered(200)
        });

This manages to reach a consistent ~1000 requests a second and doesn't seem to cause the too-many-open-files problem. By the way, I chose the magic numbers 200 and 800 arbitrarily; changing them doesn't seem to have much effect on execution speed.

The problem here is that the 1024 limit is per-process, not per-library.

And it's for everything: connections to databases, network sockets, open files, etc.

Only the developer knows how to spend that quota: do you want 1000 sockets and 20 files, or 1000 files and 20 sockets? That depends on the nature of the app you are making.

Also, most operating systems give you knobs to raise these limits if needed, but that sometimes requires admin access.

Lots of complications which definitely don't belong in a general-purpose HTTP library, I'm afraid.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.