Reqwest HTTP client fails under too much concurrency

I'm using reqwest and the tokio runtime to perform concurrent HTTP GET requests.

The reqwest documentation states that I should reuse the Client.

I therefore create a single instance of the HTTP client using ClientBuilder::new().

When I distribute it over too many tokio tasks, I start to get many errors (mainly timeouts).

However, with up to 32 tasks, it works perfectly well.

How can I share my reqwest client across more tasks to achieve higher bandwidth and hide latency?


To share a reqwest Client, you can call .clone() on it. Each clone will share the inner connection pool.

To add an upper limit on the number of concurrent requests, you can include a Semaphore with the Client, e.g. like this:

use std::sync::Arc;

// This struct can be cloned to share it!
#[derive(Clone)]
struct MyClient {
    client: reqwest::Client,
    semaphore: Arc<tokio::sync::Semaphore>,
}

Then acquire a permit from the semaphore before sending a request, and release the permit when you are done.

let permit = client.semaphore.acquire().await.unwrap();

// use client to perform get request

drop(permit);
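Putting it together, here is a sketch of how that could look when fanning requests out over many tasks. The fetch_all helper and the URL list are hypothetical, and it assumes the MyClient struct defined above:

// Hypothetical helper: spawn one task per URL while the semaphore in
// MyClient caps how many requests are in flight at the same time.
async fn fetch_all(client: MyClient, urls: Vec<String>) {
    let mut handles = Vec::new();
    for url in urls {
        // Cloning is cheap: the connection pool and semaphore are shared.
        let client = client.clone();
        handles.push(tokio::spawn(async move {
            // Wait until a permit is free before sending the request.
            let _permit = client.semaphore.acquire().await.unwrap();
            let result = client.client.get(url).send().await;
            // The permit is dropped at the end of this block, which
            // releases the slot so another task can start its request.
            result
        }));
    }
    for handle in handles {
        // Individual request errors are ignored in this sketch.
        let _ = handle.await;
    }
}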

Great!
And how should I configure the semaphore size?

The initial number of permits is passed as an argument to the semaphore's constructor, Semaphore::new.
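For example, a construction sketch reusing the MyClient struct from above (the limit of 32 is only a guess based on the task count that already worked for you; tune it experimentally):

use std::sync::Arc;
use tokio::sync::Semaphore;

let shared = MyClient {
    client: reqwest::Client::new(),
    // At most 32 tasks can hold a permit (i.e. have a request in flight) at once.
    semaphore: Arc::new(Semaphore::new(32)),
};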

Also be aware that hyper/reqwest will happily run out of system connections if you're not careful.

What happens is that the client has a connection pool, but idle connections are not necessarily closed immediately. Even with only a single concurrent connection, when you reuse the same connection pool to access thousands of different domains, the pool might keep connections open, since there is no per-pool limit on the total number of connections. Some details: https://github.com/hyperium/hyper/issues/2420

Our workaround is to set the idle connection timeout and the number of idle connections to zero. Less efficient, but no leaks.
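With reqwest's builder, that roughly corresponds to the pool settings below; a sketch of the idea, the exact knobs may differ from our setup:

use std::time::Duration;

// Sketch: effectively disable idle pooling so connections are closed
// as soon as they are no longer in use.
let client = reqwest::Client::builder()
    .pool_idle_timeout(Duration::from_secs(0)) // close idle connections immediately
    .pool_max_idle_per_host(0)                 // keep no idle connections per host
    .build()
    .unwrap();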

If you're only accessing a limited number of domains, you should be fine, but it might still be worth checking your system resources.

