Infinite loop in tokio::runtime::current_thread::Runtime::run()

How do I get the spawned futures to finish so that run() returns?

extern crate reqwest;
extern crate tokio;

use tokio::{prelude::*, runtime::current_thread::Runtime};

fn main() {
    let mut rt = Runtime::new().unwrap();

    for _ in 0..10 {
        rt.spawn(
            reqwest::r#async::Client::new()
                .get("https://example.com")
                .send()
                .and_then(|mut resp| resp.text())
                .map_err(|e| println!("{:?}", e))
                .map(|text| println!("{}", text)),
        );
    }

    let _ = rt.run();
    
    unreachable!();
}

If I spawn a future::ok(()) instead, run() returns immediately and I get no output.

I pasted your code into a new project and it successfully completed in less than a second.

I used stable rust with these dependencies:

[dependencies]
tokio = "*"
reqwest = "*"

Sorry, try this:

extern crate reqwest;
extern crate tokio;

use tokio::{prelude::*, runtime::current_thread::Runtime};

fn main() {
    let client = reqwest::r#async::ClientBuilder::new()
        .timeout(std::time::Duration::from_secs(5))
        .build()
        .unwrap();

    let mut rt = Runtime::new().unwrap();

    for _ in 0..10 {
        rt.spawn(
            client
                .get("https://example.com")
                .send()
                .and_then(|mut resp| resp.text())
                .map_err(|e| println!("{:?}", e))
                .map(|text| println!("{}", text)),
        );
    }

    let _ = rt.run();

    unreachable!();
}

I've previously had issues with infinite loops when using hyper. Usually the reason was that if the Client isn't dropped, the runtime never finishes because of some task internal to the client.

Simply add drop(client) just before rt.run
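Concretely, applied to the second snippet above (the placement here is my reading of the suggestion):

```rust
// ... same client setup and spawn loop as in the snippet above ...

drop(client); // release hyper's connection pool so its internal task can end
let _ = rt.run(); // now returns once the spawned requests finish
```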

That's weird. Just because I use ClientBuilder::new().build().unwrap() instead of Client::new()?

In the first example it was because you built the client inside the loop, so it was dropped at the end of the loop iteration.

Oh, you're right.

And what if I have 1000 requests? Before the 1000th one finishes, are there still 999 connections alive? Is there a more elegant way to do this?

Note that hyper keeps some background futures and a pool of connections, so a better option is to use block_on to guarantee that you finish once your future is done.

Take a look at this thing I wrote a few months ago: gist. It uses hyper instead of reqwest but the principle is the same.

This is a good point for the usual runtime, but since @AurevoirXavier is using the current thread runtime, the run method is correct.

Yes, at the beginning I made requests with block_on. But I don't know how to block_on 10 requests at once.

AFAIK even current_thread::Runtime::run should wait for all spawned futures to finish, or did you mean tokio::runtime::current_thread::run?

Consider looking at the join utilities in the futures crate.

Yes I mean tokio::runtime::current_thread::run, since that's the one they're calling in the code snippets.

I'd recommend not using join in this case. The problem is that if you use join, every future is polled every time one of them needs to be. If they are separate tasks, only the ones that currently need to are polled.


This one, async_multiple_requests? I have already checked it.

That means if I have 1000 requests, I have to join 1000 futures?

That's also a good reason not to use join.

Typically I only recommend using join if you are creating a future that depends on the result of every joined future. In this case, you are not waiting for the requests inside another future, so I don't think it is a good idea here.

Note that if you need to join a large number of futures, you should be using FuturesUnordered.
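A sketch of that approach with the futures 0.1 API used elsewhere in this thread (the URL and the request count of 1000 are placeholders):

```rust
extern crate futures;
extern crate reqwest;
extern crate tokio;

use futures::{stream::FuturesUnordered, Future, Stream};
use tokio::runtime::current_thread::Runtime;

fn main() {
    let client = reqwest::r#async::Client::new();
    let mut rt = Runtime::new().unwrap();

    // Collect all request futures into a FuturesUnordered; unlike join,
    // it only polls the futures that are actually ready to make progress.
    let mut batch = FuturesUnordered::new();
    for _ in 0..1000 {
        batch.push(
            client
                .get("https://example.com")
                .send()
                .and_then(|mut resp| resp.text()),
        );
    }
    drop(client); // as above: let the pool shut down when the batch is done

    // Drive the batch as a stream; results arrive in completion order.
    let _ = rt.block_on(
        batch
            .map_err(|e| println!("{:?}", e))
            .for_each(|text| {
                println!("{}", text);
                Ok(())
            }),
    );
}
```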

Thanks, learned a lot from you.

There is also the option to use join_all on any iterator of futures, but that is only useful if you need the results of all the futures together (most likely you'd use a Vec to store them).
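For completeness, a join_all sketch in the same futures 0.1 style (URL and count are again placeholders); it resolves to a Vec of all the bodies:

```rust
extern crate futures;
extern crate reqwest;
extern crate tokio;

use futures::Future;
use tokio::runtime::current_thread::Runtime;

fn main() {
    let client = reqwest::r#async::Client::new();
    let mut rt = Runtime::new().unwrap();

    // Build the request futures up front, then join them into one future.
    let requests: Vec<_> = (0..10)
        .map(|_| {
            client
                .get("https://example.com")
                .send()
                .and_then(|mut resp| resp.text())
        })
        .collect();
    drop(client);

    // join_all fails fast: the first error cancels the remaining futures.
    match rt.block_on(futures::future::join_all(requests)) {
        Ok(bodies) => println!("got {} responses", bodies.len()),
        Err(e) => println!("{:?}", e),
    }
}
```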

But as @alice mentioned, it might not always be a good idea, because join bundles all the futures together (which can have a negative impact on performance).

So ideally you might want to spawn each and every request, and then use another future to exit once they are all finished.
I.e. you spawn all requests on the runtime, and then call block_on or run with a future that exits once you're done.
This way you have maximum control, but you need to implement the exit logic yourself.

Dropping the client might not always be desirable in real cases, after all.
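One way to sketch that "another future to exit" idea, sticking to futures 0.1: give every spawned request a clone of an mpsc sender and block on the receiver, which ends once every sender has been dropped. The channel is my addition, not something from this thread:

```rust
extern crate futures;
extern crate reqwest;
extern crate tokio;

use futures::{future, sync::mpsc, Future, Stream};
use tokio::runtime::current_thread::Runtime;

fn main() {
    let client = reqwest::r#async::Client::new();
    let mut rt = Runtime::new().unwrap();
    let (tx, rx) = mpsc::unbounded::<()>();

    for _ in 0..10 {
        let done = tx.clone();
        rt.spawn(
            client
                .get("https://example.com")
                .send()
                .and_then(|mut resp| resp.text())
                .map(|text| println!("{}", text))
                .map_err(|e| println!("{:?}", e))
                // Runs whether the request succeeded or failed; dropping
                // `done` afterwards is what eventually closes the channel.
                .then(move |_| {
                    let _ = done.unbounded_send(());
                    future::ok::<(), ()>(())
                }),
        );
    }
    drop(tx); // only the per-request sender clones remain

    // The receiver stream terminates once all senders are gone, i.e. once
    // every spawned request has run to completion.
    let _ = rt.block_on(rx.for_each(|_| Ok(())));
}
```

Note that the client stays alive here; we exit because block_on returns once the completion signals run out, not because the client was dropped.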


Okay, I'll try with another future.

Thank you.