Waiting for destructors to execute on Tokio rt

After using Tokio ^1.0's Runtime::block_on, I'd like to be able to wait for destructors to run before quitting the program. I can call shutdown_background after block_on finishes, but this does not appear to produce the desired result.

You mean the destructor of things inside the call to block_on? Can you say a bit more about the structure of your code?

let rt = Builder::new_multi_thread().enable_time().enable_io().build().unwrap();

rt.block_on(async move {
    a_complex_nested_future.await?;
    Ok(())
});

rt.shutdown_background();

The internal architecture is fairly complex at this level of abstraction. There are joined and selected futures internally, especially depending on the number of client sessions running.

Note: this is for tests. I want to be able to inspect the state of an object before it drops.

Once the call to block_on has returned, all destructors inside it have already finished running, so if that's your structure, you don't need to do anything else. If you want to wait for destructors in spawned tasks, then shut it down with

drop(rt);

When you use shutdown_background(), you are explicitly asking it not to wait for destructors.

Okay, something interesting is happening. The a_complex_nested_future is really a tri-joined future, where each branch is a JoinHandle of a future spawned on the runtime. When this tri-joined future ends, I don't get any drop output from one of the deeply internal structures. However, when I instead tri-join the plain futures (i.e., non-spawned futures), one of the futures does properly emit log::info! on drop.

How did you reach the shutdown call if block_on is waiting on the JoinHandle and the shutdown call comes after block_on? The join handle won't complete unless the task completes, at which point the task is dropped.

#[test]
fn main() {
    let rt = Builder::new_multi_thread().enable_time().enable_io().build().unwrap();
    let handle = rt.handle().clone();

    rt.block_on(async move {
        let client0_future = handle.spawn(tokio::time::timeout(Duration::from_millis(TIMEOUT_CNT_MS as u64), client0_executor.execute()));
        let client1_future = handle.spawn(tokio::time::timeout(Duration::from_millis(TIMEOUT_CNT_MS as u64), client1_executor.execute()));
        let client2_future = handle.spawn(tokio::time::timeout(Duration::from_millis(TIMEOUT_CNT_MS as u64), client2_executor.execute()));

        let server_future = handle.spawn(server_executor.execute());
        tokio::time::sleep(Duration::from_millis(100)).await;

        // try_join! yields a tuple of the three timeout results once all
        // JoinHandles complete; flatten each one into a single error check
        let (res0, res1, res2) = tokio::try_join!(client0_future, client1_future, client2_future)?;
        flatten_err(res0).and(flatten_err(res1)).and(flatten_err(res2))?;

        log::info!("Ending test (client(s) done) ...");

        // Give the server time to stop now that the clients are done
        let _ = tokio::time::timeout(Duration::from_millis(100), server_future).await;

        log::info!("Execution time: {}ms", init.elapsed().as_millis());

        Ok(()) as Result<(), Box<dyn Error>>
    }).unwrap();

    std::mem::drop(rt); // explicit drop
}

Note: inside client0_future and friends, the client sessions are spawned via tokio::task::spawn or tokio::task::spawn_local, depending on the context selected by the user (the program supports both single-threaded and multi-threaded modes, like Nginx). In this case, would the destructors not get called because the inner spawned futures don't need to finish in order for the parent future client0_future to finish?

Yes, that was the error. I had to add logic to wait for the sessions to end cleanly