Tokio select without knowing how many blocks

I am creating a server that will fork a few (<10?) processes and talk to each of them over their stdin/stdout. Tokio's select! macro works great if I know ahead of time how many clients I'll have, but I don't. Crossbeam's select! macro is a wrapper around a Select struct that lets you dynamically add operations for select to watch, but it only works for channels. Is there something similar for Tokio?

There is FuturesUnordered from the futures crate, which works with any executor.

FuturesUnordered in futures::stream::futures_unordered - Rust

I've been playing with FuturesUnordered, but there's something I can't figure out. When I await on the FuturesUnordered variable, I get the value being returned by one of the futures it contains. I'd also like to know which future the output is from. Is that possible?

would it work to have your futures return an id as part of their result? e.g. (future_id, output_value) ?

The clients are untrusted. I could make the future_id unguessable, but then I'd have to worry about collusion between clients. I'd like to use the communication channel to reliably identify which one I'm talking to, which is something I could do with the tokio select macro if I knew ahead of time how many clients I would have.

I have an idea of how to do what I want, but it's more complicated than just getting the id from FuturesUnordered.

Why would the client need to be involved in the identification process? You may just wrap the futures you put into the FuturesUnordered with a future that tracks an ID.

let future_id = todo!();
futures_unordered.push(async move {
    let output_value = client.recv().await;
    (future_id, output_value)
});
Along the lines I was planning to try but far more elegant than what I had in mind.

I ran into a problem with your approach. I get a type error when I try to push a second one into the FuturesUnordered. I presume that's because, like closures, each async move {} block has its own distinct anonymous type. I ended up using an async function instead of that block. It's working fine after that change.

Yes, that's exactly why.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.