Short-circuiting Option::or() in parallel

I have a recursive function that looks something like this:

fn inner(/*some arguments*/) -> Option<Solution> {
    //do stuff...

    //last line
    inner(/*arguments*/).or(inner(/*different arguments*/))
}

All of inner's arguments are either cloned or immutable.
What I want is to evaluate the two calls in the last expression in parallel, returning as soon as one of them yields a Some value and cancelling the other.
Is something like this possible without much overhead?

Spawning a thread for each execution of inner sounds like massive overhead if it is executed more often than you have available cores on your machine, which I find highly likely. So I'd definitely spawn tasks on a thread pool rather than a thread per call to inner. That would be possible using rayon::ThreadPool. Unfortunately, I don't see a way to kill spawned tasks in rayon, as no handle is returned when you spawn a task onto a ThreadPool. I also don't know how (or whether it's even possible) to kill a thread spawned with the standard library.


One alternative might be to use async, which has natural cancellation semantics: dropping a future cancels it, so you can race two futures and keep the first result.


This kind of recursive pattern is what rayon::join is made for, and in fact the higher-level parallel iterators are built on that pattern too. The short-circuiting behavior is harder, though, because rayon has no built-in means to cancel work in progress. You can do it with a side channel of some sort, such as a shared AtomicBool that you set when you're "done" and poll at key moments in your work so you can bail out and return None.
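To make the stop-flag idea concrete, here is a stdlib-only sketch with an invented recursion (counting n down to zero): each call checks a shared AtomicBool on entry, and whichever branch succeeds first sets the flag so the others bail out. It spawns scoped threads per call purely for illustration; in real code rayon::join would take the place of the two spawns so you stay on a fixed-size pool.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical recursive search: `done` is the shared stop flag.
fn inner(n: u32, done: &AtomicBool) -> Option<u32> {
    // Poll the flag at a key moment: bail out if another branch won.
    if done.load(Ordering::Relaxed) {
        return None;
    }
    if n == 0 {
        // Found a "solution": tell every other branch to stop.
        done.store(true, Ordering::Relaxed);
        return Some(0);
    }
    // Run both recursive branches in parallel.
    // (rayon::join(|| ..., || ...) would replace these spawns.)
    std::thread::scope(|s| {
        let a = s.spawn(|| inner(n - 1, done));
        let b = s.spawn(|| inner(n / 2, done));
        a.join().unwrap().or(b.join().unwrap())
    })
}

fn main() {
    let done = AtomicBool::new(false);
    assert_eq!(inner(6, &done), Some(0));
}
```

Relaxed ordering is enough here because the flag only gates early exit; no other data is published through it.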

If you can find a way to restructure your computation as a parallel iterator, either with methods like par_iter() or a manual split, then try_reduce does have short-circuit behavior. That basically just uses an AtomicBool like I described above.


Thanks, I ended up using rayon::join and the stop bool with pretty much zero overhead.
I can't really restructure it as an iterator, so this is probably the best it can get.
