We are in the following scenario: we want to give the user a way to control the parallelism of a method. The method breaks a problem into n pieces and spawns exactly n rayon tasks.
This n is obtained from rayon::current_num_threads, so the user can, if they want, run the method inside a specific thread pool via install.
The problem is that we also have a thread gathering the results. When using the global thread pool, that is an extra thread, the main thread, sitting outside the pool. But if we run under an install, the gathering thread is itself part of the pool, which means we have to spawn one task fewer.
To detect that, one checks whether rayon::current_thread_index returns Some (we are on a pool worker) or None (we are outside any pool).
This is... ugly at best. The user must manually build a pool with one more thread than the desired parallelism.
Is there any idiomatic way to have a more ergonomic approach?
UPDATE: we dug a little deeper into our issue. Our rayon threads push messages into a crossbeam channel, and our main thread drains the iterator associated with the channel.
Now, the channel's iterator calls std::thread::yield_now, which is the reason why we "miss" one thread when we run under install: a pool worker that yields to the OS scheduler never helps the pool make progress. It should call rayon::yield_now instead.
We solved it with something like
/// An iterator over items received from a channel that uses
/// rayon's yield_now to avoid blocking worker threads.
struct ChannelIter<T> {
    rx: std::sync::mpsc::Receiver<T>,
}

impl<T> Iterator for ChannelIter<T> {
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        loop {
            match self.rx.try_recv() {
                Ok(item) => return Some(item),
                Err(std::sync::mpsc::TryRecvError::Disconnected) => return None,
                Err(std::sync::mpsc::TryRecvError::Empty) => {
                    // On a rayon worker this executes pending pool jobs;
                    // outside any pool it returns None, so fall back to
                    // the OS scheduler instead of busy-spinning.
                    if rayon::yield_now().is_none() {
                        std::thread::yield_now();
                    }
                }
            }
        }
    }
}
but we still wonder whether there is a ready-made or more idiomatic solution.