How to use a thread pool and an object pool together?

I need a thread pool to execute 10K tasks, and every task needs an identical "napkin" data structure, which is expensive to initialize.

I tried using a thread pool and an object pool together, but it seems they can't live in one struct: when I retrieve an object from the pool, it can't be handed to a thread, because it's a reference.

What's the workaround? Should I keep the object pool in the caller code instead?

use threadpool::ThreadPool;
use std::{error::Error, ops::DerefMut};
use object_pool::Pool;

type Res<T> = Result<T, Box<dyn Error>>;
struct Ram { data: Vec<u8> }
struct Cpu<'a> { ram: &'a mut Ram } // in the real app, this contains immutably referenced data

impl<'a> Cpu<'a> {
	pub fn do_calc(&mut self, data: u8) -> Res<usize> {
		// main working code is here
		todo!()
	}
}

enum Dispatcher {
	Single(Ram),
	Multi { pool: ThreadPool, rams: Pool<Ram> }
}

impl Dispatcher {
	pub fn do_calc(&mut self, data: u8) -> Res<usize> {
		match self {
			Self::Single(ram) => {
				let mut cpu = Cpu { ram };
				return cpu.do_calc(data)
			},
			Self::Multi { ref pool, ref mut rams } => {
				pool.execute(|| {
					// doesn't compile: `try_pull()` returns a temporary guard that is
					// dropped at the end of this statement while `ram` still borrows it;
					// the closure also borrows `rams`, so it can't be sent to the pool's threads
					let ram = rams.try_pull().unwrap().deref_mut();
					let mut cpu = Cpu { ram };
				});
				// idk how to extract the results here, whatever
				todo!()
			}
		}
	}
}

fn main() {
	let threads = 1usize;
	let mut disp = if threads == 1 {
		Dispatcher::Single(Ram { data: vec![1, 2, 3, 4, 5] })
	} else {
		let pool = ThreadPool::new(threads);
		let rams = object_pool::Pool::new(threads + 1, || Ram { data: vec![] });
		Dispatcher::Multi { pool, rams }
	};
	println!("{}", disp.do_calc(456).unwrap());
}

Things you could possibly do:

  • In theory you could iterate through all of the rams, send them to the thread pool to execute, then wait on the thread pool to finish all of the executions. I think it would work, even though each Cpu holds a mutable reference, since you're not mutating the rams container itself, just the items within it.
  • If that doesn't work, you could drain the object pool, move each owned ram into a thread, and extract them back out afterwards (each thread could send its ram back through a channel).

I've figured it out: there's an intermediate wrapper object that must not be dropped while the pooled ram object is in use.

The code desugars to this:

let mut wrapper: Reusable<'_, Ram> = objects_pool.try_pull().unwrap();
let ram = wrapper.deref_mut();

wrapper can't be dropped while ram is in use. So this way, I'd also have to store the Reusables in a Vec.

Regarding the second option: with this level of complexity, I'd probably rather implement my own set of threads with a receive loop and channels, sending jobs as an enum type. At least that will be more straightforward to read.
