In order to cut down on vector allocations, I would like to find something like Go's sync.Pool. Ideally I could create a static value using the lazy_static! macro and just get and put vectors when needed. I am okay with calling reset on the vectors when fetching them as this seems to be a common pattern.
So far I have come across the pool crate, which seemed ideal, but it uses the Drop trait to put values back, whereas I would like explicit control. This is because the vector may be moved across functions many times.
Rust doesn't provide garbage collection, so the pool needs some way to know that you aren't using the value anymore before it can hand it out again. If you need a reference-counted Checkout, you can use Rc&lt;Checkout&lt;T&gt;&gt; or Rc&lt;RefCell&lt;Checkout&lt;T&gt;&gt;&gt;.
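To make that concrete, here is a minimal sketch (the Checkout type below is a stand-in with a made-up Drop impl, not the pool crate's actual guard): the buffer only goes "back" when the last Rc clone is dropped, no matter how many functions it has been passed through.

use std::cell::RefCell;
use std::rc::Rc;

// Stand-in for a pool's checkout guard: its Drop impl is where the
// buffer would actually be handed back (here we just print instead).
struct Checkout(Vec<u8>);

impl Drop for Checkout {
    fn drop(&mut self) {
        println!("buffer returned to the pool");
    }
}

fn append_header(buf: &Rc<RefCell<Checkout>>) {
    buf.borrow_mut().0.extend_from_slice(b"level=info ");
}

fn main() {
    let buf = Rc::new(RefCell::new(Checkout(Vec::with_capacity(4096))));
    let shared = Rc::clone(&buf); // hand a second owner to another function/struct
    append_header(&shared);
    drop(buf);                    // first owner gone; nothing returned yet
    println!("still owned elsewhere");
    drop(shared);                 // last Rc gone -> Checkout::drop runs now
}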
Could you say more? I don't understand why the use of Drop prevents explicit control. You very much control when something is dropped.
In the more restricted use cases where I would have used a sync.Pool in Go, I've reached for the thread_local! macro, which has very little synchronization overhead.
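Roughly what I mean, as a sketch (BUF and log_line are placeholder names, and the sizes are arbitrary): each thread keeps one reusable buffer, cleared on every use, so there is no shared pool and no Mutex at all.

use std::cell::RefCell;

thread_local! {
    // One reusable buffer per thread; no cross-thread synchronization needed.
    static BUF: RefCell<Vec<u8>> = RefCell::new(Vec::with_capacity(16 * 1024));
}

fn log_line(msg: &[u8]) {
    BUF.with(|buf| {
        let mut buf = buf.borrow_mut();
        buf.clear();                 // "reset" on fetch, keeps the capacity
        buf.extend_from_slice(msg);
        // ... write `buf` to the sink here ...
    });
}

fn main() {
    log_line(b"hello");
    log_line(b"world"); // reuses the same allocation on this thread
}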
Maybe I am misunderstanding when something will be dropped. I am attempting to write a library that does zero-copy logging by reusing the buffers in this pool. Here are a few snippets of the API I have now:
lazy_static! {
    static ref BUFFER_POOL: Mutex<Pool<Vec<u8>>> = {
        let p = Mutex::new(Pool::with_capacity(DEFAULT_POOL_CAPACITY, 0, || {
            Vec::with_capacity(DEFAULT_BUFFER_SIZE)
        }));
        p
    };
}
Then the constructor for Entry, the type that will encode fields, looks like this:
pub fn new(level: Option<Level>, encoder: E) -> Entry<E> {
    let mut buffer = vec![];
    encoder.append_start(&mut buffer);
    // TODO: Make a lookup map for the levels or just do `ToString`?
    if let Some(_) = level {
        encoder.append_string(&mut buffer, "level", "info");
    }
    Entry {
        encoder: encoder,
        buffer: buffer,
    }
}
Right now I just initialize a new Vec thinking that if I grabbed one out of the pool, it would be dropped at the end of this function. But perhaps I am thinking about it in the wrong way. Hope this helps clear things up.
The Vec is moved to the Entry here and not dropped. It’ll only be dropped when nothing is using it anymore, which is when your Entry dies (assuming it doesn’t move the Vec elsewhere beforehand).
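You can watch this happen with a toy version of your types (this is only an illustration; Buffer stands in for whatever guard a pool would hand you): the value announces its own drop, gets moved into the struct the same way new moves the Vec into Entry, and is only dropped when the struct is, exactly where you decide.

struct Buffer(Vec<u8>);

impl Drop for Buffer {
    fn drop(&mut self) {
        println!("buffer dropped; a pool guard would return the Vec here");
    }
}

struct Entry {
    buffer: Buffer,
}

fn new_entry() -> Entry {
    let buffer = Buffer(Vec::with_capacity(1024));
    Entry { buffer } // moved into the Entry, not dropped at the end of this function
}

fn main() {
    let entry = new_entry();
    println!("entry is alive, buffer capacity {}", entry.buffer.0.capacity());
    drop(entry); // explicit control: the buffer is dropped exactly here
}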
The allocator is going to need a bunch of bookkeeping, and calls into it won't be inlined. I would be shocked if an allocator were as fast as what is essentially a Vec&lt;Vec&lt;u8&gt;&gt; pool.
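And if explicit control is the sticking point, that kind of pool is small enough to sketch by hand (illustrative only; BufferPool, get, and put are names I just made up): get hands back a cleared Vec<u8> with its capacity intact, put returns it, and nothing depends on Drop.

use std::sync::Mutex;

// A minimal explicit-checkout pool: essentially a Mutex<Vec<Vec<u8>>>.
pub struct BufferPool {
    buffers: Mutex<Vec<Vec<u8>>>,
    buffer_size: usize,
}

impl BufferPool {
    pub fn new(capacity: usize, buffer_size: usize) -> BufferPool {
        BufferPool {
            buffers: Mutex::new(Vec::with_capacity(capacity)),
            buffer_size,
        }
    }

    // Explicit get: reuse a pooled buffer (cleared, capacity kept) or allocate a fresh one.
    pub fn get(&self) -> Vec<u8> {
        let mut buffers = self.buffers.lock().unwrap();
        match buffers.pop() {
            Some(mut buf) => {
                buf.clear();
                buf
            }
            None => Vec::with_capacity(self.buffer_size),
        }
    }

    // Explicit put: the caller decides exactly when the buffer goes back.
    pub fn put(&self, buf: Vec<u8>) {
        self.buffers.lock().unwrap().push(buf);
    }
}

fn main() {
    let pool = BufferPool::new(16, 16 * 1024);
    let mut buf = pool.get();
    buf.extend_from_slice(b"level=info msg=hello");
    // ... move `buf` through as many functions as you like ...
    pool.put(buf); // returned only when you say so
}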