Multithreading, handling packets, vec![], and allocation

Suppose the following:

  1. we are running on a machine with 128 threads
  2. we have a function that is called on every UDP packet
  3. this function calls vec![] twice (per packet)

Did we just destroy our parallelism because vec![] involves talking to the global memory allocator? Now, two common 'obvious answers' would be:

  1. don't allocate
  2. use a custom allocator

For (2), what allocator do you have in mind? For (1), if we were to pre-allocate a bunch of Vecs, how do we know how many, and how to distribute them in a thread-safe manner?
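To make the setup concrete, here is a minimal sketch of the kind of handler described above; handle_packet and both buffers are hypothetical stand-ins, not the real code:

```rust
// Hypothetical per-packet handler: called once per UDP packet on any of the
// 128 worker threads, and hits the global allocator twice via vec![].
fn handle_packet(payload: &[u8]) -> Vec<u8> {
    // First allocation: a scratch buffer for decoding the packet.
    let mut scratch = vec![0u8; payload.len()];
    scratch.copy_from_slice(payload);

    // Second allocation: the buffer for the return packet data.
    let mut response = vec![0u8; scratch.len()];
    response.copy_from_slice(&scratch);
    response
}
```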

Not really. Everything before and after the "malloc" would still be parallelizable.

Why do you allocate twice per function? Can it be avoided or worked around?

Many allocators use thread-local caches to avoid contention (jemalloc or mimalloc, both of which I think can be used in Rust). If you're using the default (system) allocator then it'll depend on the platform you're running on. I know the default allocator on Ubuntu 14.04 was poor for long-uptime, heavily multithreaded systems but good for short-lived single-threaded ones (most CLI tools). The newer default allocator is supposed to be better in general, but I still used a custom one (jemalloc, with mimalloc on my to-test list) to avoid performance surprises when changing platforms.
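For what it's worth, swapping in one of those allocators is only a couple of lines; this sketch assumes the mimalloc crate (tikv-jemallocator is wired up the same way with its Jemalloc type):

```rust
// Replace the default (system) allocator with mimalloc for the whole program.
// Assumes the `mimalloc` crate has been added to Cargo.toml.
use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    // Every vec![] / Box / String now goes through mimalloc, which keeps
    // per-thread caches to reduce cross-thread contention on allocation.
    let packet = vec![0u8; 1500];
    println!("allocated {} bytes", packet.len());
}
```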


I am using one alloc just for allocating the return packet data.

Assuming the Vecs are only temporary, one way to manually reuse buffers is to store them in a thread_local RefCell. With that pattern I would recommend a thread pool (Rayon or something else), since a buffer will be allocated for every thread that touches it and will stick around until that thread goes away. Also, the buffer is only reused if the same thread goes through the same function again (which is pretty easy with a thread pool).
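A rough sketch of that pattern, assuming a hypothetical handle_packet and an arbitrary use of the buffer; only the thread_local + RefCell part is the point:

```rust
use std::cell::RefCell;

thread_local! {
    // One scratch buffer per thread: allocated lazily the first time a thread
    // gets here, then kept (and reused) until that thread exits.
    static SCRATCH: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

// Hypothetical per-packet handler that reuses the thread-local buffer instead
// of calling vec![] for the scratch space on every packet.
fn handle_packet(payload: &[u8]) -> Vec<u8> {
    SCRATCH.with(|buf| {
        let mut buf = buf.borrow_mut();
        buf.clear();                     // keep the capacity, drop old contents
        buf.extend_from_slice(payload);  // only grows until it reaches peak size
        // The return packet is still a fresh allocation per call.
        buf.to_vec()
    })
}
```

With a thread pool (e.g. Rayon's), the same worker threads keep coming back, so after warm-up each thread's SCRATCH stops hitting the allocator for the scratch space.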

