Concurrent Linked list

I have a singly-linked list where I try to implement efficient concurrency by wrapping each node pointer in a `RwLock`. I was able to create all the functionality needed for reading, but for writing I ran into lifetime problems that I am not able to solve.

Here is a small snippet of code that does not compile:

```rust
use std::sync::{Arc, RwLock, RwLockWriteGuard};

type Link<T> = Arc<RwLock<Option<Arc<Node<T>>>>>;

struct Node<T> {
    elem: Arc<T>,
    next: Link<T>,
}

struct Foo {
    test: u8,
}

fn apply_changes(
    mut local_cursor: RwLockWriteGuard<'_, Option<Arc<Node<Foo>>>>,
    updates: &Vec<Arc<Foo>>,
) {
    for update in updates.iter() {
        let Some(local_foo) = local_cursor.as_ref() else {
            // This means that we reached the end of the local list
            if update.test > 0 {
                let new_node = Arc::new(Node {
                    elem: update.clone(),
                    next: Arc::new(RwLock::new(None)),
                });

                *local_cursor = Some(new_node);

                // I can get the next link in write mode
                let next_link = local_cursor.as_ref().unwrap().next.write().unwrap();

                // Can't move my current cursor to the new position
                // local_cursor = next_link; // ! Does not compile !
            }
            continue;
        };
        // ... (advance the cursor over the existing node)
    }
}

fn main() {
    let head = Arc::new(RwLock::new(None));
    let updates = vec![Arc::new(Foo { test: 1 }), Arc::new(Foo { test: 2 })];

    apply_changes(head.write().unwrap(), &updates);
}
```

How can I overcome this problem? And are there any other suggestions for this linked-list implementation?

I think this is a case where you need to keep a stack of guards around. An unchecked recursive version compiles, for example (but it carries a risk of stack overflow even if it is correct).
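A minimal sketch of that recursive, stack-of-guards approach (the types are copied from the question; `append_all` and the unconditional-append logic are my simplifications, not the original poster's code). Each recursive call clones the next link's `Arc` onto its own stack frame, locks it, and only then drops the previous guard; every guard therefore borrows an `Arc` that lives on a caller's frame, which is what satisfies the borrow checker:

```rust
use std::sync::{Arc, RwLock, RwLockWriteGuard};

type Link<T> = Arc<RwLock<Option<Arc<Node<T>>>>>;

struct Node<T> {
    elem: Arc<T>,
    next: Link<T>,
}

struct Foo {
    test: u8,
}

fn append_all(
    mut cursor: RwLockWriteGuard<'_, Option<Arc<Node<Foo>>>>,
    updates: &[Arc<Foo>],
) {
    // Clone the next link's Arc (if any) onto this stack frame, so a guard
    // borrowing it can be passed down the recursion.
    let existing_next: Option<Link<Foo>> =
        cursor.as_ref().map(|node| node.next.clone());
    match existing_next {
        // Not at the end yet: lock the next link, then release the current
        // one (hand-over-hand). Recursion depth equals the list length, so
        // this risks stack overflow on long lists.
        Some(next) => {
            let next_guard = next.write().unwrap();
            drop(cursor);
            append_all(next_guard, updates);
        }
        // At the end of the list: append one update, recurse for the rest.
        None => {
            let Some((first, rest)) = updates.split_first() else {
                return;
            };
            let new_node = Arc::new(Node {
                elem: first.clone(),
                next: Arc::new(RwLock::new(None)),
            });
            let tail = new_node.next.clone(); // keep the tail link alive here
            *cursor = Some(new_node);
            let tail_guard = tail.write().unwrap(); // lock the next link...
            drop(cursor); // ...before releasing the current one
            append_all(tail_guard, rest);
        }
    }
}

fn main() {
    let head: Link<Foo> = Arc::new(RwLock::new(None));
    let updates = vec![Arc::new(Foo { test: 1 }), Arc::new(Foo { test: 2 })];
    append_all(head.write().unwrap(), &updates);

    let first = head.read().unwrap().as_ref().unwrap().clone();
    let second = first.next.read().unwrap().as_ref().unwrap().clone();
    println!("{} {}", first.elem.test, second.elem.test); // prints "1 2"
}
```

This locks hand-over-hand, but every node is still reached through recursion; an iterative version would need either unsafe code or a helper that can store a guard alongside the `Arc` it borrows from.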

But I admit I didn't probe too hard, because why implement a linked list?


Being concurrent is actually a pretty decent reason for using a linked list.

However, that's for a lock-free use case using atomic pointers. If you're locking the links between nodes, you're essentially limiting use of the entire list behind the front node's lock[1]. In that case, you're likely just better off using a locked Vec.

  1. It's not immediately clear why this is necessarily the case, since each link has its own lock. The issue is that unless you clone each link's Arc, locking the next link borrows from and requires continuing to hold the lock on the prior node. Cloning the Arc of any given node allows you to release the lock on the link to it, but it isn't exactly helping your case for using a linked list. ↩ī¸Ž
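To make the footnote concrete, here is a sketch of the Arc-cloning traversal (types assumed from the question; `collect_elems` is a name I made up). Each iteration clones the next link's `Arc` while holding the current read lock, then drops the guard before moving the cursor, so only one lock is held at a time:

```rust
use std::sync::{Arc, RwLock};

type Link<T> = Arc<RwLock<Option<Arc<Node<T>>>>>;

struct Node<T> {
    elem: Arc<T>,
    next: Link<T>,
}

fn collect_elems<T: Clone>(head: &Link<T>) -> Vec<T> {
    let mut cursor: Link<T> = head.clone();
    let mut out = Vec::new();
    loop {
        // Clone the next Arc under the lock; the read guard is dropped at
        // the end of this block, before the cursor moves forward.
        let next = {
            let guard = cursor.read().unwrap();
            match guard.as_ref() {
                Some(node) => {
                    out.push((*node.elem).clone());
                    node.next.clone()
                }
                None => break,
            }
        };
        cursor = next;
    }
    out
}

fn main() {
    let head: Link<u8> = Arc::new(RwLock::new(None));
    *head.write().unwrap() = Some(Arc::new(Node {
        elem: Arc::new(7),
        next: Arc::new(RwLock::new(None)),
    }));
    println!("{:?}", collect_elems(&head)); // prints [7]
}
```

Note that because the current lock is released before the next one is taken, this is not hand-over-hand locking: a writer can modify a link between the two steps, which is acceptable for readers but not for a writer scanning for an insertion point.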

I probably didn't express myself well enough. In the loop where I iterate over updates, every time I move my cursor to the next link, I need to lock the next link and only after that unlock the current one.

I chose linked list as my collection for several reasons:

  • The collection should support many inserts/removes in the middle (a vector will work poorly here, as it will need to do a lot of copying)
  • The collection will be used by ~30 readers and ~5 writers at the same time
  • I don't need indexed addressing
  • Every time consumers need data from the collection, they will scan it from beginning to end

The design I implemented gives the following advantages:

  • Insertions/deletions will be fast
  • If no writes are in progress, all readers can read from the collection without blocking
  • When a writer needs to apply a change, it will lock only the area being updated; for example, if my writer thread is on the 5th node, only the 5th link will be locked, so readers can read the first 4 elements without blocking and will block only if they need the area being updated

Modern hardware is often more efficient on large contiguous chunks of memory, even when inserting, removing, splicing, or moving large amounts in the middle.

Same thing. Less pointer chasing means better runtime behavior.

In a linked list, a pointer could point anywhere in memory, possibly somewhere as far away as main RAM, whereas if you were using a simple array/Vec, the pointee you want would probably already be in one of the processor's caches. (Looking up something in main memory takes about 200 cycles longer than a cache hit, if I remember the number correctly; someone please correct me.)


This, FWIW, is why BTreeSet and BTreeMap are often the right choices instead of a linked list. Because a B-tree stores multiple elements per tree node, iterating over the contents of the set or map doesn't involve as much pointer chasing, and Rayon already implements parallel iteration over these data structures (saving you from implementing your own parallel iteration).
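As a small illustration of that point (the map contents here are made up), a `BTreeMap` supports cheap insertion and removal "in the middle" without shifting elements, while iteration stays in key order:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut scores: BTreeMap<u32, &str> = BTreeMap::new();
    scores.insert(30, "c");
    scores.insert(10, "a");
    scores.insert(20, "b");

    // Inserting "in the middle" is O(log n): no elements are shifted.
    scores.insert(15, "between a and b");
    scores.remove(&20);

    // Iteration is always in key order and visits several elements per
    // tree node, so there is far less pointer chasing than in a list.
    let keys: Vec<u32> = scores.keys().copied().collect();
    println!("{:?}", keys); // prints [10, 15, 30]
}
```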

