Ownership in an event-driven app

Hi all,

I have a Qt UI app (using qml-rust). It scans the radio spectrum and produces charts.
The scanning process can last for several minutes and I want to repaint the chart every couple of seconds.
I cannot figure out how to handle ownership of my measurements vector. Since the app is driven by the user pressing a button, the usual scope-driven ownership does not work: the user can press the "scan" button at any moment.

But then the problem becomes more interesting, because I want to access the measurements vector from two threads: the scanning thread appends a measurement every 16 ms, and the chart thread reads the measurements every 2 s.

So my thinking is: I need to pass a reference to the measurements to two threads, which means Rc or Arc. I want to prevent simultaneous appending and reading of the vector, so I need a Mutex. So what I am looking at is Arc<Mutex<Vec<_>>>? Am I heading in the right direction?
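To make that concrete, here is a minimal sketch of what I have in mind, assuming the measurements are plain f64 values and with bounded loops standing in for the real scan loop and UI timer:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    // Shared storage; f64 is a stand-in for whatever a measurement really is.
    let measures: Arc<Mutex<Vec<f64>>> = Arc::new(Mutex::new(Vec::new()));

    // Scanning thread: appends one measurement every 16 ms.
    let writer = Arc::clone(&measures);
    let scanner = thread::spawn(move || {
        for i in 0..625 {
            writer.lock().unwrap().push(i as f64);
            thread::sleep(Duration::from_millis(16));
        }
    });

    // Chart side (the main thread here): reads the vector every 2 s.
    for _ in 0..5 {
        thread::sleep(Duration::from_secs(2));
        let data = measures.lock().unwrap();
        println!("have {} measurements", data.len());
        // ...repaint the chart from `data` here...
    }

    scanner.join().unwrap();
}
```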

1 Like

Arc<Mutex<Vec<_>>> is one possibility, but since it involves mutating shared state, it may block either thread (safely, but it still blocks).

Perhaps a better choice would be to send the measurements over a std::sync::mpsc::channel, using Sender::send from the measurement thread. In the UI thread, on a timer, just call Receiver::try_recv() in a loop, pushing the received measurements into an existing Vec owned by the UI component/logic. try_recv does not block. This might make it easy to update the UI at a faster rate without either thread blocking on synchronization.

By handling the results of Sender::send and Receiver::try_recv you will also know, on the sender side, whether the receiver is no longer interested (the receiver has been dropped), and on the receiver side, whether all senders have been dropped and no new messages can arrive.
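Here is a minimal sketch of that layout, assuming plain f64 measurements and a bounded sleep loop standing in for the real UI timer:

```rust
use std::sync::mpsc::{channel, TryRecvError};
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel::<f64>();

    // Measurement thread: sends one value every 16 ms.
    // send() fails only if the receiver has been dropped, which doubles
    // as a convenient shutdown signal.
    let scanner = thread::spawn(move || {
        let mut i = 0.0;
        loop {
            if tx.send(i).is_err() {
                break; // UI side is gone, stop scanning
            }
            i += 1.0;
            thread::sleep(Duration::from_millis(16));
        }
    });

    // UI side: this stands in for a timer callback that fires every 2 s.
    // try_recv() never blocks, so the UI thread just drains whatever has
    // arrived and appends it to its own Vec.
    let mut measures: Vec<f64> = Vec::new();
    for _ in 0..5 {
        thread::sleep(Duration::from_secs(2));
        loop {
            match rx.try_recv() {
                Ok(m) => measures.push(m),
                Err(TryRecvError::Empty) => break,         // nothing more right now
                Err(TryRecvError::Disconnected) => return, // all senders dropped
            }
        }
        println!("chart now has {} points", measures.len());
        // ...repaint the chart from `measures` here...
    }

    drop(rx); // makes the next send() fail, which ends the scan loop
    scanner.join().unwrap();
}
```

The UI side owns its Vec outright, so repainting the chart needs no locking at all.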

2 Likes

While I totally agree with the general design of moving the measurement data into the UI thread, rather than keeping it in shared mutable state, I'm not sure if a queue is the most effective data structure here due to the asymmetry between the update and readout frequency.

As of now, the measurement thread writes 125 times for every UI readout (2 s / 16 ms = 125). If the graph update period were configurable by the user, or modified by some graph autoscale mechanism, this could become much worse. And because the standard queues only operate at the granularity of individual measurements, there is no way in this design to make the synchronization transaction efficient by fetching all pending updates at once instead of one by one.

I don't know whether there are queues on crates.io that allow fetching multiple data items at once. If not, here's how I would go about it in a mutex-based design:

  • Writer appends one measurement to a shared vector
  • Reader swaps the shared vector with an empty vector, then appends its contents to its own private vector outside the mutex

Because vector swaps are cheap, the reader runs much less risk of interfering with the writer than in the queue-based design. On its side, the writer could stall while holding the lock due to a vector reallocation. If that turns out to be a problem, you could preallocate enough storage in the vectors to cover the writer's needs between readouts, with some margin for timing jitter; a capacity of 150 measurements would be a good choice here.
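Here is a minimal sketch of that design, again assuming plain f64 measurements, bounded loops in place of the real scan loop and UI timer, and a preallocated capacity of 150:

```rust
use std::mem;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    // Preallocate with headroom over the ~125 measurements produced between
    // two readouts, so the writer never reallocates while holding the lock.
    let shared: Arc<Mutex<Vec<f64>>> = Arc::new(Mutex::new(Vec::with_capacity(150)));

    // Writer: appends one measurement per 16 ms tick, holding the lock only briefly.
    let writer = Arc::clone(&shared);
    let scanner = thread::spawn(move || {
        for i in 0..625 {
            writer.lock().unwrap().push(i as f64);
            thread::sleep(Duration::from_millis(16));
        }
    });

    // Reader: every 2 s, swap the shared Vec with a fresh preallocated one,
    // then process the fetched batch outside the mutex.
    let mut chart_data: Vec<f64> = Vec::new();
    for _ in 0..5 {
        thread::sleep(Duration::from_secs(2));
        let mut batch = Vec::with_capacity(150);
        {
            let mut guard = shared.lock().unwrap();
            mem::swap(&mut *guard, &mut batch);
        } // lock released here
        chart_data.extend_from_slice(&batch);
        println!("chart now has {} points", chart_data.len());
        // ...repaint the chart from `chart_data` here...
    }

    scanner.join().unwrap();
}
```

The lock is held only for a push on the writer side and a pointer-sized swap on the reader side; the batch is then appended to the chart's private vector without any synchronization.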

2 Likes

I did not think the solution you pointed out would be feasible, so I went and ran some benchmarks on rustc 1.17.0-nightly (b1e31766d 2017-03-03). Based on some quite unscientific benchmarks (Chrome running in the background, little to no repetition), it turns out that Arc<Mutex<Vec<_>>> is a solid bet while varying:

  • size of measurement (tested with [f64; 1] and [f64; 32])
  • batch size (0.1 s or 5 s of measurements, fitting into a vector of capacity 10 or 320)

With bigger batch sizes the difference can be an order of magnitude or more in favor of swapping vecs. With smaller batch sizes the gap becomes insignificant. So yeah, better to go with Arc<Mutex<Vec<_>>>.
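This is not the code I actually ran, but a rough single-threaded sketch of that kind of comparison; it measures only the per-item synchronization cost of each approach, not cross-thread contention:

```rust
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use std::time::Instant;

// One measurement; [f64; 32] was the larger size tested.
type Measure = [f64; 1];

fn main() {
    const BATCH: usize = 320; // roughly 5 s of measurements at 16 ms intervals
    const ROUNDS: usize = 1_000;

    // Channel variant: send each measurement, then drain the channel one item at a time.
    let (tx, rx) = channel::<Measure>();
    let start = Instant::now();
    for _ in 0..ROUNDS {
        for _ in 0..BATCH {
            tx.send([0.0; 1]).unwrap();
        }
        let mut sink: Vec<Measure> = Vec::with_capacity(BATCH);
        while let Ok(m) = rx.try_recv() {
            sink.push(m);
        }
    }
    println!("channel:    {:?}", start.elapsed());

    // Mutex variant: push each measurement under the lock, then swap the whole Vec out once.
    let shared: Arc<Mutex<Vec<Measure>>> = Arc::new(Mutex::new(Vec::with_capacity(BATCH)));
    let start = Instant::now();
    for _ in 0..ROUNDS {
        for _ in 0..BATCH {
            shared.lock().unwrap().push([0.0; 1]);
        }
        let mut batch: Vec<Measure> = Vec::with_capacity(BATCH);
        std::mem::swap(&mut *shared.lock().unwrap(), &mut batch);
    }
    println!("mutex+swap: {:?}", start.elapsed());
}
```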

I need to start remembering that mostly uncontended mutexes are not that expensive...

3 Likes