I have a need for classic Dijkstra P and V functions, to allow not more than N threads to use a resource. This needs to be fair, to prevent infinite overtaking. (Standard mutexes are not fair, but parking_lot mutexes are.) Someone must have written and debugged this. I should not have to re-invent the wheel here.
Classic P and V implementation, non-Rust.
(Use case: the Linux scheduler is not quick enough that I can let my background threads run compute-bound without seeing frame rate glitches. I tried. So I need to throttle the compute-bound activities of the background threads down to CPU count - 2.)
I'm not familiar with P and V, but if CPU is the only resource you're concerned about, would it work if you made your compute-heavy threads go to sleep for a short bit after completing some small unit of work?
Without preemptive scheduling in user space, I'm not sure how you would get anything more effective than the Linux scheduler. No matter what, a thread has to put itself to sleep.
Before you try anything too technical, I'd try playing around with the scheduling heuristics that you can assign to threads (if you haven't already)
These are usually called "nice" IIRC
I've tried lowering the thread priority for the lower-priority threads, but Linux doesn't take control away from a low-priority thread instantly when a high-priority thread wakes up. So I don't want to load things up to 100% CPU utilization, or frame rate will suffer.
Now, in an operating system with a CPU dispatcher that's serious about priorities, such as QNX, there wouldn't be a problem. But that's real-time technology.
I have a need for classic Dijkstra P and V functions
I'm not familiar with the naming. They're counting semaphores? The standard library doesn't have one but this can probably be handrolled with a counter and a Condvar (or parking_lot's). If not, there are crates that offer semaphores but I don't know if they're fair.
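A minimal sketch of the hand-rolled approach mentioned above, using only std's `Mutex` and `Condvar` (names here are illustrative, not from any crate). Note that `Condvar` makes no fairness guarantee, so this version does not prevent overtaking:

```rust
use std::sync::{Condvar, Mutex};

/// A counting semaphore built from std primitives.
struct Semaphore {
    count: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { count: Mutex::new(permits), cv: Condvar::new() }
    }

    /// P: block until a permit is available, then take it.
    fn acquire(&self) {
        let mut count = self.count.lock().unwrap();
        while *count == 0 {
            count = self.cv.wait(count).unwrap();
        }
        *count -= 1;
    }

    /// V: return a permit and wake one waiter.
    fn release(&self) {
        let mut count = self.count.lock().unwrap();
        *count += 1;
        self.cv.notify_one();
    }
}
```

Wrapped in an `Arc`, this caps concurrency at `permits` threads, but which waiter wakes first is up to the OS.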
The Linux scheduler is not quick enough that I can let my background threads run compute-bound without seeing frame rate glitches. I tried. So I need to throttle the compute-bound activities of the background threads down to CPU count - 2.
Things you could try:
- submit compute work to a thread pool sized to core count - 2
- use CPU affinities to isolate graphics and compute threads to disjoint sets of cores
- a Linux kernel compiled with a higher tick rate and/or with the RT patchset; those are recommended for soft-realtime uses
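The pool-sizing part of the first suggestion can be sketched with std alone; `compute_pool_size` is a hypothetical helper, and a real pool would come from a crate such as rayon:

```rust
use std::thread;

/// Size a worker pool to core count - 2, leaving headroom for the
/// render thread and the OS, but never going below one worker.
fn compute_pool_size() -> usize {
    let cores = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    cores.saturating_sub(2).max(1)
}
```

Compute-bound work submitted only through a pool of that size can never occupy more than `cores - 2` CPUs at once, whatever the scheduler does.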
I've tried lowering the thread priority for the lower-priority threads, but Linux doesn't take control away from a low-priority thread instantly when a high priority thread wakes up.
Maybe using the deadline scheduler policy instead of SCHED_OTHER to have the graphics thread run on a fixed schedule ahead of frame submit time would help? It seems like a good tool for that task.
The Tokio semaphore provides this for the async use-case. Unfortunately, I'm not aware of any synchronous implementation of a fair semaphore.
You inspired me to write this, modeled on Tokio's async implementation: playground
Without changes to how the scheduler operates (or using real-time extensions like @the8472 suggested), this isn't a problem that you can solve in user space. So unless I'm misunderstanding, the mutex route isn't the right way to go.
If your low priority threads aren't going to sleep often enough, then have them put themselves to sleep at a small interval
Perhaps one option could be to rely on a PI-aware futex in some way?
Yes, if I have to build something, I'll build it on top of parking_lot fair mutexes.
(I wish parking_lot had poisoning mutexes, like the regular ones in std::sync. Consistent semantics would help. Unlocking the mutex on a panic is not a good choice. One thread panics, then others advance into unknown territory before all the threads get shut down.)
Here's an implementation. Rust Playground
The general idea is that you can only have "capacity" threads within the P to V zone.
This needs a scoped structure around it, of course.
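For reference, one way to get the fairness the thread is asking about is a FIFO ticket scheme: waiters take a ticket on entry and are admitted strictly in arrival order, so no thread can overtake another. This is a hypothetical sketch, not the playground code (std's `Condvar` alone promises no wake order, so the ticket counter is what enforces FIFO):

```rust
use std::sync::{Condvar, Mutex};

/// A fair (FIFO) P/V semaphore: at most `capacity` threads may be
/// between p() and v() at once, admitted in arrival order.
struct FairSemaphore {
    state: Mutex<State>,
    cv: Condvar,
}

struct State {
    permits: usize,
    next_ticket: u64,
    now_serving: u64,
}

impl FairSemaphore {
    fn new(capacity: usize) -> Self {
        FairSemaphore {
            state: Mutex::new(State {
                permits: capacity,
                next_ticket: 0,
                now_serving: 0,
            }),
            cv: Condvar::new(),
        }
    }

    /// P: take a ticket, then wait until it is our turn AND a permit is free.
    fn p(&self) {
        let mut s = self.state.lock().unwrap();
        let ticket = s.next_ticket;
        s.next_ticket += 1;
        while s.now_serving != ticket || s.permits == 0 {
            s = self.cv.wait(s).unwrap();
        }
        s.permits -= 1;
        s.now_serving += 1;
        self.cv.notify_all();
    }

    /// V: return a permit and wake waiters so the next ticket can proceed.
    fn v(&self) {
        let mut s = self.state.lock().unwrap();
        s.permits += 1;
        self.cv.notify_all();
    }
}
```

The `notify_all` is a thundering herd with many waiters; a production version would park each waiter on its own wakeup slot, which is roughly what parking_lot and Tokio's fair semaphore do.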
A more Rust-oriented form, with a guard type that releases the resource when dropped.
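The guard idea can be sketched like this (illustrative names, not the playground code): `acquire` hands back an RAII token whose `Drop` performs the V, so an early return or a panic still releases the permit:

```rust
use std::sync::{Condvar, Mutex};

/// A counting semaphore whose permits are held by RAII guards.
struct Semaphore {
    count: Mutex<usize>,
    cv: Condvar,
}

/// Holding this guard means holding one permit; dropping it is the V.
struct SemaphoreGuard<'a> {
    sem: &'a Semaphore,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { count: Mutex::new(permits), cv: Condvar::new() }
    }

    /// P: block until a permit is free, then return a guard holding it.
    fn acquire(&self) -> SemaphoreGuard<'_> {
        let mut count = self.count.lock().unwrap();
        while *count == 0 {
            count = self.cv.wait(count).unwrap();
        }
        *count -= 1;
        SemaphoreGuard { sem: self }
    }
}

impl Drop for SemaphoreGuard<'_> {
    fn drop(&mut self) {
        let mut count = self.sem.count.lock().unwrap();
        *count += 1;
        self.sem.cv.notify_one();
    }
}
```

The lifetime on the guard gives the scoped structure mentioned above for free: the guard cannot outlive the semaphore, and for cross-thread use you would wrap the semaphore in an `Arc` and hold guards inside each worker's scope.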
This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.