MIDI Clock / Timing

Any thoughts on implementing a MIDI clock? (The standard is 24 ticks per quarter note / beat)
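For concreteness, at 24 ticks per quarter note one tick lasts 60 / (BPM × 24) seconds. A quick sketch of that math (the helper name is mine, not from any MIDI crate):

    use std::time::Duration;

    /// Length of one MIDI clock tick at the given tempo.
    /// 24 ticks per quarter note; one quarter note lasts 60/bpm seconds.
    fn tick_duration(bpm: f64) -> Duration {
        Duration::from_secs_f64(60.0 / (bpm * 24.0))
    }

    fn main() {
        // At 120 BPM, each tick is ~20.83 ms.
        println!("{:?}", tick_duration(120.0));
    }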

I haven't fooled around much with 'accurate' events or event loops before. Here is my initial thought, where the "bang" would be a 'tick' of the MIDI clock (here, ticking fairly slowly):

    fn main() {
        let mut current_time = std::time::SystemTime::now();
        loop {
            // Busy-wait: spin until at least 350 ms have passed since the last tick.
            if current_time.elapsed().unwrap().as_nanos() >= 350_000_000 {
                println!("bang");
                current_time = std::time::SystemTime::now();
            }
        }
    }

This may not be a Rust-specific question, but rather one about accurate timing of events when programming real-time applications. Perhaps I'm getting in over my head. :scream:

That approach is not good, because the busy loop consumes CPU time even while it's just waiting; you never actually put the CPU to sleep.

(Edit: My previous approach was flawed. It didn't work as I expected.) Here is a correct one:

    use std::time::{Duration, SystemTime};
    use std::thread::sleep;

    fn main() {
        let mut time = SystemTime::now();
        for _ in 0..10 {
            println!("bang");
            // Advance the target time by a fixed step, then sleep until it.
            // This avoids drift, because errors don't accumulate across ticks.
            time += Duration::from_millis(350);
            if let Ok(sleeptime) = time.duration_since(SystemTime::now()) {
                sleep(sleeptime);
            }
        }
    }



Note that there are several possible strategies for handling missed or delayed ticks. See tokio::time::MissedTickBehavior for how Tokio handles this.
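If it helps, here's a minimal sketch of that with Tokio's interval API (assuming the tokio crate with its full feature set; the 350 ms period is just the example value from above):

    use tokio::time::{interval, Duration, MissedTickBehavior};

    #[tokio::main]
    async fn main() {
        let mut ticker = interval(Duration::from_millis(350));
        // Skip ticks that were missed instead of firing them in a quick burst.
        ticker.set_missed_tick_behavior(MissedTickBehavior::Skip);
        for _ in 0..10 {
            ticker.tick().await;
            println!("bang");
        }
    }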

If Tokio's clock isn't precise enough, then you can use a timerfd.
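Roughly like this with the timerfd crate (Linux-only; I'm going from memory of its API, so treat the exact calls as an assumption):

    use std::time::Duration;
    use timerfd::{SetTimeFlags, TimerFd, TimerState};

    fn main() {
        let mut tfd = TimerFd::new().unwrap();
        // Arm a periodic kernel timer: first expiry after 350 ms, then every 350 ms.
        tfd.set_state(
            TimerState::Periodic {
                current: Duration::from_millis(350),
                interval: Duration::from_millis(350),
            },
            SetTimeFlags::Default,
        );
        for _ in 0..10 {
            // Blocks until the timer expires; returns the number of expirations.
            let _expirations = tfd.read();
            println!("bang");
        }
    }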

That only works on Linux, though?

Why is this giving the output below? I understand that the if block isn't being executed on some iterations, but... why?

    use std::time::{Duration, SystemTime};
    use std::thread::sleep;

    pub fn time() {
        let mut time = SystemTime::now();
        for x in 0..10 {
            println!("bang {}", x);
            time += Duration::from_millis(500);
            if let Ok(sleeptime) = time.duration_since(SystemTime::now()) {
                println!("sleeptime: {}", sleeptime.as_millis());
                sleep(sleeptime);
            }
        }
    }
Output:

    bang 0
    sleeptime: 499
    bang 1
    sleeptime: 138
    bang 2
    bang 3
    sleeptime: 448
    bang 4
    sleeptime: 258
    bang 5
    sleeptime: 58
    bang 6
    bang 7
    sleeptime: 358
    bang 8
    sleeptime: 158
    bang 9

I think the process runs so slowly that the sleep time would sometimes have to be negative (in which case duration_since returns an Err, so the if clause doesn't match and that iteration doesn't sleep at all).

Try making the interval greater than 500 milliseconds, or run the program on a faster system. (It's strange that your system is that slow, though.)
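To see the diagnosis directly, you could handle the Err arm explicitly instead of silently skipping it (just an illustration):

    use std::time::{Duration, SystemTime};
    use std::thread::sleep;

    pub fn time() {
        let mut time = SystemTime::now();
        for x in 0..10 {
            println!("bang {}", x);
            time += Duration::from_millis(500);
            match time.duration_since(SystemTime::now()) {
                Ok(sleeptime) => sleep(sleeptime),
                // The target time is already in the past: we are running late.
                Err(e) => println!("late by {} ms, skipping sleep", e.duration().as_millis()),
            }
        }
    }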

For something like audio, you don't really want to use std::thread::sleep() for precise timing.

A call to std::thread::sleep() will put the current thread to sleep for no less than the requested duration, but there's no guarantee it'll sleep for exactly that duration. It's quite common to multitask by giving each process/thread an amount of time that they can run for before being pre-empted (e.g. imagine letting process 1 run for 50 ms, then switching to process 2 for 50 ms, and so on).

Each call to sleep() will give up your "time slice" and you'll need to wait for the next window before starting again. If everything is going well and your computer isn't working hard, you'll probably sleep for the requested time plus/minus a couple milliseconds, but once your computer starts getting loaded up, you might emit a beat late because your code needs to wait for a chance to run again.
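You can observe that overshoot directly with a quick measurement (plain std, no assumptions beyond what sleep() documents):

    use std::time::{Duration, Instant};

    fn main() {
        let requested = Duration::from_millis(10);
        for _ in 0..5 {
            let start = Instant::now();
            std::thread::sleep(requested);
            let actual = start.elapsed();
            // sleep() guarantees *at least* the requested duration,
            // so the overshoot is always non-negative.
            println!("requested {:?}, slept {:?}, overshoot {:?}",
                     requested, actual, actual - requested);
        }
    }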

I believe audio applications use dedicated threads and some sort of timer with much tighter latency guarantees, typically triggered directly from a hardware interrupt. Sorry I don't have any helpful links for you - I know how to do this sort of thing on a microcontroller, but I've never needed to do it on an OS like Linux.


Another alternative (or rather, a variation) I could think of is to use async Rust, perhaps with a custom or specialized executor suited to the sort of application being written.

On Linux (and other desktop OSes), you have buffering, and the buffers have some form of timing.

For PCM, where the sample rate sets the timing, you simply know that each sample is 1/Fs time units after the previous one, and thus if your sample rate is 8 kHz, each sample is 125 µs after the previous one. So if you want 1 second of audio, you output 8,000 samples, and you're done.
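The arithmetic from that paragraph, as a trivial sketch:

    fn main() {
        let fs = 8_000u32;                     // sample rate in Hz
        let sample_period_us = 1_000_000 / fs; // 125 µs between samples
        let one_second_of_audio = fs;          // 8,000 samples = 1 s
        println!("{} µs/sample, {} samples/s", sample_period_us, one_second_of_audio);
    }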

For MIDI, you're dealing with MIDI events, and the OS will define a clock for you, and you timestamp each event against that clock. So, for example, if you're sending to a MIDI synthesizer, you set the MIDI clock time at which each event should go out, and the OS takes care of sending down to the MIDI device at the right time for you. You are responsible for sorting events by time, and not sending events too close together.

In both cases, the buffer the OS lets you fill is of limited size - once you've filled a buffer, you have to send it to the OS to deliver to hardware at the right time, and the OS limits the number of buffers you can have queued to be delivered to hardware, plus the size of each buffer.

You then get into fun with dedicated threads if you want low latency (small buffers, not many in flight). If you're just playing back a MIDI file, you can use huge buffers and have many ready to go (the entire file, for example), and rely on the OS letting you have (say) 32 buffers of 60 seconds each in flight, for a 32 minute playback time queued in the OS. If you're reading MIDI events and generating output audio, though, you may want no more than 5 ms to elapse between event arriving at hardware, and output audio leaving the speaker connector, in which case you need to deal with real-time threads, and have (say) 4 buffers of 1ms each in flight, getting the OS to tell you every time a buffer is emptied by hardware.


Any thoughts on how one would implement this timing system? Or sources to look into?

It depends on the MIDI API you're using - that API will have a timing reference of its own, and you need to comply with it. For example, with the ALSA sequencer API, you'd use schedule_tick to schedule events against ticks directly.

Generally, though, when working on a desktop/mobile OS, you wouldn't try to wait until the "bang" time - you'd keep track of MIDI events against MIDI ticks in a suitable data structure, and then when you're sending events, you tell the OS which tick to schedule the event against.
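As a rough sketch of such a data structure (the MidiEvent type here is hypothetical, purely for illustration):

    use std::collections::BTreeMap;

    // Hypothetical event type, just for illustration.
    #[derive(Debug)]
    struct MidiEvent {
        status: u8,
        data: [u8; 2],
    }

    fn main() {
        // Events kept sorted by the MIDI tick they should fire on.
        let mut schedule: BTreeMap<u64, Vec<MidiEvent>> = BTreeMap::new();
        // Note-on at tick 0; note-off 24 ticks (one quarter note) later.
        schedule.entry(0).or_default().push(MidiEvent { status: 0x90, data: [60, 100] });
        schedule.entry(24).or_default().push(MidiEvent { status: 0x80, data: [60, 0] });

        // When flushing, walk the map in tick order and hand each event to the
        // sequencer API (e.g. ALSA's schedule_tick) with its tick as the timestamp.
        for (tick, events) in &schedule {
            for ev in events {
                println!("schedule at tick {}: {:?}", tick, ev);
            }
        }
    }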

