High accuracy timer

How can I measure time in Rust with high accuracy (above a million ticks per second), like std::chrono in C++? I'd like the result both in ticks and in nanoseconds/microseconds.


You can use std::time::Instant for a monotonically increasing (steady) clock, or std::time::SystemTime for a time suitable for communication purposes.

Either one produces a std::time::Duration when two readings are compared, and Duration has a resolution of one nanosecond (though I don't know whether every operating system actually provides nanosecond-precise system time).
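For example, a minimal sketch of timing a piece of work with Instant (the summation loop is only a stand-in for whatever you actually want to measure):

use std::time::Instant;

fn main() {
    let start = Instant::now();

    // Placeholder workload; replace with the code you want to time.
    let sum: u64 = (0..1_000_000u64).sum();

    let elapsed = start.elapsed(); // a std::time::Duration
    println!("sum = {}", sum);
    println!("elapsed: {} ns ({} µs)", elapsed.as_nanos(), elapsed.as_micros());
}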


If you need to measure the timings of expressions, have a look at the tempus_fugit crate.
It has a really simple API, and the accuracy is down to the nanosecond (or coarser if the system you're running on only supports that).
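A rough sketch of its measure! macro, from memory (the exact invocation may differ slightly, so check the crate's documentation):

use tempus_fugit::measure;

fn main() {
    // measure! evaluates the block and returns the block's value together
    // with a Measurement describing how long the evaluation took.
    let (sum, measurement) = measure! {{
        (0..1_000_000u64).sum::<u64>()
    }};
    println!("sum = {}, took {}", sum, measurement);
}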

Getting it back into nanoseconds is the hard part, but I believe the smallest resolution you can get is via architecture intrinsics that read the time stamp counter.

Docs: _rdtsc in core::arch::x86_64 - Rust
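A minimal sketch of reading raw ticks with it (x86_64 only; note that _rdtsc is not a serializing instruction, so careful micro-benchmarks usually add fencing around it):

use std::hint::black_box;

fn main() {
    unsafe {
        let start = core::arch::x86_64::_rdtsc();
        // black_box keeps the workload from being optimized away.
        let sum: u64 = black_box((0..1_000_000u64).sum());
        let end = core::arch::x86_64::_rdtsc();
        println!("sum = {}, {} ticks", sum, end.wrapping_sub(start));
    }
}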

I used rdtsc in my old programs a long time ago, but when I bought a processor with more than one core, rdtsc caused problems: each core had its own counter. Are these problems resolved in modern CPUs and OSes?

In general, use of RDTSC is full of pitfalls, especially when one needs to support many different CPU generations. Unless you need very-low-overhead timing, it's generally better to stick with OS APIs that use the TSC internally, but do their best to work around the hardware-specific problems. On Linux, a good default choice would be clock_gettime() in CLOCK_MONOTONIC mode.
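For reference, a sketch of calling clock_gettime(CLOCK_MONOTONIC) directly through the libc crate (an external dependency; this is roughly what std::time::Instant does for you on Linux anyway):

fn monotonic_ns() -> u128 {
    // Read CLOCK_MONOTONIC and fold the result into nanoseconds.
    let mut ts: libc::timespec = unsafe { std::mem::zeroed() };
    let rc = unsafe { libc::clock_gettime(libc::CLOCK_MONOTONIC, &mut ts) };
    assert_eq!(rc, 0, "clock_gettime failed");
    ts.tv_sec as u128 * 1_000_000_000 + ts.tv_nsec as u128
}

fn main() {
    let start = monotonic_ns();
    let sum: u64 = (0..1_000_000u64).sum();
    let end = monotonic_ns();
    println!("sum = {}, elapsed: {} ns", sum, end - start);
}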

I wrote a test of the rdtsc method:

fn test_mono() {
    // Repeatedly read the TSC and report whenever it fails to increase
    // monotonically (e.g. after migrating to a core with a different counter).
    unsafe {
        let mut n: u64 = 0;
        loop {
            let new = core::arch::x86_64::_rdtsc();
            if new <= n {
                println!("problem {} {}", n, new);
            }
            n = new;
        }
    }
}

and on Linux Mint 19 there are no problems.

You’re likely on a CPU with an invariant TSC (I think >= Nehalem). It’s also possible that your task was never migrated by the kernel to another core, and that the core's power state didn't change while your code was running, which would make even a non-invariant TSC a non-issue.

As mentioned by others, there’s subtlety to TSC usage, which is fine but you need to make sure those subtleties are mitigated/accounted for in your environment/use cases. If you’re going to target only bare metal Intel chips with invariant TSC and have an OS that will sync TSC across sockets on boot (or you’re carefully pinning to specific cores on a single socket), then you’re probably ok to use it.
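If you do go the pinning route, a rough sketch using the third-party core_affinity crate (the choice of crate, and of the first reported core, are my own assumptions, not something from this thread):

fn main() {
    // Pin the current thread to one core so that all TSC readings
    // come from the same counter.
    let core_ids = core_affinity::get_core_ids().expect("could not enumerate cores");
    let pinned = core_affinity::set_for_current(core_ids[0]);
    assert!(pinned, "failed to pin the current thread");

    // ... run the TSC-based measurement here ...
}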


Are these problems resolved in modern CPUs and OSes?

I'd guess it's likely worse on modern chips due to optimisations.

It really depends on your use case; the other comments are spot on. rdtsc is subject to many factors and is not consistent time-wise, despite the name. If you only want ticks it's as low-level as you can get, but converting the result back into nanoseconds is hard to do.

Put a heavy computation in the middle of that loop on a desktop and use the machine normally, and you'll see the time stamp counter can fluctuate wildly. On a server under constant load you'll get more consistent readings.
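If you do need nanoseconds out of the TSC, one approximate approach is to calibrate it against a known clock first; a rough sketch, assuming an invariant TSC so the estimated frequency stays meaningful:

use std::hint::black_box;
use std::time::{Duration, Instant};

// Estimate the TSC frequency by comparing a tick delta against Instant
// over a fixed calibration window.
unsafe fn estimate_tsc_hz() -> f64 {
    let t0 = Instant::now();
    let c0 = core::arch::x86_64::_rdtsc();
    std::thread::sleep(Duration::from_millis(200));
    let c1 = core::arch::x86_64::_rdtsc();
    c1.wrapping_sub(c0) as f64 / t0.elapsed().as_secs_f64()
}

fn main() {
    unsafe {
        let hz = estimate_tsc_hz();
        println!("estimated TSC frequency: {:.0} Hz", hz);

        let start = core::arch::x86_64::_rdtsc();
        let sum: u64 = black_box((0..1_000_000u64).sum());
        let end = core::arch::x86_64::_rdtsc();

        let ticks = end.wrapping_sub(start);
        println!("sum = {}, {} ticks ≈ {:.0} ns", sum, ticks, ticks as f64 / hz * 1e9);
    }
}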

I found this thread in the Intel forum.

Depending on the platform, std::time::Instant uses:

  • most unices: clock_gettime(CLOCK_MONOTONIC, ...)
  • MacOS: mach_absolute_time
  • Windows: QueryPerformanceCounter

It's probably the most precise way to measure time with standard system facilities, without resorting to unreliable CPU-specific hacks. See the module's implementation for details.

