Question about time

So, this was just a curious experiment I decided to conduct for fun.

I'm trying to figure out the absolute maximum number of microseconds and nanoseconds I can sleep for if I use the timestamp counter (TSC). I'm using the TSC code from the EDK II UefiCpuPkg timer implementation, ported to Rust with a few tweaks. Here's the resulting code I've come up with:

use std::arch::x86_64::__cpuid;

fn calculate_tsc_frequency() -> u64 {
    // CPUID leaf 0x15: EAX = denominator and EBX = numerator of the
    // TSC/core-crystal ratio, ECX = crystal frequency in Hz (may be 0).
    let res = unsafe { __cpuid(0x15) };
    let (eax, ebx, ecx) = (res.eax as u64, res.ebx as u64, res.ecx as u64);
    if eax == 0 || ebx == 0 {
        return 0;
    }
    let core_freq = if ecx == 0 {
        // Fall back to bus-reference frequency
        let res = unsafe { __cpuid(0x16) };
        ((res.ecx as u64) & 0xFFFF) * 1000000
    } else {
        ecx
    };
    (core_freq * ebx) + (eax >> 1) / eax
}

fn main() {
    let f = calculate_tsc_frequency();
    println!("TSC frequency: {} Hz", f);
    println!("Finding overflow");
    print!("Microseconds... ");
    let mut incs = 0u64;
    for i in 0..u64::MAX {
        if i.saturating_mul(f) / 1000000 == u64::MAX / 1000000 {
            break;
        }
        incs += 1;
    }
    println!("{}", incs);
    incs = 0;
    print!("Nanoseconds... ");
    for i in 1..u64::MAX {
        if i.saturating_mul(f) / 1000000000 == u64::MAX / 1000000000 {
            break;
        }
        incs += 1;
    }
    println!("{}", incs);
}

The problem is that this gives nonsensical results. It falls back to the bus-reference clock frequency on my processor (24.2 GHz, if my bit shifts are correct) and claims that the absolute maximum for both microseconds and nanoseconds is around 762262152. That doesn't make any sense, because Linux is definitely able to track far longer spans than that in both units, and it probably uses the TSC. Am I just using the wrong formula? Are my bit manipulations wrong? (According to Intel, bits 0:15 of register ECX of CPUID leaf 0x16 contain the bus-reference clock frequency in MHz; bits 16:31 are reserved.)
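For reference, the loop's answer can be cross-checked in closed form: `i.saturating_mul(f)` starts clamping once `i` exceeds `u64::MAX / f`, regardless of whether the result is then divided by 1000000 or 1000000000. This is a sketch, not the thread's actual output; the 12.1 GHz figure is just a stand-in frequency.

```rust
fn main() {
    // Stand-in TSC frequency; substitute whatever
    // calculate_tsc_frequency() returns on the machine in question.
    let f: u64 = 12_100_000_000;

    // i * f exceeds u64::MAX (so saturating_mul starts clamping)
    // once i passes this value:
    let max_i = u64::MAX / f;
    println!("overflow after i = {}", max_i); // 1524524303
}
```

This also explains why the microsecond and nanosecond counts land within one step of each other: both loops stop near the same saturation point `u64::MAX / f`, not at a unit-dependent limit.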

I think you meant

-    (core_freq * ebx) + (eax >> 1) / eax
+    ((core_freq * ebx) + (eax >> 1)) / eax
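The reason it matters: `/` binds tighter than `+` in Rust (as in C), so the original line parses as `(core_freq * ebx) + ((eax >> 1) / eax)`. Since `eax >> 1` is always less than `eax` for nonzero `eax`, that integer division is 0 and both the rounding term and the division by the denominator silently disappear. A minimal demonstration with made-up values:

```rust
fn main() {
    // Made-up values: 24 MHz clock, ratio numerator 2, denominator 2.
    let (core_freq, ebx, eax): (u64, u64, u64) = (24_000_000, 2, 2);

    // Original: the division applies only to the rounding term,
    // and (eax >> 1) / eax is 0 whenever eax > 1.
    let wrong = (core_freq * ebx) + (eax >> 1) / eax;

    // Fixed: round-to-nearest division of the whole product.
    let right = ((core_freq * ebx) + (eax >> 1)) / eax;

    println!("{} vs {}", wrong, right); // 48000000 vs 24000000
}
```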

Yeah, I did. That didn't quite solve the problem, though. It's now up to 1524524304 microseconds and 1524524303 nanoseconds, which still seems nonsensical. The math adds up, though; I feel like I'm doing something wrong somewhere.
Edit: Or maybe that's right. That's only about 25.41 minutes of microseconds / 1.525 seconds of nanoseconds. But I can't imagine that an OS, when it wants to wait 15 billion nanoseconds, repeatedly executes a sleep like that. That doesn't make any sense.
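It doesn't need to. The overflow only bites because the multiply is done in u64; a kernel can do the nanoseconds-to-ticks conversion in wider arithmetic so the intermediate product never overflows for any practical duration (Linux's clocksource code achieves a similar effect with precomputed mult/shift scaling rather than a literal 128-bit divide). A sketch, reusing the same stand-in 12.1 GHz figure:

```rust
/// Convert a nanosecond delay into TSC ticks, widening to u128 for the
/// intermediate multiply so ns * tsc_hz cannot overflow.
fn ns_to_ticks(ns: u64, tsc_hz: u64) -> u64 {
    ((ns as u128 * tsc_hz as u128) / 1_000_000_000) as u64
}

fn main() {
    let f: u64 = 12_100_000_000; // stand-in TSC frequency

    // The 15-billion-nanosecond wait from the post is no problem:
    println!("{} ticks", ns_to_ticks(15_000_000_000, f)); // 181500000000 ticks
}
```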

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.