So, this was just a curious experiment I decided to conduct for fun.
I'm trying to figure out the absolute maximum number of microseconds and nanoseconds I can sleep for if I use the timestamp counter (TSC). I'm using the TSC code from the EDK II UefiCpuPkg timer implementation (ported to Rust with a few tweaks). Here's the resulting code I've come up with:
use std::arch::x86_64::__cpuid;

// Ported from the EDK II UefiCpuPkg TSC timer code.
#[inline]
fn calculate_tsc_frequency() -> u64 {
    // CPUID leaf 0x15: EBX/EAX is the TSC-to-core-crystal clock ratio,
    // ECX the nominal core crystal frequency in Hz (0 if not enumerated).
    let res = unsafe { __cpuid(0x15) };
    let (eax, ebx, ecx) = (res.eax as u64, res.ebx as u64, res.ecx as u64);
    if eax == 0 || ebx == 0 {
        return 0;
    }
    let core_freq = if ecx == 0 {
        // Fall back to the bus-reference frequency from CPUID leaf 0x16
        let res = unsafe { __cpuid(0x16) };
        ((res.ecx as u64) & 0xFFFF) * 1000000
    } else {
        ecx
    };
    (core_freq * ebx) + (eax >> 1) / eax
}
fn main() {
    let f = calculate_tsc_frequency();
    println!("TSC frequency: {} Hz", f);
println!("Finding overflow");
print!("Microseconds... ");
let mut incs = 0u64;
for i in 0..u64::MAX {
if i.saturating_mul(f) / 1000000 == u64::MAX / 1000000 {
break;
}
incs += 1;
}
println!("{}", incs);
incs = 0;
print!("Nanoseconds... ");
for i in 1..u64::MAX {
if i.saturating_mul(f) / 1000000000 == u64::MAX / 1000000000 {
break;
}
incs += 1;
}
println!("{}", incs);
}
The problem is that this gives nonsensical results: it falls back to the bus-reference clock frequency on my processor and reports 24.2 GHz (if my bit shifts are correct), and it claims that for both microseconds and nanoseconds the absolute maximum is around 762262152, which would be only about 12.7 minutes' worth of microseconds and well under a second of nanoseconds. That doesn't make any sense, because Linux is definitely able to track time far longer than that in nanoseconds and microseconds, and it probably uses the TSC. Am I just using the wrong formula? Are my bit manipulations wrong? (According to Intel, bits 0:15 of register ECX of CPUID leaf 0x16 contain the bus-reference clock frequency in MHz; bits 16:31 are reserved.)
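In case the raw values help with diagnosing this, here's a minimal sketch (it relies only on the same __cpuid intrinsic the code above already uses) that dumps the registers for leaves 0x15 and 0x16, so the field extraction can be double-checked against the Intel manual by hand:

use std::arch::x86_64::__cpuid;

fn main() {
    // Print the raw registers for the two CPUID leaves used above.
    for leaf in [0x15u32, 0x16u32] {
        let r = unsafe { __cpuid(leaf) };
        println!(
            "leaf {:#x}: eax={:#010x} ebx={:#010x} ecx={:#010x} edx={:#010x}",
            leaf, r.eax, r.ebx, r.ecx, r.edx
        );
    }
}

That at least makes it easy to see whether ECX of leaf 0x15 is really zero on this machine and what leaf 0x16 actually reports.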