Why is std::time::Instant opaque?

What is it about Instant that argues for it to be opaque but not SystemTime? I'm in a case where I need to do a few things with Instants - compare and print probably being the two biggest (I need to keep them ordered, and printing is nice for debugging). But I need them to come from the monotonic clock (most of the time), and I need to still be able to use them for timeouts.

If Instant were clearer I could just convert to nanos since epoch everywhere and be done with it. I've written a lot of time code, and used quite a few libs that tried to solve it. The best ones I saw were the ones that kept the abstraction to a minimum. It's really easy to go overboard with abstraction and end up with too many date and time types - Rust is starting to do it now with Instant and SystemTime, where you often want the features of both. (If there is ever an Instant-to-SystemTime converter where both types are similar, you made too many types.)

I know there are arch differences, but those can be and already are papered over in a few places. Why not just make Instant visible, or keep it all in nanos (everything else converts to it pretty easily)?

I started yet another time crate because of these issues, and the only reason is that I need nanos, and the clock source matters for the different types of timer events I have to wake up for. And I think we are all doing it for very similar reasons (even if the timing systems are a little different, the basic items are pretty similar).

Can we fix time so everybody doesn't need to keep re-implementing their own for minor things? We only need one Instant type, and clock sources need to be more out in the open. They really aren't that different on most systems, and timespecs suck to deal with and manipulate.

What would it take to make these converge? I really don't understand why there is no nanoseconds_since_epoch on an Instant. It would make them so much easier to work with.

Instant is optimized for performance counters and other relative timings; it doesn't necessarily correspond with any "wall clock" time. E.g. on Unix it's documented as using

Clock that cannot be set and represents monotonic time since some unspecified starting point.

I believe it's often (but not guaranteed) to be relative to the start of your program.


Because, for example, on Windows Instant uses the QueryPerformanceCounter function, which has no relation to any consistent epoch.


It's also immune to NTP adjustments and the like.


Not sure what you mean about ordering, Instant implements Ord. Did you need something else?

On a phone, so I can't check the Instant Debug format to see if it's helpful, but semantically it's only meaningful to talk about the duration to some other time, eg. program start or the time of format, so you could always debug format instant.elapsed() if you prefer.

(OT: is there a way to turn on a threaded view? I keep getting confused)

If we don't want to adjust the epoch (appropriate), then why not just put "nanoseconds" on it, store it as nanos from some undefined epoch, and stop making it opaque? It would affect nothing, but people who needed the value and knew what the epoch was could use it. Doing a good timer system with Instant is a real pain since you can't calculate its bucket, so instead every system has to make its own Instant to be useful.

QPC is TSC-based too, and Linux and Windows basically report a similar value - the difference is whatever scaling and epoch calculations each has applied (and probably NTP slew, but we'll ignore that). So both instants can still be visible with useful information, and both can report in nanos; they just won't have a shared or well-defined epoch. That's entirely appropriate and useful.

Depends on what you count as an adjustment. The MONO clock is affected by NTP slew but not by jumps. The REALTIME clock (used for SystemTime) is affected by both. MONO RAW is just a TSC read without slew (I think - MONO RAW isn't talked about too much, and it's slow).

I didn't realize it implemented Ord, which is useful. It might get me to the buckets I need, but I still can't print out anything useful, it seems.

I understand, I just don't agree. There is definitely a better way, in that all instants could be nanosecond values while different systems (or even different runs) would not necessarily share a common epoch. If Linux kept the MONO clock_gettime call then it would, but it could also decide to use a raw RDTSC and that would be fine too.

The current way of scribbling over the struct so nobody can look inside doesn't remove the need to look inside. So every system that needs access to it basically redoes it.

I decided to just make instants keep nanos and be generic over clock sources (for C++, but coming to my Rust mini-version) so they can't be mixed computationally too easily but can still be printed side by side and understood.

Dealing with time sucks, but if Instant were simply nanos and weren't opaque, it would be much more usable for everybody. It isn't perfect, but it's certainly better than what we have now, which seems like the worst of all worlds.

That's an undocumented implementation detail of Windows on x86[_64]. You can extract fields of opaque types with some hacks, which happens to yield a meaningful value due to implementation details of the current version of the operating system and libraries, and you can rely on that if you control the exact environment the code runs on. But the Rust stdlib cannot control the environment it runs on, so it can't assume anything beyond documented behavior.


Define undocumented, because it is all over the Windows docs. It's documented better than the Linux clocks are.

And what is this weird idea that I'm not allowed to understand the value, so Rust has to hide it, even though it has no safety issues at all? (Remember this is a part of the code that literally panics from running on different systems, and that is accepted.)

I should be able to use /dev/random for time and Rust shouldn't care. So why not QPC?

Besides, just map it all back to nanos - or are you of the opinion that Rust shouldn't pretend to understand what QPC or clock_gettime returns?

Is it not possible in your use case to store a static Instant at the start of the program (possibly alongside the corresponding SystemTime)? Taking Durations from a common Instant would sidestep most of these issues.


It would be slow and slightly off from the two "now"s being captured at slightly different points in time (and those handful of nanos being off would probably cause some ordering issues and display weirdness, since all my calculated times (e.g., on the hour) would be a little off and print incorrectly).

If you care about "on the hour", why are you using Instant at all?


How could it cause ordering issues? In this scenario, the initial SystemTime is solely for display purposes, and to get the current SystemTime, one uses Instant::now(). As a proof of concept (Rust Playground):

```rust
use once_cell::sync::Lazy;
use std::time::{Duration, Instant, SystemTime};

static EPOCH: Lazy<(Instant, SystemTime)> = Lazy::new(|| (Instant::now(), SystemTime::now()));

fn now() -> Duration {
    let (epoch, _) = *EPOCH;
    epoch.elapsed()
}

// Don't compare this directly with SystemTime::now()!
fn now_for_debug() -> SystemTime {
    let (_, start) = *EPOCH;
    start + now()
}

fn main() {
    println!("{:?} {:?}", now().as_nanos(), now_for_debug());
    println!("{:?} {:?}", now().as_nanos(), now_for_debug());
}
```
Keep in mind that the initial system time is inherently inaccurate due to clock skew and other factors. So a few nanoseconds of difference likely has little effect.


This is confusing to me: do you mean that it's private? That's so Rust can change how it works internally. Basic API stability stuff.

I'm honestly a bit confused by what you want. Definitionally, QPC and the like don't have any useful absolute meaning, only when you compare two of them. The API follows that.

If you want "human meaningful clock face time" use SystemTime, if you want "actual time elapsed between two events" use instant.

Maybe it would be easier to suggest options if we knew more about what you're trying to do? It seems like maybe some sort of scheduling that covers both very short term and long term events?


Instant doesn't map to clock time. As an example of that, consider the following program:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant, SystemTime};

fn main() {
    let now = Instant::now();
    let system_time = SystemTime::now();
    sleep(Duration::from_secs(6)); // sleep length reconstructed from the output below
    println!(
        "instant elapsed: {:?}, system time elapsed: {:?}",
        now.elapsed(),
        system_time.elapsed(),
    );
}
```

While this program was sleeping I suspended the computer for around 30 seconds, and it output something like this:

instant elapsed: 6.313174279s, system time elapsed: Ok(35.183987475s)

This is only for a C++ version I have that has more going on. My timers are a mix of the MONO clock and the REALTIME clock. Duration-based timers are handled by the mono clock and time-based timers are handled by the REALTIME clock, to allow for date/time changes if needed. E.g., when somebody or something messes with the clock, I don't want my send timers that are set for a few seconds out to decide they failed, but I do want my timers set to fire at 4:30pm to pick up on that change. For historical reasons, the repeating timers are done by the duration system for better or worse (I've wanted to rewrite that, but I'm not sure I have a better solution instead of just moving around the hard parts).
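The two-stream split described above could be sketched in Rust roughly like this (names hypothetical, not from any particular library): duration-based deadlines key off Instant, wall-clock deadlines key off SystemTime, so the latter follow clock adjustments:

```rust
use std::time::{Duration, Instant, SystemTime};

// Duration-based timers ("fire in 500ms") use the monotonic clock;
// wall-clock timers ("fire at 4:30pm") use REALTIME so they track
// clock adjustments.
enum Deadline {
    Monotonic(Instant),
    Wall(SystemTime),
}

impl Deadline {
    // How long until this deadline fires, as seen right now.
    fn remaining(&self) -> Duration {
        match self {
            // Saturates to zero if the deadline has already passed.
            Deadline::Monotonic(t) => t.saturating_duration_since(Instant::now()),
            // duration_since errors if the deadline is in the past;
            // treat that as "due now".
            Deadline::Wall(t) => t
                .duration_since(SystemTime::now())
                .unwrap_or(Duration::ZERO),
        }
    }
}

fn main() {
    let d1 = Deadline::Monotonic(Instant::now() + Duration::from_millis(500));
    let d2 = Deadline::Wall(SystemTime::now() + Duration::from_secs(3600));
    println!("{:?} {:?}", d1.remaining(), d2.remaining());
}
```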

And to make an old hack come bite me in the ass harder, I used the ns values to order some timeouts after others between REALTIME and MONO (bad hack, I know). I can get rid of that, but I want my expiring timers to order in the same way and hopefully not error out because the system thinks they are too early. The slightly-off nanos between the two clock adjustments would cause some timers to appear to be too early, or maybe cross a bucket boundary, and some other weirdness to happen - I'm not fully sure. It probably depends on how I handle the two timer event streams I have glued together (you can get the offset adjustments too, so that might be a solution to that part, but not the rest).

This might sound like "that's too complex for basic timers in a language," but it really is pretty trivial. I'm sure I could find a way around it, but I don't think that's really useful anyway. I would still need a way to print out instants better than some constantly changing countdown (I print a minimal representation of the min/sec/nanos to the log for debugging). It is just so many hoops to do something so trivial.

Big deal if Instants don't have the same epoch offset when you convert them to nanos and make them observable (is that the right word?). Rust shouldn't be in the habit of trying to protect me from getting the right value when memory safety isn't an issue. For a self-declared systems language to hide details because you might misuse them is just inappropriate. Whoever did the time interface before my suggestions did a far worse job than I could if I even tried. The current solution of keeping the OS-dependent formats causes panics for simple algebra when done on the wrong platform.

At least my idea doesn't crash.

Why do people jump to liking comments like this that are a little behind the level of the rest of the discourse? If the Rust community really cared about making people feel welcome, they would make people feel like they are being heard. A simplistic comment like this makes me doubt people are even bothering to try to understand others at times. The community is so defensive and reflexive - Rust does no wrong and is perfect (except for unsafe, for which we are gathering more pitchforks for next time).

I know what the differences in the clocks are dude.

This all started because I wanted to print out an Instant to debug. Then I thought of just trying to write the rest of the timer code too, and various hacks like trying to make a Unix-epoch Instant didn't pan out. And then I just got curious as to why Instant was opaque, because it doesn't seem that bad to make it readable.

Possibly, but this is the wrong forum for that discussion. There are two official Rust fora:

  • This one, users.rust-lang.org (URLO), is intended to help people whose goal is to use or understand current Rust.
  • internals.rust-lang.org (IRLO), on the other hand, is the intended place for design discussions about the language and how it should change going forward.

From various topics you've posted, you obviously have strong opinions on how Rust can improve its design. You'll probably find discussions on IRLO more productive, as they have a much better chance of actually inspiring the language to change.

Interesting thing: before Linux started using clock mono, and while Intel still had per-core TSC counters, I had to write Java code to do this and try to determine the offset of the TSC at a fixed frequency (we pegged it at boot). It took about 500 lines of code, millions of loop iterations, and some asm for the RDTSC and CPUID instructions (we also calculated the cost of the RDTSC and System.nanoTime calls while at it). It was actually very educational and interesting to learn about clocks that way.

I have a problem model that Rust hasn't had to adapt to yet at all, and most people are completely unaware of how my field even works in terms of design or tradeoffs. Rust hasn't had to deal with very latency-sensitive applications yet (shaving micros and nanos). It is an under-represented part of the high-performance workload that Rust sees (though it seems to be brought up in the literature as a potential space for it).

That's the basis for most of the disagreements. Rust has dealt mostly with throughput. It isn't that I'm disagreeable, just that Rust hasn't really had to deal with latency at this level, so many of its constructs seem like a fight from start to finish because of things like this.