U64 for unique id

I admit this is independent of Rust.

I am writing a simple Entity-Component-System in Rust. I am trying to figure out what type to use for the `Entity` id.

Now, by my math:

2^64 ids ÷ (3 × 10^9 ids/second) ≈ 6.15 × 10^9 seconds ≈ 195 years

This means that if I have a single-threaded 3 GHz machine doing nothing but generating new IDs in sequence, it would take me roughly 195 years to exhaust all u64 ids?
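As a sanity check, the arithmetic can be reproduced directly (a quick back-of-the-envelope sketch, not ECS code):

```rust
fn main() {
    let total_ids = 2f64.powi(64);                // number of distinct u64 values
    let ids_per_sec = 3.0e9;                      // 3 GHz, one id per cycle
    let secs_per_year = 365.25 * 24.0 * 3600.0;
    let years = total_ids / ids_per_sec / secs_per_year;
    println!("{years:.0} years");                 // roughly 195 years
}
```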

(The importance of this question is that it guides whether I use u64 or u128, and whether I try to 'reuse ids' or just run them in sequence and not care.)

The question is:

  1. Is the math above correct?
  2. Is it sufficient to use u64 for ids if we are incrementing in sequence?

Since you are writing an ECS system, have you considered using the specs crate?

Your math looks right.
For all normal use-cases of a simple ECS system, u64 would be just fine.

ECS IDs are indexes into an array. If you don't reuse indices and keep growing the array, you'd exhaust your RAM before you run out of index space, no?

@RustyYato :

  1. Yes, I've spent 1-2 days playing with the rust/specs crate.

  2. What I'm building is more of "EC" than "ECS".

  3. I also want the list of components for a "table" to be statically defined at compile time (better auto completion, more compile time checks), rather than specs' system of creating world then adding arbitrary components at runtime.

@marcianx: Not if you use a level of indirection like "btree dense vec"

where you have:

idx: BTreeMap<u64, usize>, // maps entity id to its slot in `data`
data: Vec<T>,              // stores the actual component data
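A minimal sketch of that indirection, with hypothetical names (`DenseStore`, `insert`, `get` are made up for illustration):

```rust
use std::collections::BTreeMap;

// "btree + dense vec": stable u64 entity ids on the outside,
// densely packed component data on the inside.
struct DenseStore<T> {
    idx: BTreeMap<u64, usize>, // entity id -> slot in `data`
    data: Vec<T>,              // densely packed component data
}

impl<T> DenseStore<T> {
    fn new() -> Self {
        DenseStore { idx: BTreeMap::new(), data: Vec::new() }
    }

    fn insert(&mut self, entity: u64, value: T) {
        self.idx.insert(entity, self.data.len());
        self.data.push(value);
    }

    fn get(&self, entity: u64) -> Option<&T> {
        self.idx.get(&entity).map(|&i| &self.data[i])
    }
}

fn main() {
    let mut store = DenseStore::new();
    store.insert(42, "position");
    assert_eq!(store.get(42), Some(&"position"));
    assert_eq!(store.get(7), None);
}
```

With this layout the entity ids never need to be reused: only the dense `data` vector consumes memory proportional to live entities.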

Ok, cool! Look into the source code of the generational-arena crate, as it has a nice way of reusing memory without wasting space. There is room for improvement, but it is a cool concept. That can be used to reuse indices and make even more use of your u64!
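To illustrate the idea (a hand-rolled sketch of the generational-index concept, not the crate's actual code): slots are reused, but each reuse bumps a generation counter, so stale handles can be detected.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Handle { index: usize, generation: u64 }

struct Arena<T> {
    slots: Vec<(u64, Option<T>)>, // (generation, value)
    free: Vec<usize>,             // slot indices available for reuse
}

impl<T> Arena<T> {
    fn new() -> Self { Arena { slots: Vec::new(), free: Vec::new() } }

    fn insert(&mut self, value: T) -> Handle {
        if let Some(index) = self.free.pop() {
            let slot = &mut self.slots[index];
            slot.0 += 1;                     // bump generation on reuse
            slot.1 = Some(value);
            Handle { index, generation: slot.0 }
        } else {
            self.slots.push((0, Some(value)));
            Handle { index: self.slots.len() - 1, generation: 0 }
        }
    }

    fn remove(&mut self, h: Handle) -> Option<T> {
        let slot = self.slots.get_mut(h.index)?;
        if slot.0 != h.generation { return None; }
        let v = slot.1.take();
        if v.is_some() { self.free.push(h.index); }
        v
    }

    fn get(&self, h: Handle) -> Option<&T> {
        let slot = self.slots.get(h.index)?;
        if slot.0 != h.generation { return None; }
        slot.1.as_ref()
    }
}

fn main() {
    let mut arena = Arena::new();
    let h1 = arena.insert("a");
    arena.remove(h1);
    let h2 = arena.insert("b");        // reuses the slot, new generation
    assert_eq!(h2.index, h1.index);
    assert!(arena.get(h1).is_none()); // stale handle detected
    assert_eq!(arena.get(h2), Some(&"b"));
}
```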


Assuming it takes 1 CPU cycle to increment a register and branch/jump instructions execute in 0 cycles, it will take roughly 195 years for a single 3 GHz CPU core to cycle through a 64-bit integer. You also cannot physically store all 2^64 IDs in memory simultaneously, in any case.

But this is an unrealistic measurement, and won't really help you answer your question until you consider a few other questions: Do you need global uniqueness? Are there possibilities of ID collisions across domains? Is u64 too large/do you expect to have more than 2^32 items accessible at a time?

Does TypeId help you?

I'm confused about what this buys me. The TypeId source appears to just use a u64:

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug, Hash)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct TypeId {
    t: u64,
}
What I meant is: instead of implementing your own thing, just use TypeId. They chose u64 for a good reason. As you said, it is almost impossible to create so many types that you'd overflow a u64, so why not go with that?
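For reference, TypeId is used like this: each distinct type gets a unique id derived at compile time.

```rust
use std::any::TypeId;

fn main() {
    // TypeId gives a unique, compile-time-derived id per type.
    let a = TypeId::of::<u32>();
    let b = TypeId::of::<String>();
    assert_ne!(a, b);
    assert_eq!(a, TypeId::of::<u32>());
}
```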

I see your point. I'm writing an Entity-Component system for fun and learning, so I'm trying to understand precisely what each piece does. (TypeId seems a bit overkill.)

It's a well-known fact in numerics that you can just test all f32s (https://randomascii.wordpress.com/2014/01/27/theres-only-four-billion-floatsso-test-them-all/) but you can't test all f64s. For the same reason, no, you won't practically run out of u64s (and, indeed, u64 is sufficient to address any memory that can ever be constructed according to known physics).


If you need process-wide unique ids, check https://crates.io/crates/snowflake

It uses an AtomicUsize plus a thread-local u64 counter, so it should be unique enough for any non-distributed use case and fast enough for most cases. It does take 128 bits in memory, though.
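The core mechanism can be sketched with just the standard library (this omits the thread-local layer the crate adds to avoid contending on the atomic for every id):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A process-wide atomic counter handing out unique u64 ids.
static NEXT_ID: AtomicU64 = AtomicU64::new(0);

fn new_id() -> u64 {
    // fetch_add returns the previous value, so each caller gets a distinct id.
    NEXT_ID.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    let a = new_id();
    let b = new_id();
    assert_ne!(a, b);
    println!("{a} {b}");
}
```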