There is also the cost of allocation. In most cases it becomes negligible in the limit as N grows, but not in all cases, as @2e71828 pointed out (e.g. if the allocation operation itself is worse than O(n)). So this "universally-known fact" has certain preconditions (which are met in most cases; thanks for correcting me, because I wasn't aware of that).
In particular, I was wrong to claim that the allocation cost matters when it is O(1) or even O(n). Side note: that happened because I looked at the cost of ALL insertions rather than the amortized cost of a single insertion. It was a mistake, and I didn't notice it because I'm not very familiar with complexity theory.
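To make that "all insertions vs. a single insertion" point concrete for myself, I wrote a rough sketch. It is not the real allocator, just a toy model that assumes the capacity doubles when full (typical growth strategies behave like this, even though the std docs don't promise an exact factor). Counting element moves over N pushes shows the total stays under 2N, so the amortized cost per push is O(1), even though a single push that triggers a reallocation is O(n):

```rust
/// Toy model of a growable array that doubles its capacity when full.
/// It only counts how many element moves (copies into a new buffer)
/// happen over `n` pushes; this illustrates the amortized argument,
/// it is not how `Vec` is implemented internally.
fn moves_for_n_pushes(n: u64) -> u64 {
    let mut capacity: u64 = 0;
    let mut len: u64 = 0;
    let mut moves: u64 = 0;

    for _ in 0..n {
        if len == capacity {
            // Reallocation: every existing element is moved once.
            moves += len;
            capacity = if capacity == 0 { 1 } else { capacity * 2 };
        }
        len += 1;
    }
    moves
}

fn main() {
    for &n in &[1_000u64, 1_000_000, 100_000_000] {
        let moves = moves_for_n_pushes(n);
        // The total stays below 2 * n, so the amortized cost per push is
        // bounded by a constant, even though a single push that hits a
        // reallocation moves all existing elements.
        println!(
            "n = {:>11}: {:>11} moves ({:.3} per push)",
            n,
            moves,
            moves as f64 / n as f64
        );
    }
}
```

For every N this stays between 1 and 2 moves per push, which is exactly the constant the geometric series n + n/2 + n/4 + … < 2n predicts.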
And not everyone is an expert in complexity theory (I'm certainly not), so I'm happy when people take the time to explain things to me, including the corner cases. Honestly, I wasn't aware that Vec::push is amortized O(1) even when I don't use Vec::with_capacity. Thanks for clarifying that (whether by pointing to "authoritative" sources or by reasoning it out).
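To see that with an actual Vec, I pushed into one created without with_capacity and printed the capacity whenever it changed. The exact growth factor is an implementation detail (so the specific numbers aren't guaranteed by the docs), but the point is that reallocations get exponentially rarer:

```rust
fn main() {
    let mut v: Vec<u32> = Vec::new(); // no with_capacity
    let mut last_cap = v.capacity();

    for i in 0..1_000_000u32 {
        v.push(i);
        if v.capacity() != last_cap {
            // The capacity only changes O(log n) times over n pushes,
            // which is why push is amortized O(1) even without with_capacity.
            println!(
                "reallocated at len {:>7}: capacity {:>7} -> {:>7}",
                v.len(),
                last_cap,
                v.capacity()
            );
            last_cap = v.capacity();
        }
    }
}
```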
Yeah, all of these analyses are "wrong" in some sense; they always rest on a certain set of assumptions. For example, we're ignoring the fact that addition isn't O(1) for unbounded integers.
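(Quick sketch of what I mean by that aside, using base-10 digit vectors rather than a real bignum type just to keep it self-contained: schoolbook addition has to walk every digit, so it's O(n) in the number of digits, not O(1).)

```rust
/// Schoolbook addition of two unbounded non-negative integers stored as
/// little-endian base-10 digit vectors. Every digit is visited once, so
/// the work is O(n) in the number of digits -- not O(1).
fn add_digits(a: &[u8], b: &[u8]) -> Vec<u8> {
    let mut result = Vec::with_capacity(a.len().max(b.len()) + 1);
    let mut carry = 0u8;
    for i in 0..a.len().max(b.len()) {
        let da = a.get(i).copied().unwrap_or(0);
        let db = b.get(i).copied().unwrap_or(0);
        let sum = da + db + carry;
        result.push(sum % 10);
        carry = sum / 10;
    }
    if carry > 0 {
        result.push(carry);
    }
    result
}

fn main() {
    // 999 + 1 = 1000, digits stored least-significant first.
    let a = vec![9, 9, 9];
    let b = vec![1];
    assert_eq!(add_digits(&a, &b), vec![0, 0, 0, 1]);
    println!("ok: {:?}", add_digits(&a, &b));
}
```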
Not to mention that O(1) random memory access is also a lie; in reality it's more like O(√N), since big-O is asymptotic: The Myth of RAM, part I
Are you familiar with the 'Word RAM model' typically used in the analysis of algorithms? We're measuring operations in terms of log(n)-sized registers.
It is not a lie. O(1) does not measure "time of RAM / time of L1 cache"; O(1) measures the number of memory accesses. Again, see the Word RAM model of computation.
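To be concrete about what gets counted: in that model each access to a word-sized cell costs 1, regardless of whether it hits L1 or DRAM. For example, a binary search over n words does about log2(n) + 1 accesses, full stop. A quick sketch of counting it that way (the access counter is my own instrumentation for illustration, not anything from std):

```rust
/// Binary search that counts array reads instead of measuring time,
/// which is what the word RAM cost model does: each access to a
/// word-sized cell costs 1, regardless of actual cache/DRAM latency.
fn binary_search_counting(slice: &[u64], target: u64) -> (Option<usize>, u64) {
    let mut reads = 0u64;
    let (mut lo, mut hi) = (0usize, slice.len());
    while lo < hi {
        let mid = lo + (hi - lo) / 2;
        reads += 1; // one memory access in the model
        if slice[mid] == target {
            return (Some(mid), reads);
        } else if slice[mid] < target {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    (None, reads)
}

fn main() {
    let n = 1_000_000u64;
    let data: Vec<u64> = (0..n).collect();
    let (idx, reads) = binary_search_counting(&data, n - 1);
    // Roughly log2(n) ~= 20 accesses for a million elements.
    println!(
        "found at {:?} using {} accesses (log2(n) ~ {:.1})",
        idx,
        reads,
        (n as f64).log2()
    );
}
```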
Hey, if you're going to use a model that only looks at asymptotics…
(Though I also like the smaller-scale notes they have about it, like how it also applies to chips today, since we can basically only make chips in a plane these days, or even mundane ones like libraries being roughly √N too, since we can't build them as spheres and phase directly through the books to go pick them up.)