In my code, I have `data: HashMap<i32, Vec<f32>>`, which holds more than 5000 key-value pairs, and each value (the `Vec<f32>`) has a length greater than 100_000. When I call `data.clear()`, it takes more than 150 seconds. I thought it was a problem with my computer hardware.
My computer hardware:
OS: Windows (x86_64-w64-mingw32)
CPU: 20 × 12th Gen Intel(R) Core(TM) i7-12700K
WORD_SIZE: 64
RAM: 128G, 3600MHz
DISK: volume 466G, random 16.0 write: 394.20 MB/s, random 16.0 read: 568.70 MB/s
If I only need to accelerate `data.clear()`, setting everything else aside, what should I do?
I ran some simple tests, and `data.clear()` on data of that shape should take ~160ms. A ~1000x discrepancy is not something hardware specs can explain. Check your code first, or post a minimal reproduction so we can try to figure out what's wrong.