Large vectors take a long time to compile

I was working on this problem in Rust: https://leetcode.com/problems/subarray-sum-equals-k/

One of the test cases is a vector of 20,000 elements. It takes a long time to compile the code (about 35 seconds) and I have a pretty powerful MacBook Pro. I've narrowed down the problem to this long vector.

Is this a known problem?

PS: the code runs very fast after it compiles.
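For context, the OP's code isn't shown, but a typical O(n) prefix-sum solution to this problem looks something like the sketch below (the function name and signature are illustrative, not the OP's actual code). The large test vector would be passed in as `nums`:

```rust
use std::collections::HashMap;

// Count subarrays summing to k using prefix sums: if prefix[j] - prefix[i] == k,
// the subarray (i, j] sums to k, so for each running sum we look up how many
// earlier prefix sums equal (sum - k).
fn subarray_sum(nums: &[i32], k: i32) -> i32 {
    let mut counts: HashMap<i64, i32> = HashMap::new();
    counts.insert(0, 1); // the empty prefix
    let mut sum: i64 = 0;
    let mut total = 0;
    for &n in nums {
        sum += n as i64;
        if let Some(&c) = counts.get(&(sum - k as i64)) {
            total += c;
        }
        *counts.entry(sum).or_insert(0) += 1;
    }
    total
}

fn main() {
    assert_eq!(subarray_sum(&[1, 1, 1], 2), 2);
    assert_eq!(subarray_sum(&[1, 2, 3], 3), 2);
    println!("ok");
}
```

The runtime is linear, which matches the observation that the program runs fast once compiled; the slowness comes purely from compiling the huge vector literal in the test case.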

Without seeing the code it's hard to say. Please share an example on https://play.rust-lang.org

It is a known issue. It seems to be an LLVM bug, but who knows.


It may be overkill for an exercise, but in general, if you have lots of data to embed in a program, you can use include_bytes!().

It loads the binary data as a bunch of unaligned bytes, so for bigger types you'd need serialization and deserialization steps. If you want to be super-efficient, you could use ptr::read_unaligned to read the binary representation directly. A quick'n'dirty way would be to include JSON and copy the data out with serde_json::from_slice.
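As a concrete sketch of the deserialization step: in a real program the bytes would come from `include_bytes!("data.bin")` (the file name is hypothetical), but here an inline little-endian byte array stands in for the included data so the example is self-contained. This uses safe `from_le_bytes` instead of `ptr::read_unaligned`, which handles alignment for you:

```rust
// Decode a flat sequence of little-endian i32s from raw bytes.
// With embedded data this would be:
//   static DATA: &[u8] = include_bytes!("data.bin"); // hypothetical file
fn decode_i32s(bytes: &[u8]) -> Vec<i32> {
    bytes
        .chunks_exact(4) // each i32 occupies 4 bytes; trailing partial chunks are dropped
        .map(|c| i32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    // 1, 2, 300 encoded as little-endian i32s
    let bytes: &[u8] = &[1, 0, 0, 0, 2, 0, 0, 0, 44, 1, 0, 0];
    assert_eq!(decode_i32s(bytes), vec![1, 2, 300]);
    println!("ok");
}
```

Since the compiler only sees an opaque byte blob rather than 20,000 individual expressions, this sidesteps the slow-compile path entirely.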


@Hyeonu thanks for sharing the known issue. It's a bit crazy that compile time grows quadratically as the vector/array size grows.

