One of the test cases is a vector of 20,000 elements. It takes a long time (about 35 seconds) to compile the code, even on my fairly powerful MacBook Pro. I've narrowed the problem down to this long vector.
It may be overkill for an exercise, but in general, if you have lots of data to embed in a program, you can use include_bytes!().
It loads the data as a bunch of unaligned bytes, so for larger types you'd need serialization and deserialization steps. If you want to be super-efficient, you could use ptr::read_unaligned to read the binary representation directly. A quick'n'dirty alternative would be to embed JSON and copy the data out with serde_json::from_slice.