You might be putting the cart before the horse here.
A hashset uses heap allocations, yes, but these are amortized, so you only reallocate infrequently; after that, inserting is just a case of finding the right bucket and copying the value across.
The concern people have with things like hashsets isn't that they use dynamic allocations, but that they are designed to spread accesses out across the entire backing buffer instead of putting similar items next to each other. This can hurt performance because your CPU cache likes sequential accesses, while random access means more cache misses and more trips out to main memory.
If what you are actually looking for is a fast way to read/insert/delete items into a collection with no duplicates then rule #1 is to write benchmarks using typical data from your application and get numbers for the various alternatives.
You should also get numbers showing your bottleneck actually is the hashset and not somewhere else... it'd be annoying to spend hours tweaking hashset lookups when your actual problem is a bad algorithm making you do O(n³) lookups where O(n) would have sufficed.
Another common concern is the hashing speed, but that can be addressed by using an alternate hashing algorithm with the standard HashMap / HashSet. Unfortunately, I don't know the favored alternatives off the top of my head.
Like the heap allocations, though, this is only a practical issue in a few edge cases. Best to profile first and only spend effort fixing actual issues.
As @2e71828 mentioned, you can swap out the hashing algorithm for one that doesn't try to defend against hash-flooding DoS attacks - I know the Rust compiler uses the fxhash crate internally for exactly this reason.
Also, depending on how much data you are storing in the set/map, a legitimate alternative is to keep things in a Vec and do a linear scan to find the right item. Bjarne Stroustrup gave a CppCon talk investigating the relative performance of different collections, but I can't remember the title.
If you need the absolute fastest speed, then general advice can only get you so far. The performance of each option will depend on the workload you're putting on the collection, and the only way to eke out the last few percent of performance is to try everything and benchmark it with as close to a real situation as you can manage. Criterion is a good way to start writing high-quality benchmarks.
I was a bit surprised not to see nohash_hasher mentioned here yet. I'm not sure what keys you're working with, but if they can reasonably be kept to 64 bits or less (e.g. a u64 or a small integer ID) you could forgo hashing computations entirely. Having your data on the heap may not be the biggest bottleneck here, as the Rust compiler is pretty good at optimizing.
Then, depending on your data size and amount, you could also look into arrayvec for a very limited, but potentially entirely stack-allocated option.
Further, if the keys could be defined at compile time, you could use an enum for those and I'm pretty sure that could be made to work with nohash_hasher.
If you know the max size in advance and don't want any heap-related slowdown during use, the simplest fix is to initialize your set with HashSet::with_capacity(max_elements). There will be only one allocation at the start, and your insertions and deletions will never reallocate. It'll be nearly as fast as if it were on the stack!
(But that shouldn't stop you from also applying the other optimizations hinted at in this thread.)