I’m currently trying to diagnose a scaling problem I’ve observed while benchmarking my code. Specifically, I have a test case that should be O(N) whose run time is growing faster than that, so I know that somewhere in my code there is probably something O(N^2), based on the slope on a log-log plot. The trouble is that this behavior only shows up once the computation takes about two minutes, and even then the nonlinear contribution is a pretty small fraction of the run time. I’ve attempted the obvious approach of profiling a large-N run, but nothing popped out.
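For what it’s worth, the way I’ve been estimating the exponent is to fit a least-squares slope to log(time) versus log(N). A minimal sketch of that fit (the data below is synthetic, made up just to check the math, not my real timings):

```rust
// Least-squares slope of ln(time) vs. ln(n) over a handful of
// (n, seconds) measurements. A slope near 1 means O(N); near 2, O(N^2).
fn fit_log_log_slope(samples: &[(f64, f64)]) -> f64 {
    let m = samples.len() as f64;
    let (mut sx, mut sy, mut sxx, mut sxy) = (0.0, 0.0, 0.0, 0.0);
    for &(n, t) in samples {
        let (x, y) = (n.ln(), t.ln());
        sx += x;
        sy += y;
        sxx += x * x;
        sxy += x * y;
    }
    (m * sxy - sx * sy) / (m * sxx - sx * sx)
}

fn main() {
    // Synthetic check: t = c * n^2 should give a slope of 2.
    let quad: Vec<(f64, f64)> = [1000.0, 2000.0, 4000.0, 8000.0]
        .iter()
        .map(|&n| (n, 1e-9 * n * n))
        .collect();
    println!("slope = {:.3}", fit_log_log_slope(&quad)); // ~2.000
}
```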
It has occurred to me that microbenchmarks might be more to the point. If I can benchmark separate functions, maybe I can more easily and cheaply identify which ones scale as I expect and which don’t. I haven’t done any of this yet, mostly because I want to stick with stable Rust. Presumably I could create benchmarks that are only used when building with the nightly compiler?
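What I have in mind is something like this stdlib-only harness, which sidesteps nightly entirely by just using `std::time::Instant` (the closure here is a placeholder workload, not one of my real functions):

```rust
use std::time::Instant;

// Time `f` at each input size, keeping the best of several runs to
// reduce noise. Stable Rust, no external crates.
fn bench<F: FnMut(usize)>(sizes: &[usize], runs: u32, mut f: F) -> Vec<(usize, f64)> {
    sizes
        .iter()
        .map(|&n| {
            let mut best = f64::INFINITY;
            for _ in 0..runs {
                let start = Instant::now();
                f(n);
                best = best.min(start.elapsed().as_secs_f64());
            }
            (n, best)
        })
        .collect()
}

fn main() {
    // Placeholder workload; black_box keeps the optimizer from
    // deleting the computation (stable since Rust 1.66).
    let results = bench(&[1_000, 2_000, 4_000, 8_000], 5, |n| {
        let v: Vec<u64> = (0..n as u64).collect();
        std::hint::black_box(v.iter().sum::<u64>());
    });
    for (n, t) in results {
        println!("n = {:>6}: {:.6} s", n, t);
    }
}
```

Then I could feed the resulting (n, time) pairs into the log-log fit to get a per-function exponent.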
Any other suggestions for what to do? I’m guessing somewhere in my code is a
Vec::contains or similar (introduced to fix a bug, I’m sure), but it’s pretty scary code, and I feel daunted at trying to find it.
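The pattern I suspect looks something like this (hypothetical dedup, not my actual code): each `Vec::contains` is itself O(N), so calling it once per element makes the loop O(N^2), whereas a `HashSet` does the same job in roughly O(N):

```rust
use std::collections::HashSet;

// O(N^2): Vec::contains scans the whole vector on every call.
fn dedup_quadratic(input: &[u32]) -> Vec<u32> {
    let mut out = Vec::new();
    for &x in input {
        if !out.contains(&x) {
            out.push(x);
        }
    }
    out
}

// O(N): HashSet membership is O(1) on average, and `insert`
// returns false for an element that was already present.
fn dedup_linear(input: &[u32]) -> Vec<u32> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for &x in input {
        if seen.insert(x) {
            out.push(x);
        }
    }
    out
}

fn main() {
    let data = [3, 1, 3, 2, 1, 4];
    assert_eq!(dedup_quadratic(&data), dedup_linear(&data));
    println!("{:?}", dedup_linear(&data)); // [3, 1, 2, 4]
}
```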
Any suggestions would be welcome!