If I benchmarked it, I would find that single-threaded iterators beat parallel iterators for small workloads: the fixed cost of setting up the parallel iterator dominates. As the amount of computation grows, that setup cost stops being the bottleneck, and past a certain point parallel iterators win. This "tipping point" can be found locally via benchmarking, but my question is: is there an established or generalized formula, taking environmental and hardware variables such as clock speed and core count, that predicts the tipping point? It will differ from machine to machine. Could it instead be discovered at run time, say by giving the program a "warm-up" phase before initialization that measures where the crossover lies? Any advice from the pros?
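To make the "warm-up" idea concrete, here is a minimal calibration sketch. All names, sizes, and thresholds are illustrative assumptions, and the parallel path uses plain `std::thread` scoped threads rather than any particular parallel-iterator library, so it only approximates that library's overhead:

```rust
use std::thread;
use std::time::Instant;

// Sequential baseline: a plain iterator sum.
fn sequential_sum(data: &[u64]) -> u64 {
    data.iter().copied().sum()
}

// Parallel version: split the slice into one chunk per thread and
// sum the partial results. Scoped threads let us borrow `data`.
fn parallel_sum(data: &[u64], threads: usize) -> u64 {
    let chunk = (data.len() + threads - 1) / threads.max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk.max(1))
            .map(|part| s.spawn(move || part.iter().copied().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

// Warm-up calibration: time both versions on growing inputs and
// report the first size at which the parallel version wins. The
// size range (2^10 .. 2^22) is an arbitrary illustrative choice.
fn find_tipping_point() -> Option<usize> {
    let threads = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4);
    for exp in 10..23 {
        let n = 1usize << exp;
        let data: Vec<u64> = (0..n as u64).collect();

        let t0 = Instant::now();
        let a = sequential_sum(&data);
        let seq = t0.elapsed();

        let t1 = Instant::now();
        let b = parallel_sum(&data, threads);
        let par = t1.elapsed();

        assert_eq!(a, b); // both paths must agree
        if par < seq {
            return Some(n);
        }
    }
    None
}

fn main() {
    match find_tipping_point() {
        Some(n) => println!("parallel wins from roughly {n} elements"),
        None => println!("sequential stayed faster over the tested range"),
    }
}
```

In practice you would run the calibration once at startup (or cache the result per machine) and branch on input size at run time; note that a memory-bound sum like this understates the benefit of parallelism for compute-heavy work, so calibrate with a workload shaped like your real one.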
Table 1 of this paper might be of interest.