Hello people,
I've just published a micro crate for retrying failed mappings in an iterator. I often write small scripts that fetch websites using iterators, and even though errors are rare, giving failed items another shot doesn't hurt, especially when it's just a matter of adding an import and changing the name of one function.
use map_retry::MapRetry;
a.iter().map_retry(|a| failable(a)) // replace map with map_retry
I suppose this retries immediately? There is no gradual backoff strategy?
For network use, I would imagine something like an async fetch, where `fetch_url("http://...", timeout).await` would retry the download several times with a gradual backoff before timing out.
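To make the backoff idea concrete, here is a minimal synchronous sketch of an exponential-backoff retry helper. All names (`retry_with_backoff`, `max_attempts`, `initial_delay`) are hypothetical, not part of the crate being discussed; the delay simply doubles after each failure.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay after each
/// failure. Hypothetical helper illustrating the backoff idea above.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // exponential backoff: double the wait
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulated flaky operation: fails twice, then succeeds.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("connection reset") } else { Ok("body") }
    });
    assert_eq!(result, Ok("body"));
    assert_eq!(calls, 3);
}
```

An async version would look the same except for awaiting the operation and a timer (e.g. a sleep future) instead of blocking the thread.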
It doesn't retry immediately; instead, once the normal iteration is finished, it goes over the failed requests again.
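The behavior described above can be sketched eagerly like this. This is only an illustration of the semantics, not the crate's actual lazy iterator implementation: successes are yielded in order, failures are collected, and each failed item gets one more try after the input is exhausted.

```rust
use std::collections::HashSet;

/// Eager sketch of "retry failed items at the end": map `f` over all
/// items first, then give each failed item one more try. Illustrative
/// only; the real crate does this lazily as an iterator adapter.
fn map_retry_sketch<T, U, E>(
    items: Vec<T>,
    mut f: impl FnMut(&T) -> Result<U, E>,
) -> Vec<Result<U, E>> {
    let mut out = Vec::new();
    let mut failed = Vec::new();
    // First pass: yield successes immediately, remember failures.
    for item in &items {
        match f(item) {
            Ok(v) => out.push(Ok(v)),
            Err(_) => failed.push(item),
        }
    }
    // Second pass: retry each failed item once, at the end.
    for item in failed {
        out.push(f(item));
    }
    out
}

fn main() {
    // Fails the first time it sees an even number, succeeds after that.
    let mut seen = HashSet::new();
    let res = map_retry_sketch(vec![1, 2, 3], |&x| {
        if x % 2 == 0 && seen.insert(x) { Err("flaky") } else { Ok(x * 10) }
    });
    // The retried item (2) comes out last.
    assert_eq!(res, vec![Ok(10), Ok(30), Ok(20)]);
}
```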
You are totally right that for network requests something like exponential backoff is better, but I didn't want to tie my crate to timing concerns, since it stays more general without them.
The other idea I had was to let the user specify how often to check the failed items. E.g. attempt to process 5 new items, then check one from the failed queue. This would both give some control over how long to wait before a retry and allow better performance for iterators where you expect many failures.
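That interleaving idea could be sketched like this. Again all names and the parameter `k` are hypothetical; after every `k` fresh items, one previously failed item gets a second chance, and any remaining failures are drained at the end as before.

```rust
/// Sketch of interleaved retries: process `k` fresh items, then retry
/// one previously failed item. Hypothetical illustration of the idea
/// above, not an API the crate currently provides.
fn map_retry_interleaved<T, U, E>(
    items: Vec<T>,
    k: usize,
    mut f: impl FnMut(&T) -> Result<U, E>,
) -> Vec<Result<U, E>> {
    let mut out = Vec::new();
    let mut failed: Vec<&T> = Vec::new();
    let mut since_retry = 0;
    for item in &items {
        match f(item) {
            Ok(v) => out.push(Ok(v)),
            Err(_) => failed.push(item),
        }
        since_retry += 1;
        // After every `k` fresh items, retry the oldest failure.
        if since_retry >= k && !failed.is_empty() {
            let retry = failed.remove(0);
            out.push(f(retry));
            since_retry = 0;
        }
    }
    // Input exhausted: drain whatever is still pending.
    for item in failed {
        out.push(f(item));
    }
    out
}

fn main() {
    // Fails the first time it sees 1, succeeds on every later call.
    let mut first_try = true;
    let res = map_retry_interleaved(vec![1, 2, 3, 4], 2, |&x| {
        if x == 1 && std::mem::take(&mut first_try) { Err("flaky") } else { Ok(x * 10) }
    });
    // Item 1 is retried after two fresh items instead of at the very end.
    assert_eq!(res, vec![Ok(20), Ok(10), Ok(30), Ok(40)]);
}
```

The retry spacing falls out of the item rate rather than a clock, so the crate still avoids any dependency on time.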