I'm working on a program that needs to send files to a server as fast as possible, but I'm hitting the open file descriptor limit and getting a "too many open files" (EMFILE) error.
I can work around this by manually lowering the concurrency, but hardcoding a limit like that isn't flexible enough.
I've also considered retry logic, but blindly hammering the filesystem with retries until an open succeeds seems messy too.
Is there a more idiomatic approach to this? Some kind of "file descriptor backpressure" crate or pattern?
Is it possible to have an async system that queues the files and only opens the next one when it's actually able to, instead of surfacing these errors at all?