Heh, I hoped for some existing utility! “Wait until some data is available, and then read all available data” sounds like a pretty primitive operation for an async world! But cooking up my own solution is also fine!
That kind of operation can be pretty dangerous - a sufficiently fast writer can OOM you. Reading just as much as you need and processing it before reading more avoids that, because the rest of the application logic acts as backpressure.
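Something like this shape, as a minimal sketch in async Rust with tokio (the `process` function here is just a hypothetical stand-in for "the rest of the application logic"):

```rust
use tokio::io::{AsyncRead, AsyncReadExt};

/// Stand-in for whatever the application actually does with a chunk.
async fn process(chunk: &[u8]) {
    let _ = chunk;
}

async fn pump(mut reader: impl AsyncRead + Unpin) -> std::io::Result<()> {
    // Fixed-size buffer: a fast writer can never force more than this
    // into memory at once.
    let mut buf = [0u8; 4096];
    loop {
        let n = reader.read(&mut buf).await?;
        if n == 0 {
            break; // EOF
        }
        // Backpressure: we don't issue the next read until this finishes.
        process(&buf[..n]).await;
    }
    Ok(())
}
```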
That's an excellent point, and totally something I should handle, because this is for a terminal-shaped thingy...
This reminds me of a cute pattern in the Zig standard library, where all read_to_end-shaped APIs take an extra max_len argument as an upper bound on the amount of data you expect to read in the worst case...
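A rough analogue of that bound can be sketched in plain Rust with std's `Read::take` (this is just my sketch of the pattern, not the Zig API; the extra byte is there to tell "stream is exactly max_len" apart from "stream is too long"):

```rust
use std::io::{self, Read};

/// Read the whole stream, but refuse to buffer more than `max_len` bytes.
fn read_to_end_bounded(reader: impl Read, max_len: u64) -> io::Result<Vec<u8>> {
    // Allow one extra byte so we can detect that the stream exceeded the cap.
    let mut limited = reader.take(max_len + 1);
    let mut buf = Vec::new();
    limited.read_to_end(&mut buf)?;
    if buf.len() as u64 > max_len {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "stream longer than max_len",
        ));
    }
    Ok(buf)
}
```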