If you don't have an upper bound on the size of the chunks, using `tokio::io::BufReader` to get an `AsyncBufRead` instance may not work reliably for you. You can implement this fairly directly with a `Vec`, though:
```rust
use futures_util::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("starting");
    let mut stream = reqwest::get("http://localhost:8080/").await?.bytes_stream();
    let mut buffer = Vec::new();
    while let Some(item) = stream.next().await {
        for byte in item? {
            // Might need to consider carriage returns too depending
            // on how the server is expected to send the data.
            if byte == b'\n' {
                println!("Got chunk: {:?}", buffer.as_slice());
                buffer.clear();
            } else {
                buffer.push(byte);
            }
        }
    }
    // Note: any trailing bytes not followed by a newline are still
    // sitting in `buffer` at this point.
    Ok(())
}
```
There are probably ways to make that more efficient depending on the average expected chunk size, but it works for me as is.
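For instance, if the per-byte `push` loop ever shows up in a profile, one option is to scan each incoming chunk for newlines and copy whole runs with `extend_from_slice` instead. A rough sketch of that idea (the `split_chunks` helper and the sample data are mine, not from the thread, and it's synchronous just to keep the example self-contained; the same caveat about unterminated trailing data applies):

```rust
// Sketch: split incoming byte chunks on b'\n' using slice operations
// rather than a per-byte loop. Each &[u8] in `chunks` stands in for
// one item from bytes_stream.
fn split_chunks(chunks: &[&[u8]]) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    let mut buffer: Vec<u8> = Vec::new();
    for chunk in chunks {
        let mut rest: &[u8] = chunk;
        // Find each newline in the remaining slice and copy the run
        // before it in one go.
        while let Some(pos) = rest.iter().position(|&b| b == b'\n') {
            buffer.extend_from_slice(&rest[..pos]);
            out.push(std::mem::take(&mut buffer));
            rest = &rest[pos + 1..];
        }
        // No more newlines in this chunk; stash the remainder until
        // the next chunk (or end of stream) completes the line.
        buffer.extend_from_slice(rest);
    }
    out
}

fn main() {
    // A line split across two chunks reassembles correctly.
    let chunks: Vec<&[u8]> = vec![b"hello\nwor", b"ld\n"];
    let lines = split_chunks(&chunks);
    assert_eq!(lines, vec![b"hello".to_vec(), b"world".to_vec()]);
    println!("{:?}", lines);
}
```

Whether this actually wins over the byte-at-a-time version depends on chunk sizes, so it's worth measuring before committing to it.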
Thanks @semicoleon , I had considered something similar too but was hoping to avoid it. That was my fallback plan though, thanks for confirming it wasn't too crazy of an idea!