Reading from a hyper body indefinitely

online log tail implementation


I am teaching myself the language by doing little projects. I'm a newbie in Rust, and usually I hit a wall quite early.
Now I'm trying to write a tail implementation for "online log files".
For that I use the libraries hyper and tokio. No surprise :).
The following snippet could probably be written more compactly, but it helps me understand what is going on.

    let http_body: &mut Body = res.body_mut();
    loop {
        if http_body.is_end_stream() {
            println!("End of HTTP stream");
        }
        let data = http_body.data().await;

        match data {
            Some(b) => match b {
                Ok(v) => tokio::io::stdout().write_all(&v).await?,
                Err(e) => return Err(Box::new(e)),
            },
            None => tokio::time::delay_for(Duration::from_secs(3)).await,
        }
    }

This prints the content of the webpage and then panics:
thread 'main' panicked at 'Receiver::next_message called after None',
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

The error makes sense, but I'm lacking ideas on how to implement this correctly.

My goal: I want the program to stay connected and keep polling for data. If there is new data, print it; otherwise wait and try again later. But only the new data.

Maybe you can point me in the right direction, or you know of libraries where I can see such an implementation.

The only idea I can come up with is to implement the HttpBody trait by copying the Body implementation provided by hyper and modifying it. That is too hard for me at the moment, but at least I would have a starting point.



If data is None, the input has closed and there won't be any more data.

Thank you for your reply!
Yes, I understood that. My question is how I should approach this problem so that None does not close the stream. I may need a totally different approach.

If data is None, you should return. There will not be more data later.
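A minimal sketch of the loop with that fix, assuming hyper 0.13 and tokio 0.2 (matching the tokio::time::delay_for call in the snippet above): instead of sleeping and polling again after None, leave the loop.

```rust
// Hedged sketch, assuming hyper 0.13 / tokio 0.2 as in the original snippet.
use hyper::body::HttpBody as _; // brings `data()` into scope
use tokio::io::AsyncWriteExt as _;

async fn print_body(http_body: &mut hyper::Body) -> Result<(), Box<dyn std::error::Error>> {
    loop {
        match http_body.data().await {
            // A new chunk arrived: print it.
            Some(Ok(chunk)) => tokio::io::stdout().write_all(&chunk).await?,
            Some(Err(e)) => return Err(Box::new(e)),
            // None means the body is finished; polling again is what panics,
            // so exit the loop instead of sleeping and retrying.
            None => return Ok(()),
        }
    }
}
```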

My English isn't the best, so I probably didn't express myself well enough.
I want to tail a log file from the web, similar to how the tail program follows a local file.
e.g.: http://server/logfile (that's not a real link)
The logfile keeps growing.

But maybe I'm on a totally wrong path. I don't know.


Web servers normally serve the version of the file available when the request is handled, so if more data is appended later, the server won't send it. You would need to either change the server so that it continually serves the file, or perform a new request that asks only for a range of the file.
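The second option above can be sketched as follows: remember how many bytes were already printed and request only the rest with a Range header. Here `fetch_range` is a hypothetical stand-in for a real HTTP client call (with hyper, reqwest, etc.) that sends `Range: bytes=offset-` and returns any new bytes.

```rust
use std::time::Duration;

/// Value for the `Range` request header: everything from `offset` to the end.
fn range_header(offset: u64) -> String {
    format!("bytes={}-", offset)
}

/// Placeholder: a real implementation would issue the HTTP request with the
/// given Range header and return the response body, or None on 416 / no data.
fn fetch_range(_url: &str, _range: &str) -> Option<Vec<u8>> {
    None // assume nothing new, for this sketch
}

fn tail(url: &str) {
    let mut offset: u64 = 0;
    loop {
        let range = range_header(offset);
        match fetch_range(url, &range) {
            Some(bytes) if !bytes.is_empty() => {
                print!("{}", String::from_utf8_lossy(&bytes));
                // Only ask for data past this point next time.
                offset += bytes.len() as u64;
            }
            // 416 Range Not Satisfiable / no new data: wait and retry.
            _ => std::thread::sleep(Duration::from_secs(3)),
        }
    }
}
```

This mirrors how tail -f works on a local file: track a position, read only what was appended after it, and sleep when nothing is new.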

If the server is changed to continually send more data, then your call to .data().await would sleep instead of returning None. It only returns None if the server has decided not to send more data.


Good morning,

You are right. It is not possible to do this without changing the server. I had the wrong expectations.

Thank you for your explanation.
