How can I fix a streaming timeout problem in actix-web?

I would like to use actix-web as a simple proxy server, but when I stream huge files, the server hits a timeout error and only a small part of the file gets downloaded.

I'm really confused, because this is sample code from the actix-web examples!

Example code:

extern crate actix;
extern crate actix_web;
extern crate env_logger;
extern crate futures;

use actix_web::{
    client, middleware, server, App, AsyncResponder, Body, Error, HttpMessage,
    HttpRequest, HttpResponse,
};
use futures::{Future, Stream};

/// Stream client request response and then send body to a server response
fn index(_req: &HttpRequest) -> Box<Future<Item = HttpResponse, Error = Error>> {
    client::ClientRequest::get("http://127.0.0.1:8081/")
        .finish().unwrap()
        .send()
        .map_err(Error::from)          // <- convert SendRequestError to an Error
        .and_then(
            |resp| resp.body()         // <- this is MessageBody type, resolves to complete body
                .from_err()            // <- convert PayloadError to an Error
                .and_then(|body| {     // <- we got complete body, now send as server response
                    Ok(HttpResponse::Ok().body(body))
                }))
        .responder()
}

/// streaming client request to a streaming server response
fn streaming(_req: &HttpRequest) -> Box<Future<Item = HttpResponse, Error = Error>> {
    // send client request
    client::ClientRequest::get("https://gemmei.ftp.acc.umu.se/debian-cd/current/amd64/iso-cd/debian-9.7.0-amd64-netinst.iso")
        .finish().unwrap()
        .send()                         // <- connect to host and send request
        .map_err(Error::from)           // <- convert SendRequestError to an Error
        .and_then(|resp| {              // <- we received client response
            Ok(HttpResponse::Ok()
               // read one chunk from client response and send this chunk to a server response
               // .from_err() converts PayloadError to an Error
               .body(Body::Streaming(Box::new(resp.payload().from_err()))))
        })
        .responder()
}

fn main() {
    ::std::env::set_var("RUST_LOG", "actix_web=info");
    env_logger::init();
    let sys = actix::System::new("http-proxy");

    server::new(|| {
        App::new()
            .middleware(middleware::Logger::default())
            .resource("/streaming", |r| r.f(streaming))
            .resource("/", |r| r.f(index))
    }).workers(1)
        .bind("127.0.0.1:8080")
        .unwrap()
        .start();

    println!("Started http server: 127.0.0.1:8080");
    let _ = sys.run();
}

Dependencies:

[dependencies]
env_logger = "0.5"
futures = "0.1"
actix = "0.7"
actix-web = { version="0.7", features=["ssl"] }

Error shown in the logs:

ERROR 2019-02-09T18:10:16Z: actix_web::pipeline: Error occurred during request handling: Timeout while waiting for response
ERROR 2019-02-09T18:10:16Z: actix_web::server::h1: Unhandled error1: Timeout while waiting for response

As you can see in the attached sample picture, the response data has no Content-Length and no file name.

You're running into a client timeout issue. The actix client will abort connections that exceed its default timeout value, which is five seconds. Try setting the timeout to something like 3 minutes:

// send client request
client::ClientRequest::get("https://gemmei.ftp.acc.umu.se/debian-cd/current/amd64/iso-cd/debian-9.7.0-amd64-netinst.iso")
    .timeout(std::time::Duration::new(180, 0))
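
For reference, here is a sketch of the streaming handler from the original post with only that timeout line added (same actix-web 0.7 / futures 0.1 API as above):

/// Streaming handler with the client request timeout raised to 3 minutes
fn streaming(_req: &HttpRequest) -> Box<Future<Item = HttpResponse, Error = Error>> {
    client::ClientRequest::get("https://gemmei.ftp.acc.umu.se/debian-cd/current/amd64/iso-cd/debian-9.7.0-amd64-netinst.iso")
        .timeout(std::time::Duration::new(180, 0)) // <- override the 5 second default
        .finish().unwrap()
        .send()                         // <- connect to host and send request
        .map_err(Error::from)           // <- convert SendRequestError to an Error
        .and_then(|resp| {
            // stream the client response payload straight through as the server response body
            Ok(HttpResponse::Ok()
               .body(Body::Streaming(Box::new(resp.payload().from_err()))))
        })
        .responder()
}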

Thank you @Ophirr33, that helped fix my problem, but another problem still exists.
The new problem is related to very slow connections: if someone wants to download a huge file over a very slow connection, the connection will be dropped easily. Do you have any suggestions for preventing this?

I can increase the timeout, but I'm worried about performance and slow HTTP DoS attacks.

So many thanks.

Preventing DoS in a production environment isn't something I'm qualified to give advice on, but here are a couple of thoughts:

  • you're using a back-pressured connection, so you should only be fetching data that the client is actively reading. That means the client still has to do some work to keep the connection alive
  • you could use api tokens and drop connections from accounts that are keeping too many connections open
  • you could set a minimum supported byte/sec, and create a stream adaptor of sorts to enforce that throughput on the underlying stream (see the sketch after this list)
  • where possible, support resumable downloads. If the client hits a timeout, it could retry and not lose any progress
  • set your file descriptor limit high, and actix should have no problem supporting tons of connections that aren't saturating the NIC
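
To illustrate the minimum byte/sec idea, here is a rough sketch of such a stream adaptor against actix-web 0.7's futures 0.1 Stream trait. MinThroughput, the 10-second grace period, and the 16 KiB/s threshold below are all made up for illustration; this is not a hardened implementation:

extern crate actix_web;
extern crate bytes;
extern crate futures;

use std::time::Instant;

use actix_web::{error, Error};
use bytes::Bytes;
use futures::{Async, Poll, Stream};

/// Wraps a byte stream and fails it if the average throughput drops below a minimum.
struct MinThroughput<S> {
    inner: S,
    started: Instant,
    bytes_seen: u64,
    min_bytes_per_sec: u64,
    grace_secs: u64, // don't enforce the limit during the first few seconds
}

impl<S> MinThroughput<S> {
    fn new(inner: S, min_bytes_per_sec: u64) -> Self {
        MinThroughput {
            inner,
            started: Instant::now(),
            bytes_seen: 0,
            min_bytes_per_sec,
            grace_secs: 10,
        }
    }
}

impl<S> Stream for MinThroughput<S>
where
    S: Stream<Item = Bytes, Error = Error>,
{
    type Item = Bytes;
    type Error = Error;

    fn poll(&mut self) -> Poll<Option<Bytes>, Error> {
        let elapsed = self.started.elapsed().as_secs();
        if elapsed > self.grace_secs && self.bytes_seen / elapsed < self.min_bytes_per_sec {
            // Fail the stream; a more specific status code may be preferable here.
            return Err(error::ErrorInternalServerError(
                "connection below minimum throughput, dropping it",
            ));
        }
        match self.inner.poll() {
            Ok(Async::Ready(Some(chunk))) => {
                // Count the bytes we've forwarded so far.
                self.bytes_seen += chunk.len() as u64;
                Ok(Async::Ready(Some(chunk)))
            }
            other => other,
        }
    }
}

In the streaming handler it would wrap the payload before handing it to Body::Streaming, for example:

.body(Body::Streaming(Box::new(MinThroughput::new(
    resp.payload().from_err(),
    16 * 1024, // require roughly 16 KiB/s on average (illustrative value)
))))

Note that this only checks throughput when the stream is actually polled, i.e. when the connection is writable, so a client that stops reading entirely would still need to be caught by a separate write/idle timeout.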

Even if you set the file descriptor limit high, there are still DoS attacks based on opening a large number of connections and the client never reading any data, using up unlimited numbers of file descriptors indefinitely.

I recommend some sort of minimum supported byte/sec thing as @Ophirr33 suggested.
