Is there an in-process reverse proxy for synchronous Iron?


#1

Hi! I am using the Iron web framework. It is synchronous, but this is perfectly fine by me: my Handlers don’t have any intrinsic async behavior: they just parse the request, read some data from an embedded database and write the output (i.e., they are CPU bound). The only real async behavior comes from HTTP itself: that is, if a dozen slow clients connect to my server simultaneously, they’ll saturate all the threads in the thread pool, which will sit waiting for bytes from the HTTP connections to arrive. I’d like to avoid this without making my whole server async.

My understanding is that the common solution to this problem is to set up a reverse proxy like nginx in front of my server, which would deal with slow clients in a resource-efficient, non-blocking manner, allowing my own server to spend time/threads only on processing requests that have arrived in full. However, this approach comes with some operational costs: I can’t just cargo run anymore, I need to install, configure and run nginx separately.

Is there any way to run a similar reverse proxy directly in-process? Or am I completely misunderstanding the whole thing about async, reverse proxies and stuff? :slight_smile:


#2

Well, this was fun:

extern crate futures;
extern crate hyper;
extern crate tiny_http;
extern crate tokio_core;

use std::error::Error;
use std::thread;

use futures::future;
use futures::{Future, Stream};
use hyper::{Client, StatusCode, Uri};
use hyper::client::Connect;
use hyper::server::{Http, Request, Response, Service};
use tokio_core::reactor::Core;

#[derive(Clone)]
struct ReverseProxy<T> {
    client: Client<T>,
    url: Uri,
}

impl<T: Connect + Clone> Service for ReverseProxy<T> {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = Box<Future<Item=Self::Response, Error=Self::Error>>;

    fn call(&self, _req: Request) -> Self::Future {
        // Note: this sketch ignores the incoming request entirely and
        // always issues a GET to the fixed upstream URL; a real proxy
        // would forward the method, path, headers and body.
        let resp = self.client.get(self.url.clone()).then(|r|
            match r {
                // The upstream body is discarded; reply with an empty 200.
                Ok(_) => future::ok(Response::new()),
                Err(e) => {
                    let mut resp = Response::new().with_body(format!("{}", e));
                    resp.set_status(StatusCode::InternalServerError);
                    future::ok(resp)
                },
            }
        );
        Box::new(resp)
    }
}

fn main() {
    // Spawn a blocking tiny_http backend that answers every request with 200.
    thread::spawn(move || {
        use tiny_http::{Server, Response};

        fn serve() -> Result<(), Box<Error + Send + Sync + 'static>> {
            let server = Server::http("127.0.0.1:8099")?;
            for request in server.incoming_requests() {
                request.respond(Response::empty(200))?;
            }
            Ok(())
        }

        // Backend errors are ignored in this sketch.
        let _ = serve();
    });
    match do_proxy() {
        Ok(_) => (),
        Err(e) => println!("{}", e),
    }
}

// Run an async hyper front end on :8098 that proxies to the backend on :8099.
fn do_proxy() -> Result<(), Box<Error>> {
    let mut core = Core::new()?;
    let rproxy = ReverseProxy {
        client: Client::new(&core.handle()),
        url: "http://127.0.0.1:8099/".parse()?,
    };
    let addr = "127.0.0.1:8098".parse()?;
    let server = Http::new().serve_addr_handle(&addr, &core.handle(), move || Ok(rproxy.clone()))?;
    let handle = core.handle();
    core.handle().spawn(server.map_err(|_e| ()).for_each(move |conn| {
        handle.spawn(conn.map(|_| ()).map_err(|_e| ()));
        Ok(())
    }));
    loop { core.turn(None); }
}

Now, both hyper and tiny_http create thread pools, and I haven’t tried to find out whether they step on each other’s toes. But the basic request chain seems to work.