Warp: allow certificate-less access on certain port

Hi, I'm using the Rust warp web server framework to build a server that is accessed by two kinds of clients: 1. clients outside the physical machine the server runs on, which hold valid certificates to authenticate themselves, and 2. clients without any certificates that run on the same physical machine and should be able to access the server on one specific port (which will be secured through a firewall later on). How do I implement this distinction between the two kinds of clients? Any ideas/links to the documentation?

You should be able to use a filter to check whether the client is connecting from 127.0.0.1 (or the IPv6 equivalent, ::1).

Thanks for the answer. As far as I understand the project I am working on (I am new to warp as well), I don't have to differentiate between clients' actual addresses (i.e. localhost vs. other); I should make the distinction based on the port the client connects on. I think warp listens on only one port by default; would it be possible to listen on a second port as well?

You could spawn two servers, one listening on the port for secure outside access and one listening on the port for insecure access without a certificate. Pseudo-code:

#[tokio::main]
async fn main() {
    let server_secure = warp::serve(/* service */)
        .run(([0, 0, 0, 0], 443));

    let server_insecure = warp::serve(/* service */)
        .run(([0, 0, 0, 0], 80));

    let _ = tokio::join!(
        tokio::task::spawn(server_secure),
        tokio::task::spawn(server_insecure),
    );
}

So you would configure the certificates like so:

#[tokio::main]
async fn main() {
    let server_secure = warp::serve(/* service */)
        .tls()
        .client_auth_required_path(config.client_ca_path)
        .cert_path(config.cert_path)
        .key_path(config.key_path)
        .run(([0, 0, 0, 0], 443));

    let server_insecure = warp::serve(/* service */)
        .run(([0, 0, 0, 0], 80));

    let _ = tokio::join!(
        tokio::task::spawn(server_secure),
        tokio::task::spawn(server_insecure),
    );
}

Does this also make sense when the service provided via /* service */ is the same for both the secure and the insecure version? It's just about serving the same "service" on two different ports (and the ports will be protected outside of the Rust project), so spawning two servers seems a little like overkill.

Building on the suggestion from @jofas, you could start server_secure as a reverse proxy and pass all requests on to server_insecure, which listens only for local connections.


The readme for the warp reverse proxy extension only details how to forward each individual request (you always have to add something like .and(reverse_proxy_filter("".to_string(), "http://127.0.0.1:8080/".to_string())) to your filters). Is it possible to simply forward all requests?

You can forward all requests using warp::any(), so something like:

// unsecured server handles application logic
let handler1 = warp::path!(...).map(...);
let handler2 = warp::path!(...).map(...);
// etc.

// spawn unsecured server
tokio::spawn(
    warp::serve(
        handler1
        .or(handler2)
        .or(...)
    ).run(([127, 0, 0, 1], 8080)));

// Forward request to localhost on other port
let app = warp::any()
            .and(
                reverse_proxy_filter(
                    "".to_string(),
                    "http://127.0.0.1:8080/".to_string()
                )
                .and_then(log_response),
            );
// spawn proxy server
warp::serve(app).run(([0, 0, 0, 0], 3030)).await;

I am fairly new to futures and tokio in Rust. What is the difference between these three versions, which all seem to compile and do the same thing? Note: I definitely care about performance, and my naive thought is that the async runtime (tokio here) should know whether the underlying machine has multiple cores and should split tasks across multiple threads, especially since we have long-lived tasks here. Is it reasonable to expect tokio to do that, and if so, which of the solutions below does it? Maybe I am also mistaken in my assumption that letting the async runtime put each of the two servers on a separate thread would be the most performant version; maybe running them concurrently on a single thread is actually better, since, given that there isn't constant traffic, I assume the async functions using warp will be "sleeping"/idle most of the time.
So here are the different versions I came up with:

  1. This is the version @jofas sent. I am using tokio as an async runtime (I think that's what it's called), but I am still curious why you used tokio::join! instead of more standard solutions that aren't coupled to tokio. Is this version maybe more performant than the versions below? What do the spawns do?
        let server_secure = warp::serve(routes.clone())
            .tls()
            // .client_auth_required_path(config.client_ca_path)
            .cert_path(config.cert_path)
            .key_path(config.key_path)
            .run(socket_secure);

        let server_insecure = warp::serve(routes).run(socket_insecure);

        let _ = tokio::join!(
            tokio::task::spawn(server_secure),
            tokio::task::spawn(server_insecure),
        );
  2. If performance were equal to the above, I'd say this is a little cleaner, as it doesn't rely on tokio and is also nice and compact. But please do correct me if I'm wrong and this actually differs from the above in more than syntax.
        let server_secure = warp::serve(routes.clone())
            .tls()
            // .client_auth_required_path(config.client_ca_path)
            .cert_path(config.cert_path)
            .key_path(config.key_path)
            .run(socket_secure);

        let server_insecure = warp::serve(routes).run(socket_insecure);

        futures::join!(server_secure, server_insecure);
  3. I am not quite sure what the difference between the join! macro and this join function is, but it seems similar to version 2. Again, please enlighten me about any differences from the other two above.
        let server_secure = warp::serve(routes.clone())
            .tls()
            // .client_auth_required_path(config.client_ca_path)
            .cert_path(config.cert_path)
            .key_path(config.key_path)
            .run(socket_secure);

        let server_insecure = warp::serve(routes).run(socket_insecure);

        futures::future::join(server_secure, server_insecure).await;

Also, a dumb question: why do I not have to use .await in any of these? I thought that was the centrepiece of async in Rust.
Looking forward to your answers! Thanks!

No particular reason. I just like tokio :slightly_smiling_face:.

The spawns are necessary to create asynchronous tasks. From the docs:

Spawning a task enables the task to execute concurrently to other tasks. The spawned task may execute on the current thread, or it may be sent to a different thread to be executed. The specifics depend on the current Runtime configuration.

Just joining your futures and awaiting the result, like you do in versions two and three, will cause both servers to be polled on the current thread. From the tokio::main docs:

Note that the async function marked with this macro does not run as a worker. The expectation is that other tasks are spawned by the function here. Awaiting on other futures from the function provided here will not perform as fast as those spawned as workers.

As for why I chose to spawn a new task for each server to run in: I just assumed it would be faster than running both servers from the main function directly. Under the hood, the server will spawn a new task for each incoming connection anyway; the long-lived future is just about listening for new connections. (I think; maybe some more work needs to be done when a new connection is opened before a task is spawned to handle it, but I don't think so, as you want to get back to listening for new connections as fast as possible.) I actually have no idea whether this will be faster or not. I would personally only trust a solid benchmark or stress test (as close to the real workload as possible) to tell me which is more performant.

You do have to in the last one. The two join! macros poll the futures directly, so you don't have to await the futures explicitly.

I wouldn't say .await is the centrepiece of async in Rust. It's just very convenient for us users to be able to call .await instead of having to worry about polling our futures by hand. I'd say executors like tokio are the centrepiece of async in Rust. Maybe you'll find this section from the tokio docs an interesting read?


Ok, thanks for the long reply! I'm not sure if this depends on the tokio runtime configuration, but if I had multiple threads/cores available, would the tokio spawns use one thread each, as opposed to versions 2 and 3, which only poll the futures on one thread?

You don't have control over how tokio's runtime utilizes the worker threads it spawns when you use the multithreaded runtime. But yes, the scheduler would utilize two threads of the N threads you have available (tokio's runtime is a work-stealing scheduler, so one idle worker thread would try to steal tasks from another, busy worker thread). Now what goes on under the hood of warp/hyper, I'm not so sure. Maybe they spawn the listener in a task, maybe not. I don't know, so I made sure the listeners are executed as tasks by doing so explicitly.


Thanks for the detailed explanation on the differences between the tokio spawn version and the join await versions.
I have one more question about the 2 join await versions:
Is there a semantic difference between the two versions of join (macro and function)? I also found the join_all function, which looks similar to join!, except that it takes and returns a list instead of a tuple. join! claims that it "polls both futures concurrently and therefore is more efficient". However, the join function also seems to "join the results of two futures". Either I don't know enough about async Rust yet to understand the subtle difference here, or there is no difference and the two are just syntactic sugar for the same semantics. I am leaning towards the join! macro, but I'm not sure it makes a difference.

No, they look pretty much the same to me (source of join!) and (source of join).

More efficient than calling (a.await, b.await), which will poll b only after a is ready and is therefore less efficient; not more efficient than futures::future::join.


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.