Actix CORS permissions: remote server

I am trying to deploy a Rust Actix server on an AWS EC2 instance (with a fixed Elastic IP) and access it from (any) remote client. I ensured the EC2 instance was correctly configured to allow access to the relevant port (8080) and confirmed this by running a small Python/Flask API test on port 8080. This worked well: I could access the API endpoint, using curl, from both the AWS EC2 server (localhost) and from my home device.

When I run an Actix server instead of the Python/Flask server, with CORS set to permissive, I can query the endpoint from localhost (i.e. the EC2 server), but when I try to access it from a remote machine (my home device) there is no response (curl error 52: Empty reply from server).

I am pretty sure this must be related to CORS, but I thought permissive was supposed to allow everything. I also tried allowing specific origins (such as my home device) with the same outcome, and likewise for allow_any_origin(). It's kind of baffling; I hope there is a simple explanation, something I am missing.
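Roughly, the variants I tried look like this (just a sketch; the origin string is a placeholder, not my actual address):

use actix_cors::Cors;

// Sketch of the three CORS setups tried; the origin is a placeholder.
fn cors_variants() {
    let _permissive = Cors::permissive();                  // supposedly allows everything
    let _specific = Cors::default()
        .allowed_origin("http://my-home-ip.example:8080")  // placeholder for my home device
        .allowed_methods(vec!["GET", "POST"]);
    let _any_origin = Cors::default().allow_any_origin();
}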

My Cargo.toml dependencies are:

actix-web = "4"
actix-cors = "0.6.4"

My test code (based on the Actix getting started example) with CORS setup:

use actix_web::{get, post, web, App, HttpResponse, HttpServer, Responder};
use actix_cors::Cors;

#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[post("/echo")]
async fn echo(req_body: String) -> impl Responder {
    HttpResponse::Ok().body(req_body)
}

async fn manual_hello() -> impl Responder {
    HttpResponse::Ok().body("Hey there!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {

        let cors = Cors::permissive();

        App::new()
            .wrap(cors)
            .service(hello)
            .service(echo)
            .route("/hey", web::get().to(manual_hello))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

Why? AFAIK curl does not care about CORS unless you explicitly set the Origin header. Have you set it and received the error, and gotten a normal reply without it?

OK, I may well be wrong about CORS being the cause; I am completely guessing. I did not set the Origin header; I just use curl http://localhost:8080 when I am on the EC2 server and curl http://<elastic-ip>:8080 when I am on my local machine ...

The other thing that is throwing me a bit is that I set up logging using env_logger and set the logging level to "trace". I was expecting to see some message saying that the incoming request had been ignored/blocked for some reason, but I do not. There is no sign that it even receives the request from a remote origin.

However, the fact that the Python/Flask server works and the Rust/Actix server doesn't makes me pretty sure the problem must be with the Actix server ...

Actix uses tracing, not log, so env_logger won't log anything. Try using tracing-subscriber instead of env_logger:
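For example, something along these lines (a minimal sketch, assuming tracing-subscriber 0.3 with the "env-filter" feature):

use tracing_subscriber::EnvFilter;

// Install a tracing-subscriber fmt subscriber; the filter honours RUST_LOG,
// e.g. RUST_LOG=trace.
fn init_tracing() {
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();
}

and call init_tracing() at the start of main, before starting the HttpServer.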

Would be helpful if you could share what you find in the trace.

Thanks: I got the tracing working and could see a lot of events when pinging it from localhost. But there is no trace output at all when I use curl from a remote machine. This seems to imply that the Actix server is not getting the call and something else is blocking it. But in that case, I can't explain why Python/Flask works.

Here is the trace for the successful call from localhost

[2023-05-07T09:38:07Z INFO  actix_server::builder] starting 1 workers
[2023-05-07T09:38:07Z INFO  actix_server::server] Actix runtime found; starting in Actix runtime
[2023-05-07T09:38:07Z TRACE actix_server::worker] starting server worker 0
[2023-05-07T09:38:07Z TRACE mio::poll] registering event source with poller: token=Token(2147483649), interests=READABLE
[2023-05-07T09:38:07Z TRACE actix_web::middleware::logger] Access log format: %a "%r" %s %b "%{Referer}i" "%{User-Agent}i" %T
[2023-05-07T09:38:07Z TRACE actix_web::middleware::logger] Access log format: %a %{User-Agent}i
[2023-05-07T09:38:07Z TRACE mio::poll] registering event source with poller: token=Token(0), interests=READABLE
[2023-05-07T09:38:07Z TRACE actix_server::signals] setting up OS signal listener
[2023-05-07T09:38:07Z TRACE actix_server::worker] service "actix-web-service-127.0.0.1:8080" is available
[2023-05-07T09:38:23Z TRACE mio::poll] registering event source with poller: token=Token(0), interests=READABLE | WRITABLE
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start flags: (empty)
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start timers:
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   head timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   keep-alive timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   shutdown timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] end timers:
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   head timer is active and due to expire in 4521.413 milliseconds
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   keep-alive timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   shutdown timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] end flags: STARTED
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start flags: STARTED
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start timers:
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   head timer is active and due to expire in 4521.288 milliseconds
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   keep-alive timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   shutdown timer is inactive
[2023-05-07T09:38:23Z INFO  actix_web::middleware::logger] 127.0.0.1 curl/7.88.1
[2023-05-07T09:38:23Z INFO  actix_web::middleware::logger] 127.0.0.1 "GET /hey HTTP/1.1" 200 10 "-" "curl/7.88.1" 0.000129
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] end timers:
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   head timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   keep-alive timer is active and due to expire in 4520.8423 milliseconds
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   shutdown timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] end flags: STARTED | FINISHED | KEEP_ALIVE
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start flags: STARTED | FINISHED | KEEP_ALIVE
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start timers:
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   head timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   keep-alive timer is active and due to expire in 4520.203 milliseconds
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   shutdown timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] read half closed; start shutdown
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start flags: STARTED | KEEP_ALIVE | SHUTDOWN | READ_DISCONNECT
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] start timers:
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   head timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   keep-alive timer is active and due to expire in 4520.1113 milliseconds
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher]   shutdown timer is inactive
[2023-05-07T09:38:23Z TRACE actix_http::h1::dispatcher] end flags: STARTED | KEEP_ALIVE | SHUTDOWN | READ_DISCONNECT
[2023-05-07T09:38:23Z TRACE mio::poll] deregistering event source from poller

Unfortunately I'm not familiar with AWS. Maybe you can find some information in your EC2 instance's logs, or in whatever AWS calls its load-balancing/proxying service. Maybe the forwarding mechanism does something unexpected that Flask handles out of the box but Actix just doesn't know what to do with.

Yes, I think you are right. Thanks for your guidance on the logging.

I also tried a Rust Rocket server: it has the same problem as Actix. Sure is a strange one.

I see in that log the phrase actix-web-service-127.0.0.1:8080, which implies the server is binding to 127.0.0.1 (i.e. localhost only). Have you checked that you are binding to 0.0.0.0 (all interfaces) instead?
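That is, roughly this change in your main (a sketch based on your posted code):

HttpServer::new(|| {
    App::new()
        .wrap(Cors::permissive())
        .service(hello)
        .service(echo)
        .route("/hey", web::get().to(manual_hello))
})
.bind(("0.0.0.0", 8080))? // listen on all interfaces, not just the loopback address
.run()
.await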

Thank you! That was it.

You're not the first person to write 127.0.0.1 in Rust but not in Python, which makes me wonder why it keeps coming up. I wonder if the examples use 127.0.0.1 and could maybe be changed.

Yes, it's kind of embarrassing that I missed something so obvious ... Out of interest, I took a look at the Flask quickstart guide, and it does a pretty good job of highlighting the 0.0.0.0 thing pretty much on page 1, under the heading "Externally Visible Server". So I imagine it is something that came up frequently enough there too:
https://flask.palletsprojects.com/en/2.3.x/quickstart/
