Rust binary works fine inside Docker container, but not reachable outside

I'm working on a Windows 11 Machine with Docker Desktop.

I made a simple Rust app that exposes an API on "http://localhost:8000/".

I run the container while making sure to bind port 8000 of the host to the container's port 8000, where the API is exposed:

docker run -p 8000:8000 --rm my-rust-app

When I go inside that container to curl the API, I get the expected response:

// 9ca37d3529c9 is the container ID
> docker exec -it 9ca37d3529c9 /bin/bash

root@9ca37d3529c9 > ls
basic_rust_app

root@9ca37d3529c9 > curl http://localhost:8000/
Hello world!

So far, everything works fine.

I think Docker on Windows runs in a VM. So I tried accessing the Rust app from the IP range used by the Docker subnet: 192.168.65.0/24.
This range can be found in Docker Desktop under Settings > Resources > Network.

In my web browser, I tried to reach the URL http://192.168.65.2:8000/, but I keep getting timeout errors.

Did I forget something here?

Edit:
I'm not sure the Dockerfile will be helpful, but here it is:

# Use a Rust base image with Cargo installed
FROM rust:1.78.0 AS builder

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy the Cargo.toml and Cargo.lock files
COPY Cargo.toml Cargo.lock ./

# Now copy the source code
COPY ./src ./src

# Build your application
RUN cargo build --release

# Start a new stage to create a smaller image without unnecessary build dependencies
FROM debian:bookworm-slim

# Set the working directory
WORKDIR /usr/src/app

# Copy the built binary from the previous stage
COPY --from=builder /usr/src/app/target/release/basic_rust_app ./

# Expose any necessary ports
EXPOSE 8000

# Command to run the application
CMD ["./basic_rust_app"]

You'd need to share your Rust code, but it's likely you're encountering an issue where you're binding the port only on the loopback interface.

That works from localhost (inside the container) but not externally, even if you forward the ports, because forwarded traffic arrives on the container's external interface rather than on loopback.

The solution is to make sure you are binding to an external interface. I don't know what library you're using, but for axum it might look something like:

let listener = tokio::net::TcpListener::bind("0.0.0.0:8000").await.unwrap();
axum::serve(listener, router).await.unwrap();

Here the important part is the 0.0.0.0. It's a special "unspecified" address that doesn't refer to any single interface; used as a bind address, it means your application should listen on that port, 8000, on all interfaces. In your code you'll have to change 127.0.0.1 to 0.0.0.0.
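
If it helps, here's what the distinction looks like without any web framework at all -- a minimal sketch using only std::net, purely for illustration (not your actual code):

use std::io::Write;
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Bound to 127.0.0.1, the socket only accepts connections arriving on the
    // loopback interface -- i.e. from inside the container itself.
    // let listener = TcpListener::bind("127.0.0.1:8000")?;

    // Bound to 0.0.0.0, it accepts connections on every interface, including
    // the one Docker's port forwarding delivers traffic to.
    let listener = TcpListener::bind("0.0.0.0:8000")?;

    for stream in listener.incoming() {
        // Reply with a fixed minimal HTTP response so curl gets something back.
        stream?.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 12\r\n\r\nHello world!")?;
    }
    Ok(())
}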

Hi richardscollin,
Thanks for your feedback.

I didn't expect to have to listen on that port on all interfaces. I did it and it worked.
I just had to replace 127.0.0.1 with 0.0.0.0 as the bind address.

Here is my new code :

// main.rs
use actix_web::{App, HttpServer, middleware, web};

mod db;
mod services;
mod tests;


#[actix_web::main]
async fn main() -> std::io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

    log::info!("starting HTTP server at http://0.0.0.0:8000");

    HttpServer::new(|| {
        App::new()
            // enable logger
            .wrap(middleware::Logger::default())
            .service(web::resource("/index.html").to(|| async { "Hello world!" }))
            .service(web::resource("/").to(services::data_processing::index))
                .service(web::resource("/hello/{name}").to(services::data_processing::greet))
    })
        .bind(("0.0.0.0", 8000))?
        .run()
        .await
}

// module services
use actix_web::{web, Responder, HttpRequest};

//#[get("/hello/{name}")]
pub(crate) async fn greet(name: web::Path<String>) -> impl Responder {
    println!("request received");
    format!("Hello {name}!")
}

pub(crate) async fn index(req: HttpRequest) -> &'static str {
    println!("REQ: {req:?}");
    "Hello world!"
}
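
Since I also declared a tests module, here's a rough sketch of how the greet handler can be exercised with actix-web's test utilities (the test name and assertion are just illustrative) -- no bind address is involved there at all:

// module tests (sketch only)
#![cfg(test)]

use actix_web::{test, web, App};

use crate::services::data_processing::greet;

#[actix_web::test]
async fn greet_returns_hello_name() {
    // Build the app in-process; no TCP port (and so no bind address) is involved.
    let app = test::init_service(
        App::new().service(web::resource("/hello/{name}").to(greet)),
    )
    .await;

    let req = test::TestRequest::get().uri("/hello/Ferris").to_request();
    let body = test::call_and_read_body(&app, req).await;
    assert_eq!(body, web::Bytes::from_static(b"Hello Ferris!"));
}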

The thing is, I still don't understand why I had to listen on all interfaces.
When I containerized Angular applications in the past, I'm not sure I had to think about internal or external ports. I did the port binding and it just worked.

You mentioned an "external interface"; can you tell me more about that?
I feel like I'm misunderstanding the documentation.

Docker on Windows runs in a VM -- traffic from outside that VM will not arrive on the loopback interface inside it. This includes traffic from the host OS (Windows). You therefore must bind to the interface which bridges (or otherwise connects) the host OS to the guest VM -- and since you don't know the precise IP address to bind to, the simplest option is to use "any IPv4 address", also known as "0.0.0.0".

NB: I have not actually used Docker for Windows, but this is how Docker for macOS operates, and I suspect it's a strikingly similar arrangement.
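
One thing that can make this less surprising day to day is to make the bind address configurable instead of hard-coding it. Building on the actix-web code you posted, a rough sketch (the BIND_ADDR variable name is just something I made up):

use std::env;

use actix_web::{web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Hypothetical BIND_ADDR variable: default to all interfaces so the app is
    // reachable through Docker's port forwarding, but allow overriding it.
    let addr = env::var("BIND_ADDR").unwrap_or_else(|_| "0.0.0.0".to_string());

    HttpServer::new(|| {
        App::new().service(web::resource("/").to(|| async { "Hello world!" }))
    })
    .bind((addr.as_str(), 8000))?
    .run()
    .await
}

Inside the container you'd rely on the 0.0.0.0 default (or set BIND_ADDR explicitly with ENV in the Dockerfile), and when running cargo run directly on your machine you could set BIND_ADDR=127.0.0.1.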

Thanks for taking the time to share this.

I usually did development on Ubuntu until now.
Developing on Windows is so different.

It feels like I have to relearn stuff that was obvious to me.

Yeah, the added VM complicates things, conceptually.

However, it would work the same way if you were running a guest VM on a Linux host -- the difference is that there you don't have to, since Docker-managed containers execute natively on a Linux host and therefore don't need to run inside a nested virtual machine.

Essentially: docker runs containers on a Linux host. If you're already in Linux, great. If you're not, it has to run them somewhere else: in Windows and macOS, this somewhere else is a hidden-from-plain-sight VM that's automatically managed by Docker.
