Why Does Actix-web Perform So Much Better Than My Tokio Web Server?

I’ve been experimenting with Rust web frameworks and wrote a simple HTTP server using Tokio to compare its performance against Actix-web. However, my wrk benchmark results show that my code lags far behind Actix-web in throughput, latency, and stability; wrk even reports a ton of socket errors against my server. I’d love to get some insights from the community: why is there such a huge gap, and how can I optimize my implementation?

Actix-web Code and Results

use actix_web::{web, App, HttpServer, Responder};

async fn hello() -> impl Responder {
    "Hello World"
}

#[actix_web::main(worker_threads = 16)]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().route("/", web::get().to(hello)))
        .bind("127.0.0.1:3000")?
        .run()
        .await
}
wrk -t10 -c500 -d10s --latency http://localhost:3000
Running 10s test @ http://localhost:3000
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.29ms    1.60ms  27.19ms   84.58%
    Req/Sec    51.87k    11.02k  102.08k    74.80%
  Latency Distribution
     50%  386.00us
     75%    1.91ms
     90%    3.60ms
     99%    6.85ms
  5177558 requests in 10.06s, 632.03MB read
Requests/sec: 514689.08
Transfer/sec:     62.83MB

My Code and Results

use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:3000").await?;
    println!("http://localhost:3000");

    loop {
        let (stream, _) = listener.accept().await?;
        tokio::spawn(handle_connection(stream));
    }
}

async fn handle_connection(mut stream: TcpStream) -> std::io::Result<()> {
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await?;
    let response = "HTTP/1.1 200 OK\r\n\r\nHello, World!";
    stream.write_all(response.as_bytes()).await?;
    Ok(())
}
wrk -t10 -c500 -d10s --latency http://localhost:3000
Running 10s test @ http://localhost:3000
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.59ms  659.28us  14.23ms   77.72%
    Req/Sec     7.20k   344.31    13.57k    90.31%
  Latency Distribution
     50%    6.61ms
     75%    6.95ms
     90%    7.28ms
     99%    8.05ms
  717430 requests in 10.10s, 34.90MB read
Requests/sec:  71038.19
Transfer/sec:      3.46MB

How many threads does Tokio use? I see that there are 16 for Actix. Try giving Tokio at least as many.

Also, are you maybe closing the TCP connection after handling each request in the Tokio version? If the client has to do a complete TCP handshake for each request, that would slow things down considerably.
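For reference, one detail that matters for connection reuse: an HTTP/1.1 response with no Content-Length only ends when the server closes the socket, so the client cannot keep the connection alive. Here is a minimal sketch of a response that carries an explicit length (write_response is a hypothetical helper, not code from this thread):

use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

// Sketch: with an explicit Content-Length the client knows where the body
// ends and can keep the TCP connection open for the next request.
async fn write_response(stream: &mut TcpStream) -> std::io::Result<()> {
    let body = "Hello, World!";
    let response = format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    );
    stream.write_all(response.as_bytes()).await
}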


How do I reuse the TCP connection?

Run your accept loop in a spawned task.

use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:3000").await?;
    println!("http://localhost:3000");

    // Accept connections in a spawned task; each accepted connection is
    // handled in its own task (handle_connection is the same as above).
    tokio::spawn(async move {
        loop {
            let (stream, _) = listener.accept().await?;
            tokio::spawn(handle_connection(stream));
        }
    }).await.unwrap()
}

I cannot replicate the difference. In both cases I get the same throughput. Which features have you enabled in Tokio? In my case I run with features = ["full"] in Cargo.toml.


I ran the code you gave, but the result is still not as good as Actix-web.

use tokio::net::{TcpListener, TcpStream};

#[tokio::main(flavor = "multi_thread", worker_threads = 16)]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:3000").await?;
    println!("http://localhost:3000");

    tokio::spawn(async move {
        loop {
            let (stream, _) = listener.accept().await.unwrap();
            tokio::spawn(handle_connection(stream));
        }
    })
    .await
    .unwrap();

    Ok(())
    // loop {
    //     let (stream, _) = listener.accept().await?;
    //     tokio::spawn(handle_connection(stream));
    // }
}

async fn handle_connection(mut stream: TcpStream) -> std::io::Result<()> {
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await?;
    let response = "HTTP/1.1 200 OK\r\n\r\nHello, World!";
    stream.write_all(response.as_bytes()).await?;
    Ok(())
}

I got:

wrk -t10 -c500 -d10s --latency http://localhost:3000
Running 10s test @ http://localhost:3000
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.29ms  506.55us  15.13ms   77.90%
    Req/Sec     7.65k   292.04    14.31k    93.21%
  Latency Distribution
     50%    6.31ms
     75%    6.57ms
     90%    6.82ms
     99%    7.34ms
  762430 requests in 10.10s, 23.27MB read
  Socket errors: connect 0, read 762430, write 0, timeout 0
Requests/sec:  75484.00
Transfer/sec:      2.30MB

I use features = ["full"], too.

[package]
name = "hello"
version = "0.1.0"
edition = "2021"


[dev-dependencies]
criterion = "0.5.1"

[[bench]]
name = "benchmark"
harness = false

[dependencies]
actix-web = "4.9.0"
futures = "0.3.31"
reqwest = "0.12.12"
tokio = { version = "1.43.0", features = ["full"] }

I guess actix-web does not read the request data or allocate a buffer on every request the way your Tokio example does.


Thank you very much. Making the TCP connection keep-alive was indeed the fix; it felt fantastic when I finally worked out the problem.

Keeping the TCP connection alive is the solution. Thanks for your code.
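For anyone who finds this thread later, here is a rough sketch of what the keep-alive version of the handler can look like (not my exact final code). The parsing is deliberately naive, one read() per request, but it shows the two pieces that let wrk reuse the socket: a per-connection loop and a Content-Length header on every response.

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

// Keep-alive sketch: serve many requests on one connection instead of
// dropping the stream (and closing the socket) after the first response.
async fn handle_connection(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buffer = [0u8; 1024];
    loop {
        // read() returning 0 means the client closed the connection.
        let n = stream.read(&mut buffer).await?;
        if n == 0 {
            return Ok(());
        }
        let body = "Hello, World!";
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: keep-alive\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes()).await?;
    }
}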
