Why is my version of tokio sooo slow

hello everyone :smiling_face_with_three_hearts:

i am new to rust, and even newer to tokio-rs. i ran the tinyhttp example from tokio and benchmarked it like this

tinyhttp

$ cargo run --release --example tinyhttp
$ wrk -c100 -d10s -t`nproc` http://127.0.0.1:8081
Running 10s test @ http://127.0.0.1:8081
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   322.76us  297.86us  20.16ms   98.00%
    Req/Sec    37.68k     1.81k   47.92k    80.05%
  3025709 requests in 10.10s, 277.01MB read
  Non-2xx or 3xx responses: 3025709
Requests/sec: 299545.88
Transfer/sec:     27.42MB

as you can see from above, the number of requests is pretty high, ~299546 per second

so i tried to redo the example, starting with just a simple connection handler that sends back a "hello world" response, like this

use tokio::io::Result;
use tokio::net::TcpListener;
use tokio::stream::StreamExt; // tokio 0.2-era API: `incoming()` yields a stream of connections
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() -> Result<()> {
    let mut tcp_listener = TcpListener::bind("127.0.0.1:6431").await?;
    let mut incoming = tcp_listener.incoming();

    while let Some(Ok(mut stream)) = incoming.next().await {
        tokio::spawn(async move {
            stream.write(b"HTTP/1.1 200 OK\r\n\r\n").await.unwrap();
        });
    }

    Ok(())
}

then i ran the same benchmark, using the following commands

$ cargo run --release
$ wrk -c100 -d10s -t`nproc` http://127.0.0.1:6431/
Running 10s test @ http://127.0.0.1:6431/
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   288.62us  568.64us  37.32ms   98.29%
    Req/Sec   595.08    671.48     9.43k    94.89%
  47611 requests in 10.08s, 18.09MB read
  Socket errors: connect 0, read 998256, write 0, timeout 0
Requests/sec:   4724.41
Transfer/sec:      1.79MB

i was hoping for a higher number of requests per second (higher than tokio tinyhttp) because all i am doing is sending back a hello world, but to my surprise, it's a very low number, ~4725 requests per second.

can anyone please shed some light on why this is happening?

It's possible that the client you're using to connect to the webserver doesn't start reading the response until the server has fully read the request. Since you don't read it, the client may end up waiting around for the server.

@alice thank you for your response

i changed the code to read the request, like so

use tokio::io::Result;
use tokio::net::TcpListener;
use tokio::stream::StreamExt;
use tokio::io::AsyncWriteExt;
use tokio::io::AsyncReadExt;

#[tokio::main]
async fn main() -> Result<()> {
    let mut tcp_listener = TcpListener::bind("127.0.0.1:6431").await?;
    let mut incoming = tcp_listener.incoming();

    while let Some(Ok(mut stream)) = incoming.next().await {
        tokio::spawn(async move {
            let (mut reader, mut writer) = stream.split();
            let mut buf = [0; 8 * 1024];
            reader.read(&mut buf).await.unwrap();
            writer.write(b"HTTP/1.1 200 OK\r\n\r\n").await.unwrap();
        });
    }

    Ok(())
}

and the benchmark also changes; now it becomes Requests/sec: 32325.12. am i reading too slowly, or is there a faster way to read from the socket? i know about BufReader, but i don't think it's needed here, since the data length sent is very low (~500 bytes), so i used an 8kb buffer and called read on it once. is that right? is there a faster way to read data? is my bottleneck reading data...?


Note that the tinyhttp example serves multiple requests on a single connection. This is a lot more efficient than creating a new connection for each request, which is what your server requires.

It gets even more efficient if the requests are pipelined: the client sends the next request even before the server has responded to the first one.

I think that difference alone explains about an order of magnitude of the speed difference.

Please note that the tinyhttp example is technically incorrect here: you should not serve multiple requests on a connection unless the client asks for it (via the keep-alive header).

But since it's just a primitive example, it doesn't try to get every aspect of HTTP right.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.