Significance of workers in an actix-web server

I have a standard actix-web server that aims to log the incoming request to a file, process it, and then log the response to another file before sending it out to the client. I tried to "benchmark" it by sending 50,000 requests using curl in a shell script. However, I noticed that no matter how many workers I set for the server (1, 2, 4, 6, 8), the whole thing always takes roughly the same amount of time. Am I "benchmarking" it wrong, or do I have a faulty understanding of what workers are?
Please note that I am not running the server on the actix-web runtime but on tokio's. I do not specify the number of worker threads for tokio. However, even if I do explicitly set the number of worker threads for the runtime, I don't see any improvement. What am I missing?

Edit: when I increase the number of workers, I do take care to put the file behind a Mutex. Something else is bothering me here: I did not see a difference in performance between std::sync::Mutex and tokio::sync::Mutex.

Generally, when you use async/await, you can execute many things on a single thread, so increasing the number of workers is not guaranteed to help. It's hard to say more based on what you've said here.

Some of the things you say make me worried about blocking the thread. Please read this article: Async: What is blocking?

Actix-web does not really have its own runtime. The "actix-web runtime" is just several single-threaded Tokio runtimes, one for each thread you want to use.


Thanks for taking the time out. Apologies if the question sounds lacking. Here's a close representation of the binary that I am working with:
```rust
#[tokio::main]
async fn main() {
    tokio::spawn(async move {
        actix_web_server(); // 1 worker, logs request to file2, doesn't block while processing, logs response before sending it
    });
}
```


The answer does go a couple steps towards clarifying a few things though. Thanks again.

If, instead of directly writing to the file, you have a channel where each actix worker sends the log message to the channel, and a single task owns the file and loops writing the messages from the channel to the file, do you see more changes as you change the number of workers?

Without looking at your code, it's impossible to be confident about what's happening - but what you're describing sounds like a situation where the time taken per worker is dominated by synchronizing with other workers (via a Mutex or otherwise), and thus you're not getting the benefits of concurrency. By changing things around so that you remove the need for a Mutex completely, you may find the code works better.


Thanks for taking the time out. Unfortunately, I can't post the code here. However, sending the data down a channel was indeed going to be my next step. Your answer validates the direction my code is heading in. I will post the results once I am done.

So, it turns out that my benchmarking script had a fault. Increasing the number of workers does lead to better performance, but my script was sending requests serially rather than in parallel; hence the lack of improvement. Also, I found that the channel-based approach was ~20% more performant than the lock-based approach. Thanks a lot for your time, help, and insight, even with the lack of info at hand!


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.