Rust release-built web executable supports fewer requests when run standalone

hi there,

I am fairly new to Rust and just started building some actix-web apps to compare against a Java Spring Boot app. My test performs CRUD operations against a Cassandra database.
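Roughly, the app's entry point looks like the minimal sketch below (the handler, route, and port here are placeholders, and the actual Cassandra access is omitted):

use actix_web::{get, web, App, HttpResponse, HttpServer, Responder};

// Placeholder handler standing in for the real CRUD endpoints;
// the real app queries Cassandra here.
#[get("/users/{id}")]
async fn get_user(path: web::Path<String>) -> impl Responder {
    HttpResponse::Ok().body(format!("user {}", path.into_inner()))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(get_user))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}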

At the moment I am testing both apps locally on my MacBook Pro. The Spring Boot app runs with the command "java -jar the-java-app.jar" and handles 1000+ virtual users with no problem.

For the Rust app, I found something unique to Rust: if I use cargo run to start the application, it works as expected. It handles 1000+ virtual users, and the performance is slightly better than Spring Boot, which is what I expected.

However, if I run the executable under the target/release folder directly to start the app, the Rust app can only handle about 200 virtual users. If I add more load, it starts throwing "connection reset by peer" errors. I tried "cargo build --release" both with and without "--target x86_64-apple-darwin", and the result is the same.

The reason this is a concern is that when I deploy the Rust app, I will start it with just the executable, as many online posts suggest. A sample Dockerfile looks like the following:

# Build stage
FROM rust:1.63.0 AS builder
WORKDIR /app
...
RUN cargo build --release --target x86_64-unknown-linux-gnu

# Minimal runtime image containing only the compiled binary
FROM debian:bullseye-slim
COPY --from=builder /app/.../release/rustapp /usr/local/bin/rustapp
CMD ["rustapp"]

So, I have a few questions:
Will this container also have the limitation I observed on my local laptop? Is this a concern when running the app in a container on Kubernetes? If so, what configuration would fix the issue?
Am I missing something in my local settings for running the Rust app standalone? Is there a way to let the Rust app use more resources and support more load, the way "cargo run" apparently can?
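For what it's worth, the only server-side knobs I'm aware of are the worker count and the per-worker connection cap on actix-web's HttpServer. A minimal sketch is below, with arbitrary example numbers; I don't know whether either of these relates to the ~200 limit I'm seeing:

use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new())
        // actix-web defaults to one worker per logical CPU core;
        // this sets the count explicitly (8 is just an example).
        .workers(8)
        // Per-worker cap on concurrent connections (the default is roughly 25k).
        .max_connections(25_000)
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}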

Thanks,

Which throws the error? Your web server or the application you're using to test?

The web server.

That's an indication the problem lies with the test application.

I wonder if this is caused by macOS firewall rules.

I seem to recall getting a prompt to "Accept incoming connections for cargo?" when using cargo run... not entirely sure.
But this wouldn't explain why it accepts 200 connections and then rejects the rest.

I'm thinking it might be macOS handling the "constantly changing executable" in the build dir differently from the "cargo" parent executable of the working cargo run method.

Might be worth a try inside Docker, to see how macOS handles that?

danjl1100, yes, I just ran a test on my local Docker as an Ubuntu 20.04 container: no 200-VU limitation there, and no problem supporting the higher load. I also deployed it to a Kubernetes cloud, and there was no problem there either.

So the issue only occurs with the standalone compiled executable run directly on my laptop. That's acceptable, as long as running in a container doesn't have the same limitation.
