Rust Web Hosting

So I've created a web app that runs on Actix and I'm looking to start hosting it. I didn't know this, but it seems most services assume you're using a MySQL database with PHP on the server side. This is a problem for me because I use Redis as my database. Is it not possible to run Rust & Redis on shared hosting platforms? Is my only option to get root access on a VPS and run Rust & Redis there? How does that work; do I just deploy the same code I would use to launch locally? I'm new to the whole hosting thing and to transitioning from localhost to a remote server, so I would appreciate any tips. I'm currently eyeing Hostwinds because they have pretty good pricing for VPS hosting and scale quite well. Thanks!


What you are probably searching for is either (as you said) working directly on a VPS, or a so-called Platform as a Service (PaaS).

The thing with a VPS is: while you often want more flexibility for your web server code, databases and caches should just work.

A PaaS will manage those resources (databases, ...) for you; you then provide a Docker container, or they have language-specific buildpacks or scripts to support your programming language.

One working example for many use cases is Heroku, where you can find a Rust buildpack (crates.io runs on Heroku, for example). Here is an example app for Actix on Heroku.

Other examples I know of are

If you want to stay small, DigitalOcean also offers hosted databases and hosted object storage that you can easily use with their droplets


I just get a Debian Linux instance from DigitalOcean and install everything I want on there, the same way I would on a local Linux PC.

One day I should catch up with all the newfangled Cloud stuff.


One day I should catch up with all the newfangled Cloud stuff.

:smile:

If you've never needed it, that's fine too.
The need usually comes with bigger production systems, when you start thinking about where you want to spend your time (infrastructure or features).


Google Cloud Platform (GCP) and Amazon Web Services (AWS) both offer hosted Redis. Both also offer scale-to-zero serverless hosting on top of docker with free tiers.

Hosted Redis probably isn't the cheapest option but it's probably the easiest.

Search the internet for "aws redis", "gcp redis", "aws lambda", and "gcp cloud run" for more info.

Also, make sure to run the numbers before you commit; if you have the time and energy, a $5 VPS like Linode might fit your needs better.

If I go down the path of VPS hosting, how would that affect the server-side code? Right now with Actix it's obviously bound to "127.0.0.1:8088". Would I just change the address it's bound to? DigitalOcean's droplets are interesting, although beyond their basic droplets they don't seem to scale quite as well cost-wise.


I'd suggest looking at Opalstack for shared hosting. I've successfully hosted a small Actix project on it. I haven't tried Redis, but I believe they support it.

If you run on a VPS you get a whole computer, and it will work the same as on your own machine, so the differences from local are very small.

If you go with a cloud platform or hosted Redis, you would connect to whatever the provider tells you to.
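Concretely, here is a minimal sketch of what usually changes when moving off localhost. This assumes actix-web plus the redis crate; the BIND_ADDR/REDIS_URL names and the 0.0.0.0 bind are just a common convention I'm assuming, not anything specific to your app or a particular provider:

use actix_web::{web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // On a VPS, bind to 0.0.0.0 instead of 127.0.0.1 so the server is reachable
    // from outside (or keep 127.0.0.1 and put nginx in front as a reverse proxy).
    let bind_addr = std::env::var("BIND_ADDR").unwrap_or_else(|_| "0.0.0.0:8088".to_string());

    // With hosted Redis, the provider hands you a connection URL; reading it from
    // the environment keeps the code identical between localhost and the server.
    let redis_url = std::env::var("REDIS_URL").unwrap_or_else(|_| "redis://127.0.0.1/".to_string());
    let redis = redis::Client::open(redis_url.as_str()).expect("invalid REDIS_URL");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(redis.clone()))
            .route("/", web::get().to(|| async { HttpResponse::Ok().body("hello") }))
    })
    .bind(bind_addr)?
    .run()
    .await
}

Keeping those values in environment variables means the binary you deploy is the same one you run locally; only the configuration changes.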

Linode seems promising, but I'm a little concerned about their Network Out stat. Is that the amount of data the host can send to clients? In that case the bandwidth is very small compared to other providers.

1 Gb/second and 1TB transfer/month seems pretty decent to me for $5.

What amount of traffic are you expecting?

Actually, I just ran the numbers again and maybe it's fine. It's just that my JavaScript ranges from 100kb to 800kb depending on the subdomain. So assuming visitors hit the site with 800kb of JavaScript, I would only be able to serve ~5000 people per second on the $20 plan. That's a lot of people, but it's less than what the next limiting factor could support. I'm just trying to think ahead for when the site scales later on.

Edit: I just ran the numbers again; assuming a person changes pages about every 60 seconds, that's basically going to support 300,000 people, which is a ton of traffic.

Those are called luxury problems :slight_smile: At that point you might want to think about load balancing, autoscaling, and so on.

Also, wouldn't you benefit from a CDN at that point? Unless your js is unique per user.

If you have 5000 unique users per second, that's 18_000_000 unique users per hour.

At that point you probably (hopefully?) have a very successful business and have moved on to a hosted solution, or have a team of engineers.

But then again, I don't have any idea what you are building, could be a free service that you provide out of the goodness of your heart. In that case you definitely should be weighing all your options!

NOTE: saw your edit while writing, you seem to be on top of the calculations.

Anyway, good luck!

I run my things on a VPS I got at OVH, and it works very well for my purposes.


Thanks for the resources guys; I think I'm going to use Cloud VPS from Linode. Seems very affordable and scales very well.


Hi,

Clever Cloud may be helpful and is very easy to set up (if you're still evaluating the alternatives).

Hey,
I recommend containerizing your Rust application with Docker; then it will literally take less than an hour to switch between providers as your requirements evolve.

Regarding the provider, I have had good success with https://www.scaleway.com and https://heroku.com.

I greatly recommend the latter because it takes care of the ops for you, but it may not be the cheapest option if your database is Redis (their Redis plans are pretty expensive, but still worth the price given the huge amount of time you save on the ops side).

This is something we should be looking into, sooner rather than later.

Do you have any pointers for getting started with dockerizing Rust? We have a bunch of services in Rust tied together with CockroachDB (Postgres would do) and the NATS messaging system.

The easiest and most pleasant experience I have had is with Heroku. I have deployed a Rust Actix app with Redis and Postgres on Heroku.

My Procfile looks like this:

web: ./target/release/myapp

My Dockerfile looks like this (substitute myapp with the name of your application):

FROM rust:1.44.1 as base
ENV PKG_CONFIG_ALLOW_CROSS=1

WORKDIR /usr/src/myapp
COPY . .

# Build the release binary inside the full Rust image
RUN cargo build --release

# Minimal distroless runtime image: no shell or package manager, just the runtime libraries the binary needs
FROM gcr.io/distroless/cc-debian10 as build

COPY --from=base /usr/src/myapp/target/release/myapp /usr/bin/myapp

EXPOSE 8080

CMD ["myapp"]

And in your main file, it is important that you use the PORT environment variable.
It is set by Heroku.

let port = std::env::var("PORT").unwrap();

HttpServer::new(move || {
    App::new()
        .wrap(Logger::default())
        .app_data(templates.clone())
        .app_data(db_clone.clone())
        .route("/", web::get().to(controller::index))
        .route("/health", web::get().to(controller::health_check))
        .service(actix_files::Files::new("/static", "static/"))
})
.bind("0.0.0.0:".to_owned() + &port)?
.run()
.await

My repo is not yet open source, but here is the Dockerfile I use.
I build the project as a static binary with jemalloc (because musl's default allocator is reputed to be slow) and then copy it into a scratch image for minimal size and attack surface (a sketch of how jemalloc gets wired into the crate itself follows after the Dockerfile):

####################################################################################################
## Build server
####################################################################################################
FROM rust:latest AS builder_rust

RUN rustup update
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools
RUN update-ca-certificates

# Create appuser
ENV USER=bloom
ENV UID=10001

# See https://stackoverflow.com/a/55757473/12429735
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/nonexistent" \
    --shell "/sbin/nologin" \
    --no-create-home \
    --uid "${UID}" \
    "${USER}"


WORKDIR /bloom

COPY ./ .
WORKDIR /bloom/bloom

RUN make build_static # equivalent to cargo build --target x86_64-unknown-linux-musl --release

####################################################################################################
## Final image
####################################################################################################
FROM scratch

# Import from builder.
COPY --from=builder_rust /etc/passwd /etc/passwd
COPY --from=builder_rust /etc/group /etc/group

WORKDIR /bloom

# Copy our builds
COPY --from=builder_rust /bloom/bloom/dist/ ./

# Use an unprivileged user.
USER bloom:bloom

EXPOSE 8080 8443
CMD ["/bloom/bloom", "server"]
# If some crashes or slowness are noticed when running the static rust binary with musl and Jemalloc
# see here: https://andygrove.io/2020/05/why-musl-extremely-slow/
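The jemalloc part isn't visible in the Dockerfile itself; it lives in the crate. A minimal sketch of how that is commonly wired up (the tikv-jemallocator crate and version here are my assumption, not taken from the repo above):

# Cargo.toml
[dependencies]
tikv-jemallocator = "0.5"

// main.rs: make jemalloc the global allocator instead of musl's default
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;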