Is zig lang faster than rust?

There’s also Kostya’s benchmark:

Last updated 2021/12/22. Rust surpasses Zig in 4 benchmarks out of 6.


I think this summarizes what we've seen so far... Rust and Zig are in the same ballpark when it comes to performance, so the correct answer to "which language is faster?" is "it depends on how you write your code".

Both languages use LLVM to AOT-compile to machine code, their standard libraries are both pretty well written, and both let you write code that is as fast (or as slow) as you want.

As many people have mentioned, performance is a lot more nuanced than "language X is faster than Y". It's naive to expect anything more precise than rules of thumb and ballpark figures. The micro-benchmarks you often see when comparing languages are almost always overly simplistic to the point of being useless for anything other than religious wars on the internet.


Absolutely. They tend to devolve into boring transliterations of whichever version is currently winning, regardless of whether anyone would ever actually write the code that way in that language.


Completely agree. Plus, "4 out of 6" is two thirds of a really small sample, whatever the population is. Basically chance.


learn to code with performance in mind


Of course, the speed of a programming language always depends on

  • the computational problem you address,
  • the way you use the language (e.g. idiomatic code vs optimized code).

Nonetheless, I feel like it's okay to colloquially assume that some languages are slower than others. Rust with unsafe certainly is faster than safe Rust. C is faster than (pure) Python. Machine code is faster than Java (in execution time, not in development time).

"Is zig lang faster than rust?" No idea, as I haven't heard of Zig before, but it seems to be an interesting approach. Thanks to the OP @Amigo for pointing me/us to it. I think it's always good to look at other languages to see if some things can be improved (or are slower for a good reason).

Why ask a question like "Is zig lang faster than rust?" Because Rust aims to have little runtime overhead, so it's only natural to become curious when some other language claims to be faster in some respects, and to raise such a (colloquial) question.

Speaking of overhead, I feel like Rust generally has a lot of overhead (not as in execution time at runtime or memory consumption, but in other ways such as binary size or complexity of the compiler). Even small programs I developed in Rust quickly result in a huge dependency tree. I also found the following note from another thread interesting:

I just tried to see how big hello world (with --release) is on my FreeBSD system. It is 3.5 MB big. Comparing that to a hello world C program (with -g -O2), which is just 15 kB (and that includes debugging code).

This is where I see one of the big challenges for Rust: Rust isn't lightweight, it seems.

Note that "overhead" isn't always bad. But I find it noteworthy that a compiled hello world program in Rust is (with default options) more than 240 times bigger than a hello world C program. How about hello world in Zig?

:floppy_disk: Please insert Hello World, Disc 2, and press any key to continue.

Edit: When I use cc -O2 -g -static, I get 3.9 MB for hello world in C, so my example isn't valid. But I still think Rust binaries may be bigger than C binaries in many cases?

Edit 2: I just noticed that my 3.5 MB hello world program in Rust wasn't statically linked:

% ldd hellorust
hellorust:
        => /lib/ (0x8010b0000)
        => /lib/ (0x8010dd000)
        => /lib/ (0x8010f6000)


I believe there's been cases of the compiler being able to better optimize some safe code than an unsafe version, which was supposed to be faster. So if you're adding a bit of unsafe to squeeze out more performance, be sure to time it to make sure it actually helped!
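As an illustrative sketch of why this can happen (hypothetical, not a measured benchmark from this thread): the safe, iterator-based version below hands the optimizer complete bounds and aliasing information and typically auto-vectorizes, while the `unsafe` variant only removes bounds checks that the iterator version never emitted in the first place. Always measure with a real benchmark harness before keeping the `unsafe` version.

```rust
// Two ways to sum a slice. The safe version is idiomatic and gives the
// optimizer everything it needs; the unsafe version skips bounds checks
// manually but gains nothing over the iterator in a typical build.

fn sum_safe(xs: &[u64]) -> u64 {
    xs.iter().sum()
}

fn sum_unsafe(xs: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..xs.len() {
        // SAFETY: `i` ranges over 0..xs.len(), so it is always in bounds.
        total += unsafe { *xs.get_unchecked(i) };
    }
    total
}

fn main() {
    let xs: Vec<u64> = (1..=1000).collect();
    // Both must agree; which one is faster is a question for a profiler.
    assert_eq!(sum_safe(&xs), sum_unsafe(&xs));
    println!("sum: {}", sum_safe(&xs)); // 1 + 2 + ... + 1000 = 500500
}
```
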


In the podcast, he says the following thing about his untagged union trick:

And here's the kicker: This code is safe in Zig, because we have safety on untagged unions. We have both, it's safe. We didn't have to give up safety to accomplish this.

I have since come across this comment that explains how this actually works:

In a safe build mode, bare unions have an extra, secret tag that the language uses to insert runtime safety checks when union fields are accessed. This causes the array of 8-byte bare unions to actually become an array of 16-byte bare unions due to 8-byte alignment. A heavy cost to pay for this runtime safety.

But we don't ship a safe build of the Zig compiler to our users. We ship a ReleaseFast build. In this build mode, bare unions do not have a secret safety tag, and they do not have runtime safety checks. Accessing the wrong field in this case is Undefined Behavior.

If you want to claim that your language does something better than Rust without giving up safety in the process, then I find it really disingenuous if it turns out that you achieved it by using a different definition of "safe" that allows for UB in optimized builds. You certainly gave up safety in comparison to Rust.
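For contrast, here's a small Rust sketch (the type and names are mine, just for illustration) of the analogous construct: a Rust enum carries its tag in every build mode, and access goes through `match`, so there is no release-mode configuration in which reading the wrong variant becomes UB. The space cost the quoted comment describes also shows up here: the tag pushes an 8-byte payload up to a 16-byte value.

```rust
// A tagged union in Rust. The discriminant is part of the type in
// every build mode; accessing a payload requires matching on it.

use std::mem::size_of;

enum Value {
    Int(i64),
    Float(f64),
}

fn describe(v: &Value) -> String {
    match v {
        Value::Int(n) => format!("int: {}", n),
        Value::Float(f) => format!("float: {}", f),
    }
}

fn main() {
    println!("{}", describe(&Value::Int(42)));
    // Tag + 8-byte payload, 8-byte aligned: 16 bytes, not 8.
    println!("size: {}", size_of::<Value>());
}
```
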


I wonder what it actually means for one programming language to be faster than another?

I mean, a programming language is only a specification: a syntax and a semantics. Basically it determines how you can arrange characters in a file of human-readable text. As such, it does not have a speed, in the same way that mathematical expressions do not have a speed. They just are.

On the other hand, the intention is that this mess of characters in a file can be transformed into a pile of binary instructions that a machine can step through.

The speed of that resulting binary clearly depends on the ability of the compiler to turn source text into efficient machine code. So the compiler determines the speed of the executable, not the programming language definition.

But, seems to me that some syntax and semantics may well make it easier for the compiler to produce efficient code than some other syntax and semantics. Is it so then that the programming language definition can help or hinder the final speed of execution?

Arguably, a compiler could be built to transform any source text in any language into an optimally fast executable. But is it so much harder for some language definitions than for others that it's easier to define a more helpful source syntax/semantics than to build that super-clever compiler?

Why, for example, can't one compile Java or JavaScript programs into binary executables as fast as C/C++/Rust?


Do you remember a particular example?

Of course, unsafe Rust can also employ safe functions/methods. But it gives you the option to do stuff that you otherwise can't, so I'd (colloquially) say it's "faster". But yeah, I think there are a lot of surprises with unsafe Rust, and it wouldn't surprise me if there are many cases where it doesn't really make things faster. An example would be helpful here, if anyone has one?


I presume Rust on FreeBSD has the same bug as on Linux, where release builds always get debug info for libstd, even if you try to disable debug info. You can't disable it, and it adds over 2 MB to hello world. You have to use strip, otherwise the comparison is not valid. It's just debug info: it's never executed, and you don't have to keep it, so it doesn't make the language "not lightweight".

Secondly, C statically linked with its puny stdlib still isn't equivalent to Rust linked with its much more comprehensive libstd. If you don't want to be comparing the cost of unused code, use LTO.


Okay, let's use strip:

fn main() {
    println!("Hello, world!");
}

I'm using:

% rustc --version
rustc 1.60.0-nightly (bd3cb5256 2022-01-16)
% cargo --version
cargo 1.60.0-nightly (06b9d3174 2022-01-11)

Compiling with cargo build --release gives a 3,695,600-byte binary (3.5 MiB). After I use strip, it shrinks to 307,168 bytes, which is 300 kiB.

Now to the C version:

#include <stdio.h>

int main() {
    printf("Hello World!\n");
}
Note the code is longer :grin:.

I'm using:

% cc --version
FreeBSD clang version 11.0.1 ( llvmorg-11.0.1-0-g43ff75f2c3fe)
Target: x86_64-unknown-freebsd13.0
Thread model: posix
InstalledDir: /usr/bin

Compiling with cc -Wall -g -O2 hello.c (note that this includes debug info!) gives a 15,296-byte binary (14.9 kiB). After I use strip, it shrinks to 4,976 bytes, which is 4.9 kiB.

Even after stripping, Rust's binary is 61.7 times bigger than the binary created using C.

I didn't want to say that the overhead isn't worth it. I just wanted to note that it is there.

What is LTO?

P.S.: Note that the Rust binary (as well as the C binary) is dynamically linked with libc (see my pasted ldd output in my previous post).


This has been discussed many times. It's not showing what you're trying to show, only that you're building different things in different ways. Your C executable links to a 30 MB dynamic library. You're not using LTO, so Rust also includes support for panicking, stack traces, debug-info parsing, strings, vectors, and a ton of other stuff. Rust pays a hefty price to "handle" errors when printing to stdout, which the C version doesn't do.

And even with all of that, it's still a one-time cost. It doesn't have to be executed on startup (modern OSes may not even load it from disk at all). It doesn't grow linearly with code size, so the multiplying factor applies only to hello world and goes towards zero for everything else.

I am bothered by the extra overhead Rust's libstd and debug info add (which is why I created that other thread about removing backtrace costs), but let's be real: for the majority of applications it doesn't matter.


Regarding my quantification of this (minimal) example, I was comparing default settings:

I assume that this overhead isn't inherent to Rust (e.g. you can also use no_std), so yeah, maybe my comparison here wasn't fair.

Anyway, I still feel like Rust has more overhead than C. And again: That isn't necessarily bad.


Could you (or someone else) provide an example (or provide a link to one) of how to make a smaller hello world binary?


I tried to enable LTO:

[profile.release]
strip = true
opt-level = "z"
lto = true

This reduces binary size to 272904 bytes (266.5 kiB). Still big.

Furthermore adding

codegen-units = 1
panic = "abort"

gets me down to 254992 bytes (249.0 kiB).

Using -Os for the C hello world (plus strip) gives me a 4960 bytes binary.

Anyway, maybe this isn't the right thing to be discussed in this topic, and perhaps it has been discussed before plenty. Thanks for the link in that matter, I'll read into it.

Normally I'm not really worried about binary sizes, except when doing something for microcontrollers / embedded systems. But I'm sure Rust can produce small binaries for those with the right settings.


It isn't entirely fair, as the Rust version statically links the formatting machinery, while the C version dynamically links to printf. In addition, the Rust version is capable of printing nice panic messages when aborting, while the C version isn't. Using -Zbuild-std -Zbuild-std-features=panic_immediate_abort to disable panic messages results in a 47 kB binary with LTO enabled and 67 kB with LTO disabled, after stripping. This still includes code to ensure that stdin, stdout and stderr are open, and to give an understandable message on stack overflow rather than a SIGSEGV. Neither is done by libc, and the lack of the former can cause security issues.
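For anyone who wants to reproduce this: I believe the invocation looks roughly like the following (this is my reconstruction from cargo's unstable build-std documentation, not the exact command used above; the target triple is just an example and `hello` is a placeholder crate name).

```shell
# Nightly only: rebuild libstd with panic messages compiled out.
cargo +nightly build --release \
    -Z build-std=std,panic_abort \
    -Z build-std-features=panic_immediate_abort \
    --target x86_64-unknown-freebsd
strip target/x86_64-unknown-freebsd/release/hello
```

Note that -Z build-std requires an explicit --target, because the standard library is being recompiled for that target instead of using the prebuilt one.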


Of course there are reasons why the Rust binary is bigger. If I erased all the structural differences in my observation then, well, the binaries would be of equal size.

But in the end, the binary I created with Rust is bigger. :man_shrugging:


Getting back to the original topic of this thread, "Is zig lang faster than rust?", isn't "unfairness" inherent when we compare languages with a different focus? It's also unfair if I compare the speed of C with the speed of Python, because they work very differently. But there's still a difference in speed, and there is a reason why we compare speed between these two very different languages.

When talking about "overhead", I'm coming from C and Lua and usually work with a very small dependency tree. Since I started using Rust, that has drastically changed. For example, cargo tree easily gives me a long list of crates that I don't even know (env_logger :thinking:). That, of course, isn't inherent to Rust either, as it's up to me which crates I use and which I don't. Anyway, I feel like Rust comes with more overhead than I'm used to. And I would like to repeat for a third(?) time that this isn't necessarily bad!

Anyway, may I ask the question: Does Zig have less overhead than Rust?

Of course, "overhead" is a vague term. And I don't really know Zig, so I have no idea if this hypothesis is true. But I know Rust, and I feel like things quickly get complex in Rust. I'm not surprised that Rust's hello world is 60 times bigger than when I do it in C. Even if (or especially as) this comparison is unfair.
