No. Rust has some of the best cross-compilation support among mainstream systems languages.
No language is going to give you this. This doesn't depend on the language but on the operating systems. Windows' loader doesn't understand how to load Linux binaries, nor does the Linux loader understand how to load macOS executables. You are going to need three separate binaries for the three platforms, and that'd be the same situation were you writing in C or C++ or $LANGUAGE.
In Java, the virtual machine is platform-specific. Java bytecode is only portable to machines with a VM that can run it. Same thing with browsers being platform-specific. The only way to be platform-agnostic is to bypass the OS (i.e. by being an OS) or to translate the OS. Sometimes the OS provides this translation, like how you only need one binary for any Linux OS, or how Windows has WSL. You can make binaries that work on multiple platforms, but those just include a separate translation layer for each supported OS, and sometimes they just have whole separate binaries packed into one file.
Rust takes a mostly minimal approach: you get a binary that works on one OS, and as a result, the binary is well-optimized for that OS, builds fast, has no dependencies, and is not too large. Theoretically, you can compile Rust to wasm and only require the user has a wasm VM, but right now, you'll probably need to distribute the wasm VM as well (unless you're targeting the browser), and only certain code can be compiled to wasm.
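To make the wasm route concrete, here is a hedged sketch of what that workflow looks like today; the target name and the wasmtime runtime are assumptions, and as noted above, only code without OS-specific dependencies will build this way:

```shell
# The wasm target ships with rustup; rust-lld links it, so no platform
# C toolchain is needed on the host.
rustup target add wasm32-wasip1
cargo build --release --target wasm32-wasip1
# Running the result outside a browser requires distributing a wasm
# runtime alongside it, e.g. wasmtime (assumed installed):
# wasmtime target/wasm32-wasip1/release/myapp.wasm
```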
There's also the non-technical limitation of OSes having build tools that only work on their own OS.
It's better than C/C++, and that's it. Cargo is absolutely helpless about even basics like choosing a working linker/sysroot.
Zig and Go work out of the box. Cargo needs to be manually configured for every platform pair, and cross-compiling anywhere other than on Linux, or between two Apple platforms on an Apple host, is pretty sad. The most popular solution - cross - isn't actually cross-compiling; it just runs same-host compilers in virtual machines.
I know it's not really a language, but it's something Rust could target. Also, in fairness, this used to be mostly true before Apple mucked everything up by ditching x86. But it does prove that you can have binaries that work on multiple operating systems.
Edit: Also, I do think it's worth keeping in mind that as long as Zig and Go have a better cross-compilation story than Rust does, then Rust could stand to improve. I'm aware that it's incredibly difficult, so I'm not saying it should be a priority, but I don't feel like it should be dismissed.
And I would say that it's precisely what you are supposed to do. OS vendors don't support cross-compilation. Only Apple actively fights it, but neither Linux distros nor Microsoft consider it a problem if an update (security-related or otherwise) breaks cross-compilation.
Thus you end up with only two possible options:
1. You just accept that and use a VM to create releases.
2. You spend a significant amount of time and achieve subpar results (the CLI more-or-less works, because nobody spends time improving it and thus breaking it, but the GUI is a disaster).
Go/Zig picked option #2 while C++ and Rust go with #1. This gives Go/Zig the ability to easily create command-line utilities but makes it very hard to develop anything that integrates more deeply with the OS. C++/Rust let you create perfectly native applications, but as a result you have to use a VM for compilation.
Java tried to achieve both and ended up achieving neither: Java apps look foreign on every OS, yet they achieve some decent level of integration. So instead of an easy-to-build situation or a well-integrated-with-the-OS situation, you get something in the middle.
That whole cosmocc/APE thing is a massive hack that uses the toolchain and the loaders in a way they were absolutely never intended for. I wouldn't trust my $anything with an executable like that. When I'm writing in Rust, I'm doing it for the safety and correctness. The last thing I want from the compiler is to produce a "this is possible" proof-of-concept and then crash upon the first nontrivial program.
I love neat hacks that expand the boundaries of computing. But they positively do not belong in production.
My issue is that the defaults for cross-compilation are obviously wrong, because there are none: Cargo just runs cc. It doesn't even try to do a sensible thing on platforms that Rust supports. It does nothing even on OSes where cross-compilation is supported, like Debian/Ubuntu with multiarch.
It's annoying to set up the flags for every project for every environment. They're not always obvious, and I waste time trying to guess and reinvent something that should be standard.
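For readers who haven't hit this: the boilerplate in question is a hedged sketch like the following, which each project ends up rediscovering; the linker names here are assumptions that depend on which cross-toolchain packages are installed on the host:

```toml
# .cargo/config.toml -- per-target linker configuration that Cargo
# will not figure out for you; both linker names below are examples.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"

[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"
```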
Cargo is very naive about linking, and easy to break. For example, recently I was fighting it when building in Xcode for iOS/WatchOS/tvOS target, since Xcode modifies PATH to include the other platform's SDK, and Cargo naively launching cc from PATH runs a wrong linker that does not support macOS, so cross-building Cargo failed to link build.rs and proc macros even for the host platform.
This is also jarring compared to the cc crate. That crate goes pretty far to find the proper compiler. This has traditionally been the exact same "what's the issue? can't you just set the CC env var?" problem, but the cc crate is smart enough to even search the Windows registry to find link.exe, instead of relying on the standard vcvarsall.bat.
IMHO Cargo should have some of these smarts for linkers, instead of the C attitude of "of course it's broken out of the box, and you have to configure it each time; this is how our grandfathers did it, and they liked it!"
Cargo never invokes the linker; rustc and the cc crate do. Rustc uses the cc crate to find the linker for MSVC by reading the registry. Apart from that, it just uses the linker specified in the target spec without any of the smarts of the cc crate. The linker specified in the target spec is whichever one should be used for native compilation, except on a couple of targets without a libc (as well as wasm32-wasi), where rust-lld is used instead. And for targets where we bundle a libc (Windows MinGW, musl) we still need the gcc/clang linker driver, but we do tell it to use the bundled libc instead, so it works as long as the binary format and the target architecture match the host.
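You can see what a target spec actually says on a nightly toolchain; this is a hedged sketch, and the exact set of fields printed may vary by target and compiler version:

```shell
# Dump a target's spec as JSON (nightly-only); look for fields like
# "linker-flavor" and, where one is named explicitly, "linker".
rustc +nightly -Z unstable-options --print target-spec-json \
    --target wasm32-unknown-unknown
# The wasm targets name rust-lld directly, while most Linux targets
# just inherit whatever "cc" the host toolchain provides.
```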
Also, for Windows specifically we implemented raw-dylib, which will allow linking for Windows without requiring any of the import libs that Microsoft doesn't want people to redistribute, once the ecosystem (mainly Microsoft's windows and windows-sys crates) adopts it. Just rust-lld in its link.exe flavor should be enough then.
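For those who haven't seen it, this is roughly what raw-dylib looks like from the user side; a minimal sketch that only compiles when targeting Windows, since the raw-dylib link kind is Windows-only:

```rust
// With kind = "raw-dylib", rustc synthesizes the import information
// itself, so no kernel32.lib import library is needed at link time.
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    let pid = unsafe { GetCurrentProcessId() };
    println!("pid = {pid}");
}
```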
Go handles cross-compilation by skipping the platform linker entirely, which in turn makes it impossible to link against any C code other than the system libraries. Except when using cgo, but then you need a C toolchain for the target anyway, just like Rust, and without any help from a bundled musl.
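For comparison, Go's out-of-the-box story is just environment variables; a sketch, assuming a pure-Go project with no cgo dependencies:

```shell
# Cross-compile the current module for two targets from any host.
GOOS=windows GOARCH=amd64 go build -o app.exe .
GOOS=linux   GOARCH=arm64 go build -o app-linux-arm64 .
# cgo is disabled by default when cross-compiling; enabling it brings
# back the need for a target C toolchain, as described above.
```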
Zig literally bundles clang (as linker driver and C compiler) and, as I understand it, bundles what are effectively import libraries as well as header files for Linux, Windows MinGW and macOS (kind of like what raw-dylib is intended to do). I'm not sure how they handle other OSes.
Traditionally, when compiler users complained "the error messages are terrible!" the answer was "that's because we're using yacc", and left at that. When users said "I can't easily use libraries", the answer was "that's because it's system dependent, and you need a package manager, and on Windows it's even more complicated", and left at that.
But Rust has been generally pretty ambitious with developer experience, and fixed these things very well, even if that needed rewriting or reinventing the stack underneath it.
So when users say "the cross-compilation experience is poor", and the answer is "that's because rustc is calling the platform linker", I'm really worried that it's going the traditional C way: the mess just gets explained, and left like that.
In this forum people come with questions like "How do I build for Raspberry Pi?" or "I'm on macOS/Windows, how do I make a Linux executable?" and Rust doesn't have an answer to these things, other than giving up on cross-compilation and using a VM.
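To be concrete about what answering the Raspberry Pi question currently involves, here is a hedged sketch for a Debian/Ubuntu host targeting a 64-bit Pi; the package name and target triple are assumptions that vary by distro and Pi model:

```shell
# Install a cross linker driver from the distro, tell Cargo about it,
# and build. None of this is discovered automatically.
sudo apt install gcc-aarch64-linux-gnu
rustup target add aarch64-unknown-linux-gnu
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
cargo build --release --target aarch64-unknown-linux-gnu
```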
I think inheriting the C linking toolchain created a blind spot for quality-of-implementation for Rust. Rust managed to create wonderful tooling and seamless experience for areas it took ownership of. Unfortunately, Rust did not take ownership of the linking, apart from dabbling a little in use of lld, which ironically doesn't work well on all platforms, and requires manual configuration for cross-compiling too.
Yes, some people come and ask, but how many of them are ready to pay for that (not necessarily with money; spending your time fixing bugs caused by a platform change is payment, too)?
I just don't think there are enough people who care.
And yet it took almost a year to add if let formatting support to cargo fmt.
Rust managed to produce wonderfully working tools precisely because it doesn't try to support all the weird things it possibly could, and thus has the time and resources to concentrate on the things the majority of users need.
As for cross-platform support… I'm not sure there are enough people who want to support that mess in working order.
With cc they kinda have no choice, because almost every user of Rust needs to compile one C library or another eventually. Lots of people benefit, enough to keep that ball rolling.
With cross-compilation… I have no idea if there are enough people who want/need to use that stuff to pay (one way or another) for it.
A lot of the work required to port Rust to a new target exists precisely because we do things our own way (in the hope of enabling easier cross-compilation in the future) rather than using the system toolchain. Porting to a new target requires writing new libc bindings from scratch rather than generating bindings from the host C headers. (This also causes issues because FreeBSD makes libc ABI changes in every major version; so far we've just been lucky that we could rely on older versions of the symbols for compat with multiple FreeBSD versions.) You also have to change the list of C libraries to link in libstd, as we pass the linker driver flag that disables all default system libraries.
And even then we rely on the system linker driver for a lot of things. On Nix or Guix especially, bypassing the system linker driver would cause an incorrect dynamic linker path to be passed to the linker unless specific support for those OSes is added. But even outside that, we rely on it to pass all the flags necessary for successful compilation to the linker, to find all system libraries, and to follow distro policies around linker flags like enabling build-id. We could ship our own copy of clang as linker driver like Zig does, but that wouldn't work for targets only supported by GCC.
What I'm trying to say here is that there is a huge tradeoff to be made between not depending on the system toolchain for easier cross-compilation and completely depending on it for significantly easier porting to new targets.
Supporting just Windows and macOS without a native toolchain should solve cross-compilation for most people, and would be a lot easier than handling all targets this way. For Windows we already have MinGW support; we could bundle MinGW and depend only on a PE linker, which our bundled lld can provide if we want. For macOS we would have to add support for raw-dylib to generate .tbd import files, and modify the target spec to pass the right linker options directly to the linker.
As said, a single binary can't run on different machines, but you can absolutely generate binaries for different machines from a single machine. On an ARM Mac I regularly generate binaries for macOS, Linux and Windows, for both AMD64 and ARM for each of them.
The first step, which you only need to do once, is to add the toolchains; for Windows, one also needs to build a proper development environment, so this has to be run:
cargo install cross --git https://github.com/cross-rs/cross
rustup target add x86_64-unknown-linux-gnu
rustup target add aarch64-unknown-linux-gnu
rustup target add x86_64-pc-windows-msvc
rustup target add aarch64-pc-windows-msvc
rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin
# Then we install cross-rs source, so we can build
(cd ..; if [ -d cross ]; then cd cross; git pull; else git clone https://github.com/cross-rs/cross; fi)
(cd ../cross; git submodule update --init --remote; cargo build-docker-image x86_64-pc-windows-msvc-cross --tag local; cargo build-docker-image aarch64-pc-windows-msvc-cross --tag local)
Once you have it ready, you can just build the binaries with Cross:
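For example, matching the toolchains added above; for the MSVC targets, Cross.toml may need to point at the locally built images:

```shell
# cross accepts the same flags as cargo build, running the target's
# toolchain inside the prepared Docker images.
cross build --release --target x86_64-unknown-linux-gnu
cross build --release --target aarch64-unknown-linux-gnu
cross build --release --target x86_64-pc-windows-msvc
```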
While Go works great without this hassle... it only works great until one hits a CGO dependency (or uses cgo in the core code). I build Go, too, but the "fun" of compiling Go for Mac and Windows in Docker, including re-compiling LLVM to build for Mac ARM, is way beyond the slight nuisance of setting up Cross in Rust. So I'll go out on a limb and claim Rust is one of the better places to be when cross-platform compiling is required.
In my first message I mentioned cross. Note that cross doesn't use Rust's cross-compilation capabilities; it launches virtual machines instead.
So I don't take the existence of cross as "cross-compilation is easy". I take it as "actual cross-compilation in Rust is so annoyingly hard that even a dedicated cross-compilation project has completely given up on using it".
I think the use of Docker is telling. In C projects, where dependency management is hard, many projects find Docker easier. They say: don't bother installing dependencies on your actual host operating system, just run this Docker image, easy, problem solved! OTOH Cargo made dependencies actually easy, so users are instructed to just run cargo build natively on their host OS, no Docker needed. So I think Rust will have solved cross-compilation when the solution is to run cargo build --target natively on the host OS, not cross build --target that uses Docker.