Why must everyone have a private copy of all rust tools?

So our standard practice requires what I would call a "tight configuration management regime":

For example, we have a "VM farm" with all tools pre-installed (rather beefy VMs).
We do not have or use Docker, because we control our VMs and automate their creation.

We have a HUGE high-performance NFS server (a huge NAS) for shared things.
We put common tools in a common place.
For example, we have /nfs/tools/Xilinx, and under that we have multiple versions.
Same with /nfs/tools/microsemi (now Microchip),
and /nfs/tools/Aldec,
and... the list goes on.

For security reasons, the policy is: users shall not, and are actively prevented from, installing any tool or package. Period.

We also require a rather locked-down configuration management system - i.e., tools and packages are requested, approved, reviewed, and installed.

We also very much want to respect copyrights (we are contractually obligated by our customers, but we also think it is the right/moral thing to do), so nobody can just download any package without a request, review, approval, etc. before we begin.

BUT Rust says: just download it and enjoy. This is antithetical to everything I would think Rust is about - i.e., type safety and security - and, for example, anything about respecting the copyrights of others' code.

It's like everything about Rust is 100% wild-wild-west, cowboy methods. The exact things they teach you not to do in any large development group.

How exactly is one supposed to set up a properly configured, "configuration-managed" Rust development environment if everyone must personally install Rust and personally install each and every component?

And that includes all of our CI/CD worker machines - really, why can't we have the tools pre-installed?

This is just so wild-west and cowboy that it makes my skin crawl.

You don’t have to use rustup.

Every rustup-installable tool or library is a tar archive, and the software inside will tolerate being installed anywhere. You’re going to need the compiler, the standard library, and cargo to build anything. These archives are listed in

The only tricky part is that you must merge all the rust packages into a single directory (called your “sysroot”). This is pretty easy to do with most command-line archive tools, but GUIs don’t usually make this easy.

The install script from the rust Chocolatey packages shows an example of how to do this in PowerShell. The tar archive itself also contains a sample bash script.
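To make the merge step concrete, here is a minimal sketch in plain shell. The directory layout is a stand-in: the real standalone archives (rustc, rust-std, cargo) are downloaded from static.rust-lang.org and each ships its own install script you can point at a shared prefix instead; all paths and component names below are placeholders.

```shell
#!/bin/sh
# Sketch of the "merge everything into one sysroot" step.
set -eu
sysroot="./demo-sysroot"
rm -rf "$sysroot" unpacked
mkdir -p "$sysroot"
# Pretend we already unpacked three component tarballs side by side.
for comp in rustc rust-std cargo; do
    mkdir -p "unpacked/$comp/bin"
    printf 'placeholder %s\n' "$comp" > "unpacked/$comp/bin/$comp"
done
# The merge itself: copy every component's payload into the one sysroot.
for comp in rustc rust-std cargo; do
    cp -R "unpacked/$comp/." "$sysroot/"
done
ls "$sysroot/bin"
```

Once merged, the sysroot can live anywhere - including an NFS share like the ones described above - and be referenced via PATH.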


Why is this not reasonably described under the installation process?

The only way to know of this method is because you happened to see this.

I am serious in saying there is money to be made packaging this in a way some corporations can accept.

That's why tools like Keil and IAR exist.

Time and interest, I expect. Much of Rust development is self-organized and self-directed, and while there are some fairly large corporate deployments of Rust, there hasn't been a huge amount of interest in the kind of installer you're asking about, so I'd be surprised if anyone were motivated to write it. That could change, though.

The complete list of supported installation methods includes more than Rustup - there are a number of freestanding options that are supported, if you'd prefer not to roll your own. However, if you need to customize the installation beyond what the freestanding installers provide, @notriddle's option - unsupported though it may be - will see you through, and is in practice likely to be stable for the indefinite future.

Where have you been for the last forty years?

Back in the day the mainframe and central systems ruled the roost. Everything was centralised. Everything was installed and maintained by the high priest "operators". Mere users (developers) could only borrow tools and services under strict rules. See Bastard Operator From Hell - Wikipedia

With the arrival of the personal computer, Unix, and GNU, that all changed. Users (developers) took control of their own destiny. The software world flourished like flowers in the desert after rain, bringing us the fabulous world of software we have today.

More seriously, I do appreciate your concerns about stability, traceability and so on. Very important for many industries. What I do wonder, though, is how you deal with other languages like C and C++? They are also the "wild west" you describe. It's not just a Rust phenomenon by any means. rustup, cargo and all are a welcome bonus for Rust users, but hardly necessary or essential. One could proceed as one does in the C/C++ world: use rustc directly, use Make or other build systems.

I believe that's a question you should ask whoever manages the CI/CD environment, or whatever environment where you expect some tool to be pre-installed.


Because for every developer that needs that, there are hundreds of developers who would rather download Rust and use it like it's described in all the tutorials.

Even in a world where all developers work for companies like yours (at my $DAY_JOB the situation is similar, BTW).

Because only one or two developers from your organization would ever need to read the documentation about how Rust works and think about how it may be installed in your environment - and then everyone else would read internal documentation that explains how it works specifically for you.

But some of these developers may enjoy Rust outside of all that complexity, too – and then they would need “download and enjoy” kind of documentation.

In a sense, Rust documentation for enterprises is not prominent because people enjoy Rust and use it without pressure from management.

If you see that some tool has "enterprise-grade" documentation front and center… you know immediately that people hate that tool, and that the only way to convince them to use it is to exert management pressure.

Sure. But that money rightfully belongs to companies like Ferrocene (do we know any others, BTW?). They provide certificates, support, and other such things.

Keil and IAR are built around Clang and there are many more developers who use clang than developers who even know Keil and IAR exist.

It's the same with Rust.

P.S. Do we have other companies like Ferrocene, BTW? Or is the hope that eventually Keil and IAR would add Rust to their IDEs?


Sounds like you need to make friends with someone at https://oxide.computer. They are building server hardware and all the software to manage it in Rust. I'm sure they have all their Rust tools locked down.

Reminds me of the late 80's when I worked in a secure military environment. All our tools and code were managed on a central VAX. We developers had our own setups on our PCs, though. Mind you, no networking outside the office was allowed, and our hard drives were removed from the PCs and stored in a safe every night!

Here's a different way to think about this approach, if you're actually interested in the reasoning:

Everyone, by default, has a simple way to get an up-to-date version, regardless of platform, directly from a verified source. That's a huge win compared to the standard approaches: "here's the source code, figure out how to build it"; or maybe you can manually download and unpack a binary in the right place and set things up right - but oops, they really weren't kidding when they said "put it in ~/magic", it actually requires that, and you're stuck if it can't go there; or maybe there's some third party hard-working enough to get an update out for your platform's package manager within the week.

Rust ships valuable features on a regular cadence, and because the above makes updates easy, new releases of crates often take advantage of them quickly. Many larger crates have an explicit MSRV (Minimum Supported Rust Version), but by no means all do. In other words, the ecosystem is built around being able to make updates quickly.

For the case of ensuring that everyone is using the same versions and components, rustup will also do that if you use a rust-toolchain.toml in your projects.
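For reference, pinning a toolchain per project is a small file checked into version control next to Cargo.toml; the channel value below is only an example:

```toml
# rust-toolchain.toml, at the project root
[toolchain]
channel = "1.74.0"                 # an exact version, not a moving "stable"
components = ["clippy", "rustfmt"] # extra components everyone gets
```

With this in place, rustup transparently selects that exact toolchain for every developer and CI runner working in the project.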

In a related ability, it can install multiple versions and components that can be selected on the fly, so you can test the effect of an upgrade without risking any effects on other users or projects.

In short, while it may not suit your environment, rustup is hardly a tool of the "Wild West" when it automates away every personal choice a developer would otherwise have to make!


WRT the talk about packages: that's cargo, strictly, not rustup or Rust itself - though realistically it's a lot harder (if not impossible) to use Rust without cargo. There are language-agnostic build tools (you're probably already using one), like Buck or Meson, that make it a lot easier to use a "vendored" setup like you seem to have, but you'll be doing a lot more scutwork just reproducing a dependency tree description, and cargo crates are going to have far deeper and wider dependency trees than any C++ code you've ever seen, because they have a public package registry to lean on in the first place. More feasible is to run a private mirror with only approved crates on it; that's a relatively simple setup.
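A private mirror of approved crates can be wired in via cargo's source-replacement mechanism in a checked-in config file; the registry URL below is a made-up placeholder for an internal index:

```toml
# .cargo/config.toml
[source.crates-io]
replace-with = "approved-mirror"

[source.approved-mirror]
# Hypothetical internal sparse index containing only reviewed crates.
registry = "sparse+https://crates.internal.example.com/index/"
```

With this in place, cargo never contacts crates.io directly; any dependency not present on the internal mirror simply fails to resolve, which enforces the review gate.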

If you haven't dealt with languages with a public package registry before, this setup will seem crazy, but it works: there's a lot of ecosystem machinery and tooling to ensure that all the obvious issues either aren't a problem or are much less of a problem than you might think - certainly enough to be worth just how much functionality is available with so little effort. Since you mention copyright: all crates declare what license they're available under, they must all be at least open source to be on crates.io, and nearly all use an extremely permissive license like Apache or MIT. People want you to use their code!

You may be interested in using cargo-deny for this and other purposes.
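As a sketch, cargo-deny is driven by a deny.toml checked into the project; a minimal license policy might look like this (the allow list is only an example - adjust it to whatever your legal review approves):

```toml
# deny.toml - read by `cargo deny check licenses`
[licenses]
# Fail the check if any dependency is not under one of these SPDX licenses.
allow = ["MIT", "Apache-2.0"]
```

Run in CI, this turns "respect the copyrights of every dependency" from a manual audit into an automated gate.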


If they did, it would simplify things.
The other issue is US-based tools.

I was recently wondering whether the rust github repos are mirrored in repos hosted in other countries, do you know?

I am one of the CI/CD instigators.

By making dev VMs built with Ansible, they are easy to replicate.
And we do the same with CI/CD machines.

IMHO, Docker exists because people needed a way to control the environment.
The IT department would not let them have a root-like environment, and every developer set up machines their own way - it was chaos.

So, being engineers, they engineered around the IT problem by using Docker,
identical to CI/CD machines.

Don't misunderstand: I believe Docker is great for a standalone microservice you interact with, but not great for IDE development (I include Emacs as an IDE, the debugger being GUD mode).

If you can compile only in the container, great… but the debug source paths compiled in are not the same outside the container.

If you have ever compiled on Linux and debugged from Windows, you understand the issue.

But all of these problems go away when all VMs (and build machines) are controlled and identical, because they are built via a script - and you are not an admin dickhead. I mean, I like Emacs, another guy likes Joe, and others swear by/at vim or gvim - so install all of them, it's easy. Don't put friction into the path.

So I do not see a need for containers.

Yes, familiar tones here.

"Where have you been for the last 40 years?"

Been developing embedded code. My rules for all the teams I have led have been very simple.

We have a dedicated "build machine" - you provide a batch or shell script that builds the code on that machine and only that machine. One guy said, "Oh, the script must 'rsh' into my machine." Nope - put a quick stop to that; we are not doing that.

It's the same machine setup you got when you started. The requirement is that a second person - the release person - can rebuild your code with the script you provided. If they cannot easily do so, it must be fixed; your job is NOT done until the script works.

I've been places where you have a Word document, a README, or an email that describes the process. Nope, none of that: the Word document or the wiki page does not have to be correct, nor does it have to be updated.

But that script - it has to work - so the build is documented correctly. Comments in the script may suck or be non-existent, but the script, and thus the instructions to build, are present and correct!

As they say: USE THE SOURCE, LUKE.

It does not matter if you are building an FPGA bitstream, a Windows application, or an embedded flash image: when you are done, it shall be built with a shell script or batch file.

Our rule is very simple:
a) You have access to a representative "build machine" with tools pre-installed.
In our case, a VM.

b) Check out ONE thing from version control via a TAG name.

c) Run a batch file or shell script contained in that version control system

d) If the package requires submodules, then your script needs to check them out.

e) BUT those sub-checkouts shall be via TAG name also.

f) Then build - the script should do that too.

g) Wait till done and collect the result file or files - you are done.

This is very simple in GitLab: the runner executes a shell script (or batch file for Windows).
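The tag-pinned checkout in steps (b) and (e) can be sketched in plain shell. To keep this demo self-contained it builds a throwaway local repo to clone from; every repo name, tag, and path here is a placeholder:

```shell
#!/bin/sh
set -eu
rm -rf demo-upstream demo-build
# Stand-in for the real server-side repo: one commit, one release tag.
git init -q demo-upstream
git -C demo-upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "release candidate"
git -C demo-upstream tag v1.0.0
# Release machine: check out exactly ONE tag - nothing floating.
git clone -q --branch v1.0.0 --depth 1 demo-upstream demo-build
# The checkout is pinned; rebuilding later from the same tag is identical.
git -C demo-build describe --tags
```

In a real pipeline the clone URL would point at your version control server, and the script committed inside the repo would then run the build (step f).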

I want the same thing for Rust stuff.

I do not think this requirement is unreasonable, nor do I think it is hard.

Everything is simple if someone else does it.

If you really think it's not hard - then do it! All the sources for all the tools are out there, and you can make this happen.

And if you can't make it happen… then it's hard, by definition - and the question that arises is obvious: why do you think someone else should do that, instead of you… and for free?


Rust, even in the default rustup setup, doesn't (and really can't) completely eliminate the sources of those environment dependency problems, but it gives you a lot of tools that make them a lot simpler to deal with.

First and foremost, there's effectively only one Rust compiler, for better or worse, and it's extremely good at compatibility, certainly in comparison to C++ compilers! (In particular, the way it does editions is far better than a "target standard" option.) This means that even without pinning a specific compiler version, the only version-related issue you are likely to hit is that someone started using a feature from a newer version than you have, which you can fix by running rustup update (which you should be doing regularly anyway to get those cool new features). Or, of course, just pin the toolchain for the project.

Secondly, Rust tries as best it can to avoid depending on platform-provided toolchains; not targeting a C compiler or assembler, for example, means it can cross-compile - though it does require at least a platform linker on most platforms, simply because there's no other supported method on many of them to produce working dynamic libraries, etc. (I believe on Mac and Linux, and to a certain extent (for now) Windows, you can even avoid that and use a "pure Rust" target to further reduce platform dependency, via various build configuration options, though I don't think it has great support yet.)

Crates are fetched and built from source so there's no chance of a mismatch between what the source says and the build, or compatibility issues between the crate CI setup and the actual build, etc.

Crates are generally expected not to depend on platform development libraries - most are "pure Rust" and will only fail to build on a platform if the code explicitly doesn't support it, for example if you try to call a function that is only implemented on Mac in code without its own #[cfg(target_os = "macos")]. Others may be a wrapper (either thin or "idiomatic") over a C or other-language library; these should generally document their requirements, but often they will vendor a specific version of that library's source and require only a compatible C compiler on the system. In particular, most people use the cc crate to handle discovery and configuration of the compiler, which reduces the number of times you hit ad-hoc quirkiness. Other times it is configurable whether a crate uses the platform's library or the vendored one. Generally, this is a best effort to "do the right thing", but the general tooling for both Rust and platforms is enough better than the state of the art when many C libraries were becoming popular that said best effort is actually pretty good - not like the autoconf nightmares of old at all.


This has nothing to do with my observation, though. Do you want to use a common setup for your devs? Fine! But someone had to create the base VM that's installed on everyone's machine. Who built that, and with what software preinstalled? That's the key part: if you want Rust preinstalled in it, then you'll need that someone to rebuild the VM with Rust preinstalled, in a way that's no different from any other software.


It's a matter of perspective, and the source of @ZiCog's “Where have you been for the last forty years?” question.

If “any other software”, for you, means CTAN, CPAN, CRAN and all the other repos that followed… then nope, Rust is not any different from other languages. Most popular languages, these days, work like that; C/C++ are a rare exception.

If “any other software”, for you, means “a typical embedded SDK”, where a lump of software is delivered to you and never updated (the ultimate version I once saw arrived as two PCs - all the software was pre-installed on them, one carrying a specific version of Linux and the other a specific version of Windows, and they included spare HDDs in case one of them died; of course, use of anything but those two specific PCs wasn't supported or encouraged) - then Rust is radically different. You have always used C/C++ and SDKs built around C/C++, and never cared about the fact that you are a rare exception… so why should you suddenly change your whole worldview?

And it's not as if Rust cannot be delivered in SDK form. One certainly can download and embed all the Rust tools to create a frozen-in-time artifact… but if you expect such an artifact to be “the normal way of doing things”, then nope, Rust *is* different: Rust caters to individual developers first (because most Rust developers start as individual developers, even if, later, they start working on a team), and other types of installation second (they are supported, they are just not the norm that the typical Rust developer needs to think about).

P.S. And it's not “the last 40 years”, but more like “the last 30 years”, apparently. Communities started organizing repos in the middle of the 1990s; that's 30 years ago, not 40. Most languages got one central repo, but C/C++ were unfortunate: sure, they got systems of ports (FreeBSD Ports - Wikipedia), but those were always tied to one fixed OS (although some OSes share these repos); OS-agnostic repos arrived much, much, MUCH later (and were never able to coalesce into one definitive repo)… C/C++ is very much an exception - yet people who mostly use C/C++ perceive it as the norm.

Yeah? So do you want to sit in a courtroom across from a set of crying parents and explain why the stuff you built yesterday and validated… is now wrong and killed their baby?

Or perhaps a secure system was cracked because of a bug, because somebody updated something and you thought it was OK since it was just a simple fix, a typo…

So yeah, what you describe is acceptable for a college student and a research project.

But there are others who live and work in a different set of regulated industries than you do. Your comments and attitude say they are dumb and should just trust it. Sorry - "trust but verify" wins here.

Is it acceptable when the consequences are your dead baby, a bank security system that was cracked, or the failure of the medical device that kept your mom alive? What if it was your software that caused the next NASA human space flight vehicle to catastrophically fail in a bad, deadly way? Do you want that risk? How do you de-risk that process?

That is a very different level of scrutiny. Do you want to be there?

What if that's what you are creating?

That's the type of environment people want and are asking for.

Your answer is "trust me". That is not acceptable when human life is at stake.

The question and answer is this: I take no chances if human life is at risk.

Decisions like this are made because bad things happen.

With C code I can lock down the tools a little more easily. It is disheartening to hear the phrase: this is a best effort to "do the right thing".

I must live and work in a very different world than you. For these reasons, Rust and the crate system seem to be pure cowboy and wild west, when they claim not to be.

Cargo projects have a lock file. This (assuming no C dependencies) ensures reproducible builds. Depending on a random VM someone set up seems like a terrible idea; rather, you should test for, and ensure you get, bit-identical builds when details of the environment change. That way, your build depends only on which commit you are on in the repo.

In your scenario you would also want to add a toolchain file to specify which exact toolchain to use (not just the packages).

But I suggest you would be better served by using Ferrocene than the standard Rust toolchain, since that will give you a verified toolchain with all the legal rubber-stamped paperwork.
