As if everything you said wasn’t enough already, I think it’s worth saying that these three points are often just as true of standard libraries as they are of third-party libraries.
For 1 and 2, the people who control what goes into a standard library are usually also spending much of their time on the core language, while a third-party library maintainer is far more likely to spend all their coding time on just that one library. That focus is part of how serde displaced rustc_serialize.
For 3, I’m sure most of us can point to some silly duplication in some standard library we’ve worked with. An especially easy target would be C++ having both the printf family and iostreams.
Really, in C++ there are huge chunks of std that exist only because C++ has no package management ecosystem so all third party code is instantly a non-starter for many users. For instance, C++ standardized <random>, but Rust will probably never pull rand into std because there’s simply no reason to. Oh, and that’s also another example of how Rust is not “just C++2.0”, but fundamentally different and better for it.
That’s a long post, I’ll try to do it justice by replying to as many of your points as time allows:
Recently I had a funny experience, where I spotted someone writing some PowerShell from across the room, but too far away to actually read their code. I could tell from the colour scheme that it was the PowerShell ISE screen, but nothing else specific. I could tell – from that distance – that they were a former VBScripter writing scripts exactly like they used to before, except in a new language. They hadn’t embraced any of the PS “way of doing things”, they were just writing their old VBScripts in the new syntax. On closer inspection, my suspicions were verified, the script they were writing looked like it had been mechanically translated line-by-line from a VB script, despite being new code dealing with Office 365 automation.
Once you learn enough languages, and have sufficient decades of experience under your belt, you can get very good at spotting the “accents” of other developers. My main complaint with Rust is that most of its core developers have a strong “C++ accent”. In my mind, Rust 1.x isn’t Rust enough. You can argue all you want that I’m wrong about this, but to me it looks like C++ 2.0 from a mile away.
Yes! This is why I don’t like the current String and str types, especially that the latter is a built-in type like i32 despite being rather complex under the hood. I don’t like that UTF-8 as a memory layout was basically forced on everybody, when UTF-16 is still all too common with Java, C#, and Win32 interop – none of which is going away any time soon. This is the same mistake the Haskell guys made by forcing a specific string implementation on people when it was clearly wrong. Strings are complex enough to warrant an interface.
Rust should have used a set of string traits only, setting things up from the get-go to be 100% smooth across a range of string encodings, including special cases such as &[char], Iterator<char>, compressed strings, UTF-16, Win32 Latin-1, etc…
This doesn’t mean that the standard library has to implement every possible encoding, just that it should have prepared the ground for libraries or std v2.0 to fill the gaps. Right now, things are… a mess. In an earlier post I highlighted issues such as a.foo(b) working but b.foo(a) failing to compile. This is the tip of the design iceberg. It may look small, but it says a lot about what’s going on under the water. Crates won’t fix this.
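To make the point concrete, here is a minimal, entirely hypothetical sketch of what such a trait could look like. None of these names (`TextBuf`, `chars_iter`, `Utf16String`) exist in std; they are made up purely to illustrate an encoding-agnostic interface over UTF-8 and UTF-16 storage:

```rust
// Hypothetical sketch only: an encoding-agnostic string trait.
trait TextBuf {
    /// Iterate over Unicode scalar values regardless of storage encoding.
    fn chars_iter(&self) -> Box<dyn Iterator<Item = char> + '_>;

    /// Count of scalar values; independent of the underlying byte layout.
    fn char_len(&self) -> usize {
        self.chars_iter().count()
    }
}

// UTF-8 storage: what `String` uses today.
impl TextBuf for String {
    fn chars_iter(&self) -> Box<dyn Iterator<Item = char> + '_> {
        Box::new(self.chars())
    }
}

// UTF-16 storage, as used by Java/C#/Win32 interop.
struct Utf16String(Vec<u16>);

impl TextBuf for Utf16String {
    fn chars_iter(&self) -> Box<dyn Iterator<Item = char> + '_> {
        Box::new(
            char::decode_utf16(self.0.iter().copied())
                .map(|r| r.unwrap_or(char::REPLACEMENT_CHARACTER)),
        )
    }
}

fn main() {
    let a = String::from("héllo");
    let b = Utf16String("héllo".encode_utf16().collect());
    // Same logical text, two memory layouts, one interface.
    assert_eq!(a.char_len(), 5);
    assert_eq!(b.char_len(), 5);
    assert!(a.chars_iter().eq(b.chars_iter()));
}
```

Generic code written against such a trait would not care which encoding it was handed, which is exactly the "prepared ground" being argued for.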
Someone mentioned that this slim std-lib decision was a necessary evil to ship Rust 1.0, and this limitation has been incorrectly embraced as a virtue by many people. A thin standard library is one of the reasons I abandoned C++. My productivity in other languages is at a minimum 5x better than in C++, and most of that is due to the richer libraries available out of the box.
Unless you mean C or C++, documentation isn’t exactly uncommon… Even there, documentation is often quite good; I used to use Doxygen back in 2001 or so, along with many other people. Standardised “comment-based docs” are over 20 years old now; both Java and C# 1.0 had this.
I like that you play devil’s advocate to your own post, that’s very scientific of you. 8)
However, to add the “peer review” aspect to that scientific bent:
Maintenance: In my experience, the problem with all third-party module systems – cargo, npm, whatever – is not so much that maintenance is hit and miss, but that this is not self-evident. How do you know whether a module is maintained? How do you know that, if there is a tiny but critical problem, you can get a pull request merged, without actually trying? How do you know that, despite a 10-year history, the dev has recently started a new job and put their tools down? This is bad enough when you pull in a large dependency such as actix or diesel, but what about all the transitive dependencies? Are they all high-quality? Production-ready? Safe? Maintained? Future-proof? Consistent with the other transitive dependencies you’ll be pulling in indirectly via other crates?
Poor Quality: This is a much bigger deal than it sounds. For example, several people have pointed out that, short of reading through all the code of all your transitive dependencies, you have no idea whether there is an unsafe block or a panic somewhere in an innocent-looking library that will crash your application process. Neither Rust nor Cargo properly handles this. Oh sure, there’s the unsafe keyword, but it isn’t “bubbled up”, unlike C# where you have to mark the entire library as “compile with /unsafe”. I just discovered that the mmap crate doesn’t support 32-bit! Err… what? Compared to random landmines like that, I know that Rust’s std library has been tested on 32-bit. I know that Microsoft has tested C# on 32-bit. How do I know that every transitive dependency will work on 32-bit platforms when 99% of Rust developers are using 64-bit operating systems to develop their crates?
Incompatibility: This is just going to get worse. Even some trivial modules are pulling in a dozen or more dependencies, and those in turn are pulling in more, which in turn… ugh. The NPM fiascos have shown that this just leads to madness at scale.
I have a feeling that registries like crates.io simply don’t scale as currently designed. There’s a honeymoon period that just doesn’t last once the real world kicks in. Based merely on observing the mess that is NPM, the following features at a minimum really ought to have been included in Cargo from day #1, but apparently stability and safety just aren’t priorities right now for a language aiming squarely at web developers and systems programmers:
Code signing or some sort of method for securely verifying the origin of code.
Some clear – or better yet, enforced – way to verify that what’s on GitHub matches what’s on crates.io.
Some sort of namespace system to avoid typosquatting, and to stop random low-quality crates permanently taking every common dictionary word. There really ought to be an “official” rust library prefix, at the very least. E.g.: rust-std/uuid instead of just uuid, which could be anything written by anyone.
Some method to handle renames of crates, such as using GUIDs as the real crate identifier, and the display name used only during the initial search.
Compatibility flags on all crates, such as the required rustc version, std or core compatible, 32-bit or 64-bit, x86 or ARM, SSE2 or AVX, etc, etc…
The popularity of a crate (number of downloads, etc…) so you can judge how many people use it.
Whether it is prerelease or not, including all transitive dependencies.
Automated builds or unit tests vs various rustc versions, verifying compatibility.
Some people in this thread mentioned that both Java and C# are going down the same path with things like NuGet. In my opinion, this is terrible for the future of those languages. The quality has taken a massive nosedive. I never had to worry about the compiler throwing random internal errors with C# before, but I do now. I recently had to try and work out why a transitive dependency was causing dotnet core to fail, and I basically couldn’t work it out after weeks of research. Other people hit the same dead end.
That! That is the crux of the problem! Why is this extra thing necessary? Why doesn’t crates.io already have this as a built-in feature? Why can’t crates have some sort of “official seal of approval”? Why can’t we determine whether it’s safe to include a crate without having to trawl through web pages manually?
How do you know that some 3 year old block of code you pick up and compile hasn’t suddenly been p0wned via some transitive dependency?
I can’t deny that the current design of Rust is very inspired by C++, and there are certainly areas where (as you say) I wish Rust went further than it currently does, though I think “C++ 2.0” doesn’t quite do Rust justice.
On the other hand, UTF-8 is very common on Unixes and with network code, so choosing UTF-16 isn’t correct either.
You are right that it would be great if Rust had a more generic system for handling strings (including different encodings).
Isn’t that more of a failing of C++, rather than a failing of package management in general? As long as Rust has a rich set of blessed libraries, isn’t that equivalent to having a rich stdlib?
However, I wasn’t saying that Rust docs are good and other languages aren’t. I was saying that because Rust docs are good, that makes crates more similar to the stdlib. Of course that also applies to other languages that have good docs.
My goal has always been to make good arguments, not to win arguments, so I consider that a compliment.
Those are all good points, which I think can be solved by the community: crates.io listing actively maintained crates, putting more statistics on crates.io, using more badges, etc. There’s actually been some discussion about that recently. I think there’s definitely a lot that can be improved!
I’m not sure if that’s a good counter-argument. In any system with third-party packages you can find bad code. My point was that the popular crates should be roughly the same quality as the stdlib.
As for some of your specific points: I think a lot of it can be solved with lints, and tools that can analyze an entire cargo dependency tree to display useful information (such as the amount of unsafe code, etc.). There’s also work being done on a “portability lint” that should improve the situation that you mentioned with 32-bit vs 64-bit (among other things).
I’m 100% with you about the suckage of npm, though I would like to point out that Rust’s design, Cargo’s design, and the overall Rust culture and ecosystem isn’t the same as npm, so I’m cautiously optimistic.
That would have dramatically delayed the release of Rust 1.0, so I think it’s unrealistic to expect them to have been there from the start. Once things settle down and people have more bandwidth available, there’s certainly the possibility of improving things!
Let me address some of your specific suggestions:
I assume you mean some sort of hash/checksum? That sounds perfectly reasonable to me.
Why does this matter? When you publish a crate, the entire source code gets uploaded to crates.io, so there’s no connection at all to GitHub.
I strongly agree with this. It should really work like GitHub: user-name/package-name.
Doesn’t a namespace system basically remove the need for GUIDs?
I agree with this, though I suspect it’ll be quite difficult to do it right.
This is already implemented and working on crates.io. In fact, it even displays a graph showing the number of downloads over time.
I haven’t tested it, but I believe it’s possible to use semver for that, e.g. 3.0.0-alpha
As for seeing whether transitive crates are pre-release or not, that sounds like a good idea (and not hard to add).
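For reference, Cargo does follow semver here: a pre-release version is only matched when the requirement itself names a pre-release, so a plain `"3"` requirement will never silently pull in `3.0.0-alpha`. A minimal Cargo.toml fragment (the crate name is made up for illustration):

```toml
# Hypothetical crate name. A bare "3" requirement would NOT match a
# pre-release; you must opt in to the alpha explicitly:
[dependencies]
some-crate = "3.0.0-alpha.1"
```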
It’s not automatic, but crater does get run pretty regularly.
Of course individual crates can use continuous integration (like Travis), and many do.
I imagine it’s just because nobody’s done it yet. You have to keep in mind that Rust is still a small community, so we have to prioritize our time and energy. So a lot of “nice-to-have” features end up being passed over in favor of “we-need-it-now” features.
For applications there’s the Cargo.lock that prevents that, though I agree that it’s very scary for new applications.
Just FYI, Nix solves pretty much every problem you listed, it’s a super fantastic package manager, and they do have support for Rust. Unfortunately Nix doesn’t have great Windows support right now.
So, given that Nix was able to solve those problems, I think Cargo can solve them too, it will just take time and effort (which is in short supply right now, because of the imminent release of Rust 2018).
Conversely, my argument is that a rich set of blessed libraries is the standard library. Externalising bits and pieces into a package manager just makes it hard to see where the officially anointed libraries end and the heathen filth begins. 8)
I’m having exactly this experience with C# right now. For example: NuGet, just like crates.io, fails to properly capture the full set of “dependency” flags actually required in complex (real!) scenarios. Nice, easy versioning of “3.0”, “3.5”, “4.0”, etc… has gone out the window, and what I’m left with is a mess of incompatibilities that I shouldn’t have to deal with.
In my particular case, something like this* happened:
I’m writing a C# PowerShell module, which is basically just a DLL library. This part is easy.
This library has a NuGet dependency that in turn has a transitive dependency on a .NET Core module that is also a NuGet package. This is generally not optional now, because the dotnet core standard library is modular. So importing a dependency like System.Data.SqlClient will import System.Data behind the scenes. They’re independently versioned, just like cargo packages.
The linkage is dynamic and done at runtime, so neither NuGet nor the dotnet compiler has any idea what this dependency is exactly, because the pwsh runtime pulls in some specific version of the standard library modules. To work around the issues this causes, there is a wonderfully opaque set of compatibility shims all over the place.
If I stay on version 4.0 of the package I’m using, it doesn’t work, because of a bug in that version of the library.
If I “upgrade” to 4.5 everything breaks on Linux because some legacy compatibility shim was removed. This shim is still required by pwsh on Linux, but not Windows. I have no control over this, it’s a component of dotnet core pulled in by the pwsh executable, not me, but it’s incompatible with my DLL.
None of this is documented, reflected on NuGet.org, managed by nuget.exe, or in any reasonable way discoverable. This is 100% code written by one organisation, Microsoft. Half of it was written by one team (pwsh) and the other half by just one other team (dotnet core). They’re likely working in the same building, and yet there are already entire categories of scenarios where everything just blows up in my face unpredictably, and there is nothing I can meaningfully do on my end to fix it. I literally tried every combination of csproj settings through brute force to see if it could be made to work. It can’t.
I did a lot of research, and I worked out that basically the pwsh team had likely never sat down to write a “third-party” cmdlet module in C# that uses a NuGet dependency. They’ve written a bunch of C# modules, but it’s all part of pwsh, and hence versioned in sync with it, so they’ve just never been exposed to the mess that they’ve created for everyone else, the 99.9% of the developers who will actually have to use this stuff.
I just have a feeling (perhaps unjustified), that a lot of Rust development is similar. It works in a controlled environment, but I doubt it can handle the combinatorial explosion that is already making NPM a nightmare for developers. The more popular cargo gets, the worse it will get.
Let me paint you two scenarios in analogy with my pwsh experience:
Scenario 1, the Servo team (or similar):
A medium-to-large team with a long-term project. Timelines measured in years.
A lot of overlap with the Rust core language team and the developers of the key crates.io packages that make up the “semi-standard” library of blessed modules. These guys likely often meet in person, work in the same building, or correspond on the internet on a regular basis. There is trust built up over years.
Dependencies change slowly, and they have months to patch up any small inconsistencies.
They directly control most of their dependencies, including transitive dependencies. The packages were written for Servo, or by a Servo team member, or someone in close collaboration. Something like 75-90% of the code is under their “control”. They know exactly what they’re importing and where it comes from.
The code is open source. It’s not “sold”, and even if it is, there’s a disclaimer that says that they’re not liable for anything. There’s no warranty.
For scenarios like above, I am in no way disputing that the Rust development environment “just works”. It would be a big step up from C++, provide a lot of flexibility, and generally enhance productivity. This is great. Game developers would be in a similar boat, and I’m sure there’s many more examples.
Scenario 2, an enterprise tool:
Lone developer or small team, some of whom… are not great developers. You can’t control everything, and the people you work with make mistakes or are just a bit sloppy. Maybe good, but overworked.
The goal is to plumb together a bunch of libraries. Feed XML containing JSON into a database. Make it authenticate with LDAP and SAML. Talk to a legacy Java app. Import from a mainframe. Etc…
Your dependencies are published by corporations, not “Bob” down the corridor. You have zero control over these packages. Pull requests are silently ignored, assuming it’s even open-source to begin with.
You’re stuck on an old compiler because of the above.
Timelines measured in weeks. If something breaks, you are screwed. Deadlines make wooshing noises and emergency meetings are scheduled to recur daily by project managers who don’t care about “package incompatibilities” and other meaningless technical talk.
You can’t spend significant time researching the pedigree of every dependency, there’s hundreds of transitive modules being pulled in, looking through them all would eat up your entire dev time budget before you wrote a single line of code.
Even if you miraculously do check everything, maintenance is done by a different team on a different continent. They’ll blindly pull in the latest “updates”. You have no control over this either, management six levels above you signed an outsourcing contract for BAU support.
This is going to be processing sensitive data worth millions. If it’s insecure or crashes, your managers will avoid all responsibility and blame you. You’re lucky if you’re only fired. You’ll likely avoid jail, but lawyers will probably get involved.
Now, in this scenario, Rust and Cargo are… not great. If this happened in scenario #2, that poor solo enterprise developer would not be a happy person:
A dev in scenario #1 is much less likely to be affected, and could just laugh something like this off. A dev in scenario #2 would just avoid Rust if he’s got any brains. I certainly would not use it as it stands, because there’s virtually zero protection from the kind of vulnerabilities that were not just predictable, but predicted, and have occurred for real. Why would I risk it? For what? Twice the runtime performance? Pfft… I could just request a server 2x as big and not risk my job and my career.
So imagine if I were a bad actor like the guy who put malware into the eslint NPM package. Just pretend that I’ve been contributing to crates.io under a pseudonym (real names and code signing not required, remember!). There’s a popular package with a cutesy name that I uploaded years ago and that lots of people use.
I’m going to inject malware into it the next time I fix a critical bug. This malware will steal github.com and crates.io credentials, which I will use to distribute further malware into as many more packages as I can get my hands on.
Now what? What are you going to do about it?
The clock is ticking. Seriously. In a week or so, one of your dependencies is turning evil. You don’t know which one. You need to update it along with a bunch of others. Tick… tock… tick… tock…
*) I have no idea what’s going on exactly, the entire thing is a ludicrously complex black box as far as I’m concerned. Other people have reported the same issue, and it’s still open. Nobody from the dotnet core team has any clue what to do.
I’m really starting to like this thread: the tone has improved, I’ve gotten my breathing under control again, and there are lots of deep, insightful discussions.
(I’m on mobile, so I’ll keep it brief-ish)
UTF-8 vs UTF-16: all of the systems that chose UTF-16 have come to regret it. It seemed like a good idea at the time, because UTF-16 used to be a fixed-length encoding. Now that it isn’t anymore, UTF-8 is better in all respects.
I admire the guts windows and java showed in adopting unicode early, but history has not been kind to their “first-mover” status.
I am glad that Rust chose the future-proof path with UTF-8, even though this means shimming and OsStr… And thanks to the good, generic From/Into shim-trait infrastructure, that shimming will remain as simple as possible, within the limits of the existing, divergent platforms/environments.
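As a concrete illustration of that shimming: std’s OsString is the boundary type between guaranteed-UTF-8 Rust strings and whatever the platform actually stores, and the conversions are explicit and fallible in exactly the direction that can fail:

```rust
use std::ffi::OsString;

fn main() {
    // &str -> OsString is infallible: valid UTF-8 is always representable
    // in the platform-native encoding.
    let native: OsString = OsString::from("héllo");

    // OsString -> String is fallible, because an OS-provided string
    // (a file name, an env var) may not be valid Unicode at all.
    let back: String = native.into_string().expect("valid UTF-8");
    assert_eq!(back, "héllo");
}
```

The asymmetry is the whole point: the type system forces you to decide what to do with ill-formed platform strings instead of silently corrupting them.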
Trustworthy packaging is definitely a concern, but the infrastructure surrounding cargo is very future-proof.
For example, the entire crates.io index is a git repository, meaning that it can be cryptographically verified. Signing it is actively being discussed (link to follow).
2-factor login for crate authors is possible if you choose to authenticate via GitHub, and there is an ongoing internals discussion on making 2FA required for publishing. (Inspired by the NPM eslint attack.)
Crate signing would definitely be cool, and is only waiting for someone to put in the hours to implement it. Thanks to the index being a git repo, the signing public key can even be distributed securely.
The crates team is strongly aware of their responsibility, and has been discussing alternatives like the cryptographically secure “TUF” since the get-go.
Their security-mindedness personally inspires a lot more confidence in me than NPM’s ad-hoc track record (and with npm’s Ashley Williams (ag_dubs) being a prominent part of the Rust team, you can bet that the NPM lessons are not lost on Rust).
Crate discoverability is indeed not yet optimal, and this is known in the community and in the team. See for example the current experiment with crates.rs (Todo:link)
Searching and ranking are being investigated, but since it’s mostly a social problem, not a technical one, we are talking “harder-than-NP-hard hard”…
With regard to the crate discoverability complaint, crates.rs (announced on this forum here) makes a good attempt at improving this in a highly performant way.
Adding a maintenance quality metric to the ranking algorithm would probably address the only complaint about the crate ecosystem that it does not already address. Something like checking whether the crate compiles with the latest stable compiler: if it compiles, it gets a maintenance score of 1.0. If it does not, the score would be reduced according to the date of the last version update – 0.9 if the last version shipped a month before the last compiler update, for instance, but 0.1 if it was two years ago. Use this score as a multiplier for the current “popularity” index.
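A rough sketch of that multiplier, using the thresholds above with a simple linear decay in between. Nothing like this exists on crates.io today; the function name, parameters, and the interpolation between 0.9 and 0.1 are all made up for illustration:

```rust
/// Hypothetical maintenance multiplier for a crate-ranking algorithm.
/// Compiling on the latest stable compiler gives full marks; otherwise
/// the score decays with the age of the last release.
fn maintenance_score(compiles_on_stable: bool, months_since_update: u32) -> f64 {
    if compiles_on_stable {
        return 1.0;
    }
    match months_since_update {
        // One month stale (or less): barely penalised.
        0..=1 => 0.9,
        // Two years or more: almost ignored.
        m if m >= 24 => 0.1,
        // Linear decay between the two thresholds.
        m => 0.9 - 0.8 * ((m - 1) as f64 / 23.0),
    }
}

fn main() {
    assert_eq!(maintenance_score(true, 99), 1.0); // compiles: always 1.0
    assert_eq!(maintenance_score(false, 1), 0.9);
    assert_eq!(maintenance_score(false, 24), 0.1);
    let mid = maintenance_score(false, 12);
    assert!(mid > 0.1 && mid < 0.9); // somewhere in between
}
```

The final ranking would then just be `popularity * maintenance_score(...)`, so an abandoned crate fades out gradually rather than being delisted outright.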
I know I can sound… harsh, but it’s “tough love”. I like the Rust project and I want to see it succeed, because C++ is long overdue to have real competition.
I think what a lot of open source projects are missing is someone loudly saying “No!”. Too many people patting each other on the back for a “job well done” when the job isn’t done leads to very happy people pleased to produce something that ultimately is a failure when faced with reality.
For an example of how open source can work, look at Linux. In my humble opinion, the single most important thing that makes Linux successful is Linus Torvalds. He’s brutally harsh in his feedback, infamously so. People constantly criticise his language, when he is simply stating facts. Often these are uncomfortable facts people don’t like to hear, but must hear to have any chance at success. Some bars are set very high, such as “entry into the kernel”, but people are sloppy and lazy. Linus points out that this is unacceptable. People don’t like it, but that’s not the point, because the computer doesn’t care how you feel about the quality of your code.
The rust-lang forum helpfully suggested some related topics, which shows that I’m certainly not the only one with such issues:
To give an example, on Saturday I tried to read and write from a TcpStream using a buffered reader and buffered writer. Using the examples in the docs this was impossible because once the writer was constructed the reader could no longer use the stream.
This kind of “difficult to assemble layers of stream processing” commentary ended up being the bulk of this topic…
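(For the record, the standard workaround for that specific TcpStream complaint is `TcpStream::try_clone()`, which yields a second handle to the same socket so a `BufReader` and a `BufWriter` can coexist on one connection; `&TcpStream` also implements Read and Write. That this is not obvious from the docs is part of the complaint. A minimal self-contained sketch:)

```rust
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // Loopback echo server on an ephemeral port, so the example is
    // fully self-contained.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let server = thread::spawn(move || -> std::io::Result<()> {
        let (stream, _) = listener.accept()?;
        // try_clone gives a second handle to the same socket:
        // the reader and writer no longer fight over ownership.
        let mut reader = BufReader::new(stream.try_clone()?);
        let mut writer = BufWriter::new(stream);
        let mut line = String::new();
        reader.read_line(&mut line)?;
        writer.write_all(line.as_bytes())?;
        writer.flush()?;
        Ok(())
    });

    // Client side: same pattern.
    let stream = TcpStream::connect(addr)?;
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut writer = BufWriter::new(stream);
    writer.write_all(b"ping\n")?;
    writer.flush()?;
    let mut reply = String::new();
    reader.read_line(&mut reply)?;
    assert_eq!(reply, "ping\n");

    server.join().unwrap()?;
    Ok(())
}
```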
And, a topic created by yourself:
I’m trying to learn Tokio and finding it too complicated. The problems for me are:
The sheer number of types involved.
Choice of names for the types is such that they don’t indicate the intent clearly.
Also at a coarser level, many things seem non-intuitive. For example, why is a ServerProto, which is meant to represent a protocol, creating a transport? There are many such design choices which are non-intuitive.
Also a very similar point.
I see a bunch of people are politely saying the same thing over and over.
Has it worked?
Rust is, what, 8 years old now, and the 1.0 release was 3 years ago! Yet it is still obscenely difficult to do really basic things with the language, like read and write a socket. Please excuse my language when I say: “What-the-f***!?”
The main problem with brilliant jerks such as Linus Torvalds is that they are not composable.
Put one of them in a room with a polite person, and they will quickly exhaust that person until she leaves.
Put two of them in a room, and hell breaks loose as they start yelling at each other over the most trivial matters.
Ultimately, having a brilliant jerk around greatly limits your ability to assemble significant amounts of manpower, which is the name of the game in software.
Complaining, whether politely or impolitely, is ultimately a very ineffective way of moving things forward in a software community. If you really want something done, the most effective way is almost always to go and do it yourself.
Time is too precious a resource to expend sizeable amounts of it in Internet arguments.
The Rust project specifically rejects the idea that you must be mean in order to give feedback. It’s not only a moral thing; it ends up distracting from the actual feedback, making it harder to sort out what’s vitriol and what can actually be done to improve things.
See this very thread. The extra arguing has made many, very offtopic comments, rather than actually figuring out what could be done to improve things. It took a while to get back into the topic.
Criticism: good. Being a jerk while giving it? Not good.
As Hadrien and Steve have already pointed out, we are trying emphatically to avoid having “loud” dictators in the Rust World.
Obnoxiousness alienates, and not even a truly brilliant, once-in-a-thousand-years genius can outcompete a dozen merely-smart people who are working together at a normal, pleasant blood-pressure level.
Tokio composability has been the singular focus of the hugely-breaking 0.2 rewrite, with gigantic usability improvements.
Another equally big leap forward is to be expected after impl Trait.
So obviously “people politely repeating requests” has done exactly what was required, with a pleasant experience for the implementors, who will stick around because of it and still enjoy improving their work, rather than taking their ball and going home because their first API design (in a revolutionary, never-seen-before ownership-based environment, where no established wisdom exists) was “criticised” with slurs. (Edit: deleted uncalled-for criticism of other communities by name, thank you for calling me on it @burntsushi)
There is a reason I am a rust-enthusiast, and the excellent examples of good behaviour (unfortunately not my own…), are a large part of it.
@juleskers I’m strongly in favor of encouraging others to work together respectfully, but let’s do it on our own terms and avoid unstructured critique of communities/projects/individuals that work differently than us. Not only is it uncouth, but it’s just going to inspire more off-topic debate instead of focusing on what’s important: in Rust spaces, we’re kind and respectful to each other.
I’m hopeful that further discussion in this thread can be productive. Enough folks have reminded @peter_bertok to be kinder with their words. Let’s try to avoid further meta discussion.
I’m just an outsider looking in, with no skin in the game. Feel free to ignore me. 8)
Please allow me a chance to clarify the (very off-topic) point that I was trying to make; I don’t want to be seen as advocating for more rudeness, because obviously that’s not constructive…
My entirely unscientific observation over decades is that successful languages seem to be usually designed by surprisingly few key people, providing “a clear design direction”, at least initially. E.g.:
C - Dennis Ritchie
C++ - Bjarne Stroustrup
Python - Guido van Rossum
Perl - Larry Wall
Java - James Gosling
Ruby - Yukihiro Matsumoto
I agree with you that Linus-style leadership is probably not the best example to use, because he can definitely rub people the wrong way. So instead, consider the list above, because it’s more apt to Rust anyway. Are any of those people rude? Brash? I don’t think so, or at least not infamously so, like Linus. Still, I would argue that their clear leadership was required for the success of their languages.
Notably, most of those languages have changed over time, being taken over by cooperating committees or communities with no clear leadership. In particular, C and C++ have stagnated and wandered around aimlessly for over a decade because of the incompatible requirements of the various vendors all having their say.
It’s entirely possible that Rust can develop to be a successful, widely used language that isn’t relegated to some specialised niche. I’d love it, if this happened! I’m also watching the community-driven process with absolute fascination. Maybe it can work! It’s certainly educational to watch it all unfold in real time. To my knowledge, no mainstream language has ever started out community-driven and succeeded. Maybe Rust can pave the way…
A side-note on tokio improving: coincidentally, this was just posted today:
In particular, this bit is impressive:
For a long time, many users have asked us to remove the Error type from Future, because they had non-IO use cases in which the Error type was inappropriate. We’ve come around to agreeing with them; having the Error type would require that async functions always return a Result, and would not enable non-IO use cases for asynchronicity (such as lazily and concurrently evaluated pure computation).
So in this case at least the community process seems to have worked! The “pollution” of the Future trait with std::io::Error was one of my main issues with its design, and it seems that similar feedback by others with a similar opinion has had positive impact. 8)
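For readers who haven’t followed the futures discussion, the change amounts to this. These are simplified, illustrative trait definitions written for this post, not the exact historical futures-0.1 or std signatures:

```rust
#[allow(dead_code)]
enum Async<T> {
    Ready(T),
    NotReady,
}

// futures 0.1-era shape: every future carried a built-in Error type,
// so pure, non-IO computations had to invent an error type anyway.
#[allow(dead_code)]
trait OldFuture {
    type Item;
    type Error;
    fn poll(&mut self) -> Result<Async<Self::Item>, Self::Error>;
}

// The direction std took: a single Output type. Fallible futures use
// Output = Result<T, E>; pure computations just use Output = T.
// (`poll_ready` is a stand-in for the real Pin/Context-based poll.)
trait NewFuture {
    type Output;
    fn poll_ready(&mut self) -> Option<Self::Output>;
}

// A pure computation as a "future", with no error type in sight.
struct Ready(i32);

impl NewFuture for Ready {
    type Output = i32;
    fn poll_ready(&mut self) -> Option<i32> {
        Some(self.0)
    }
}

fn main() {
    let mut f = Ready(42);
    assert_eq!(f.poll_ready(), Some(42));
}
```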
Rust didn’t start out community-driven either – it was a single Mozilla researcher’s “pet” project. It’s important to keep in mind that when the languages you listed started out, it was somewhere between hard and impossible to have a community; the internet wasn’t as ubiquitous as it is today, let alone enablers like GitHub. If it had been, perhaps those languages would’ve evolved differently as well, possibly for the better.
I also don’t think Rust is as community driven as you make it sound. There is a core team, that while open to outside opinions and contributions, ultimately decides on changes and steers the ship.
Your input is clearly valued by the Rust community. What is needed, however, is to turn your views into actionable points that can be utilised to improve the direction of Rust. Prior to the 2018 edition release, this discussion is rather pertinent.
Please consider writing up pre-RFCs or whatever on a handful of the most important additions and modifications (actionable detailed items that can be executed) that would be needed to ensure that the dystopian future of incompatibility of Rust packages you described does not happen. That will serve to turn your knowledge into practical currency to further Rust constructively.
I disagree that a single individual/core team is necessary to drive a successful language; instead, a unified vision and mission statement is required. In addition, the willpower needs to exist to recognise that not everyone can be satisfied by a single language, and that this sometimes means a feature/change should be rejected purely on the grounds that there are already viable alternatives better suited to some people’s needs – especially if the feature/change is not aligned with the chosen mission statement of Rust.
Focus on a common vision is imperative to ensure that a common focus is maintained in the core community since doing a few things extremely well is repeatedly shown to be better than doing many things averagely. The Rust roadmaps have shown to be a good solution to unifying the community, and could perhaps eventually morph into an overarching “Rust will always do x,y,z well, and anything that subtracts from that is not ok”.
Your points on supply-chain attacks against Cargo are highly relevant to the perceived Rust vision of security and could be a viable core focus of Rust. Why not develop the counter-measures further and lead the improvements?