Negative views on Rust (not mine!)

Again: not true. Behold:

Python 3.7.3 (default, Jan 22 2021, 20:04:44) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 2
>>> b = 2
>>> a is b
True
>>> a = 12345678901234567890
>>> b = 12345678901234567890
>>> a is b
False

The difference doesn't matter to people who don't care about bugs in the programs they are writing and who just want them to run. (CPython happens to cache small integers, -5 through 256, which is why is returns True for 2 but not for a big integer; other implementations are free to behave differently.) The fact that a program which worked on a big 64-bit box may stop working on a tiny MicroPython wouldn't bother them one jot: yes, it no longer works, but so what? We just need more tests!

But, then… if you just want to run the program and don't want to make sure it's correct — again: why would you use Rust in the first place? There are lots of other languages which you may use! Many are much simpler than Rust (as long as you want to run the program, not make sure it's correct, that is).

6 Likes

OK. It makes some observable difference if you dig deep enough because there are ways to check.

My real point was that it doesn't matter to programmers, even if there is some way to detect these implementation details. Python doesn't even specify what the is operator will return in the above cases.

For immutable types, which include int and str in Python, and int, Integer, and String in Java, it doesn't really matter whether you have a local variable on the stack or an indirection, because it behaves the same way as an indirection unless there is some way to access the variables after they go out of scope.

In C++ and in Rust it matters because variables on the stack end their lifetimes when the stack frame disappears, and you may still try to hold references to them, while in Java and in Python you can't possibly access these local variables after they go out of scope because there is no mechanism to do so. You can copy the local variables and keep the copy after the scope ends, but that's not accessing the originals, so it doesn't matter whether the original stack frame has disappeared or still lives somewhere on the heap.

Essentially, Python, Java, and JavaScript don't have the & operator for local variables, which is really what makes stack lifetimes suddenly important. That's why the stack may be hard to understand for people coming from these languages.
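To make that concrete, a minimal sketch of what the & operator forces the compiler to care about:

fn main() {
    let r;
    {
        let x = 42;  // `x` lives only as long as this inner scope
        r = &x;      // error[E0597]: `x` does not live long enough
    }
    println!("{r}"); // `r` would dangle here, so Rust refuses to compile
}

In Python or Java there is simply no way to write the equivalent of r = &x, so the question never comes up.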

2 Likes

You could just as well say the same for Rust, then. The compiler could silently box all your locals and transparently deref them. (LLVM would probably remove the allocations as an optimization in most cases if it did, though.)

This is an unhelpful distinction, IMHO. It's like a discussion of whether something is "high level": something that's just about defining terms, not something that increases the enlightenment of the reader.

6 Likes

That was a big one for me, too! I used Go for larger projects before I started using Rust, and it felt "meh" at the time. Error handling wasn't the best, but Go had a lot of pros that I couldn't ignore. Then I tried Rust, which had all these pros + fixed the error handling issue + other pros. I couldn't bear to see large Go codebases again.

2 Likes

Except that without the stack you couldn't adequately track resources; thus you get try-with-resources in Java, with in Python, and defer in Go.
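For comparison, a minimal sketch of how the stack does that bookkeeping in Rust with no extra construct (the file name is hypothetical):

use std::fs::File;
use std::io::{self, Read};

fn read_config() -> io::Result<String> {
    let mut file = File::open("config.toml")?; // owned by this stack frame
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    Ok(contents)
} // `file` goes out of scope here and Drop closes it: no defer, no with

fn main() {
    match read_config() {
        Ok(text) => println!("{} bytes read", text.len()),
        Err(e) => eprintln!("could not read config: {e}"),
    }
}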

It's not that I don't believe in the ability of someone to create a language which includes both a lifetimes/ownership/borrow system and a GC/try-with-resources/with/defer system. Stranger things have been made.

The problem lies in a different place: if you do that, then the result would be hated by Rustaceans and GC lovers both!

It's not a coincidence that existing GC-related crates are used almost exclusively when you have to write an extension for a GC-based language. There the need to deal with the GC is essential, not accidental: you are dealing with a GC-infested language, you have to support the GC somehow, end of story. And in that case, at least, it's clear who wins: Rustaceans have to deal with both the complexities of GC and the complexities of the ownership/borrow system, but at least the target users can continue to pretend they are living in a world where the stack doesn't exist and memory is limitless (just don't forget to buy 10x as much of it as you'd need for a carefully programmed program).

But if you add GC to Rust itself… Rustaceans would need to learn new syntax forms which solve problems that were not problems before the introduction of GC, and GC lovers would still need to learn about the stack/lifetimes/ownership/borrow system!

So who and what would win?

32 posts were split to a new topic: Negative views on Rust: panicking

That first approach is also how Rust may be doing it; it just so happens that the compiler / linker may be laying out all these strings together. In C:

static char const * const msgs[] = {
    [ERR1] = "long message for err1",
    [ERR2] = "but message for err2 can be longer",
    [ERR3] = "or message for err3 can be short"
};

is the same as (modulo string ordering):

static char const msg1[] = "long message for err1" /* + \0 */;
static char const msg2[] = "but message for err2 can be longer";

static char const * const msgs[] = {
    [ERR1] = msg1,
    [ERR2] = msg2,
    …
};

That means that msg1 and msg2 (and so on) are perfectly allowed to be laid out contiguously in static / global memory:

// by the linker:
static uint8_t const GLOBAL_RO_MEMORY[] = {
    …

    'l', 'o', 'n', 'g', ' ', 'm', 'e', 's', 's', 'a', 'g', 'e',
    ' ', 'f', 'o', 'r', ' ', 'e', 'r', 'r', '1', '\0',

    'b', 'u', 't', ' ', 'm', 'e', 's', 's', 'a', 'g', 'e', ' ',
    'f', 'o', 'r', ' ', 'e', 'r', 'r', '2', ' ', 'c', 'a', 'n', ' ',
    'b', 'e', ' ', 'l', 'o', 'n', 'g', 'e', 'r', '\0',

    …
};

And it turns out Rust may do the same thing, but with the big difference that Rust's strings are not null-terminated, so a C-string "scan" that started at long message… would simply not stop until reaching a null byte arbitrarily far away in memory, hence "exposing" other officially-unrelated-even-if-laid-out-contiguously-by-the-linker read-only memory, such as the other language strings.
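A minimal sketch of that difference (the printed addresses will vary from run to run): a Rust &str is a pointer plus a length, so no terminator byte is needed to know where one string ends:

fn main() {
    let a: &'static str = "long message for err1";
    let b: &'static str = "but message for err2 can be longer";
    // Each &str carries its own length; there is no '\0' sentinel,
    // so the two literals may sit back to back in read-only memory.
    println!("{:p} len={}", a.as_ptr(), a.len());
    println!("{:p} len={}", b.as_ptr(), b.len());
}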

To further prove my point:

6 Likes

If you are writing simple single-threaded apps, I might agree somewhat, but if you are manipulating collections in a multithreaded environment, I'd argue Rust's additional guarantees are worth it. Just "fixing bugs as they appear" can be a giant pain in the behind when it's a multithreaded heisenbug.
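A minimal sketch of what those guarantees buy: the compiler rejects unsynchronized sharing outright, so this class of heisenbug can't be written in the first place:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let data = Arc::clone(&data);
            // Dropping the Mutex and pushing from several threads at once
            // would be a compile error, not a late-night debugging session.
            thread::spawn(move || data.lock().unwrap().push(i))
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("{:?}", data.lock().unwrap());
}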

6 Likes

Actually, the original definition of systems programming wasn't concerned with those details. They were just a side-effect of the hardware limitations of the time. That's why Go satisfies the original definition of a systems language in the Internet age.

According to the people who coined the term, systems programming is about building infrastructural components which need to have long lifecycles in which they will be maintained by shifting teams of people.

It's defined by the need to manage complexity.

2 Likes

There is a common misconception that "system programming" is all about embedded system programming, micro-controllers and all that.

As the article ssokolow linked points out, the definition of the phrase has been blurred for a long time.

I think a lot of it comes from the good old days when there were those who worked on the OS and utilities of mainframe operating systems; the software they created was "system software". And then there were all the other users of the machines, whose software was not "system software".

It's a pretty meaningless term today. Except it has implications, to me, of requiring performance, correctness, reliability. That one can write compilers and interpreters, etc., in a "systems programming" language. That one can access hardware. And so on.

Of course Rust is very useful outside of so-called "systems programming".

1 Like

Under this definition, I would argue that basically every piece of software where there is hope that it will still be used a year after its release will, if done well, involve some systems programming.

This would include every software startup (even though most will fail, they all have ambitions to last longer), and every open source project that hopes to acquire a meaningful user base.

The above observations suggest to me that Rust will likely be successful in those areas proportionally to how suitable it is for that definition of systems programming.

1 Like

That depends. It's always a trade-off and there are plenty of people who feel that languages like Go or C# are better balance points.

It's still pretty meaningful. System software is software which has to work.

Let me show the difference using two systems (not operating systems, just two systems): good old Turbo Pascal and ChromeOS. One is a very old piece of application software written in a systems language (because there were no other suitable languages back then); the other is a modern OS written in a non-systems language (JavaScript, even if C++ is also used for certain components).

What happens when you open too many files in Turbo Pascal? You get a nice message about lack of memory, and it's up to you to decide which file you would close. If any. Nice, consistent, pleasant experience.

What happens when you open too many tabs in ChromeOS? The whole thing freezes, you may try to open the task manager, and if you are really lucky you may manage to save your work… but there are no guarantees. Every time nothing awful happens you may pat yourself on the back and pray to your gods.

That is the difference between a systems language and an application development language. It doesn't mean you have to ensure that every component uses no more than X KB and takes no more than Y ns. But it does mean that such calculations are possible. If a language can't offer any guarantees then it's not a systems language (note how Go is not a systems language but a “cloud infrastructure language”, according to Pike).
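For what it's worth, Rust lets you write the Turbo Pascal kind of answer; a minimal sketch using the standard library's try_reserve (the size is arbitrary):

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    // Ask for the memory up front instead of dying mid-flight.
    match buf.try_reserve(1024 * 1024 * 1024) {
        Ok(()) => println!("reserved 1 GiB"),
        Err(e) => println!("not enough memory, close a file first: {e}"),
    }
}

Whether a program actually takes such fallible paths is a design choice, but the language makes the calculation possible, which is the point above.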

If your users are OK with losing their work from time to time, then it may still be cheaper and easier.

1 Like

That is what I said when I was going on about requiring performance, correctness, reliability.

As the saying goes, it's turtles all the way down. As far as a web developer crafting a web site in PHP is concerned, all that stuff he relies on, all those services and micro-services written in Java, Go, JavaScript, etc., are "system". They were programmed by "systems programmers" to provide the "system" he relies on.

Lower down, a JavaScript developer might regard all that C++ his JS interpreter is written in as "system", programmed in a "systems programming" language.

Before one can define "systems programming language" one has to define "system". Well, we have systems at all levels of the stack. And hence the term is bandied around and means something different to everyone who uses it.

Personally, I think that if it does not compile to actual executable instructions it can never be a systems programming language.

On the other hand, I regard Rust as much more than a systems programming language. I write our applications in it. I use it like JS under node.js.

That's where “cloud” muddies the water: you can't trust the hardware when you program for the cloud, which means you don't need and don't want classic “systems languages”. Everything can crash and burn at any time, and you just build a system which copes with that. There is no need for “systems languages” in that picture.

And as Matsakis noted: that doesn't work on the client. Yes, hardware may fail even on the client, but that's a rarely-happening disaster which you don't need to mitigate. On the flip side, you can't rely on any mitigation procedures either: if your system doesn't work, it becomes immediately visible, and there is no one to save your hide and move the user's request to another server. With embedded, the situation is even worse: if ChromeOS crashes, the end user may wait till it restarts and redo the work; he would be angry, most likely, but it wouldn't be a disaster. If an embedded OS crashes, there is often no one to save the device from crashing physically.

That's how Rob Pike arrived at the idea of a “systems language with GC” (an oxymoron, if you think about it) and then it became “a cloud infrastructure language”. Simply because the cloud doesn't need a systems language, by its very nature.

Yeah. And that's both a blessing and a curse.

Rust is nice enough that it's really pleasant to use in many other cases not related to system software development. But then you arrive at the point where all the design choices which were initially made to make systems programming possible stop making sense, and people beg to change them.

IDK what to do about that really: I write both system software in Rust and also various things like web servers, but because I need Rust for system software I'm happy not to have GC in it when I write a web app.

But what should we say to people who want to write a web app, try to use Rust, and then start demanding GC? All these strict guarantees, which are important when Rust is used to write code that has to work, are not interesting to them; yet they have to suffer because of these requirements anyway!

1 Like

I have to disagree there. We have applications running in the cloud. They collect data from various places, they massage it in various ways, they distribute it to browsers over websockets, to databases, to other client systems.

We would very much like that this does not fail. To that end our cloud services:

  1. Are written in Rust so that we don't have to worry so much about our own silly mistakes.

  2. Use the NATS messaging system, written in Go, to provide efficient, reliable communications between parts. With multiple redundant systems to ensure continuous service.

  3. Use CockroachDB, also written in Go I believe, again to provide efficient, reliable storage. With multiple redundant systems to ensure continuous service.

  4. Underneath all that of course is the Linux kernel and OS we are running on, and under that no doubt machine virtualisation.

In short, our cloud efforts depend on a whole lot of "system" written in a "systems programming language" (we can argue about Go, but really all of that could be Rust as well, with some useful gains).

If engineering fault-tolerant systems, with multiple redundancy, clustered data storage, etc. is not systems programming, I'm at a loss to know what is.

Tell them to learn to program. Well, at least to approach the idea of programming from a different angle. I'm sure that if they could be convinced to make a little jump in outlook, they would be very happy with far more efficient, responsive, and reliable web apps.

Cloud programming. It's almost the exact polar opposite of systems programming.

And they are built on top of hardware which is not reliable and fails constantly. And they use programs which may crash at any time. And all that is tuned to redo the work if failures happen. There are multiple levels of mitigations, but you don't try to avoid failures, you mitigate them. Simply because avoiding failures is impossible: hardware itself fails constantly.

A very, very different world from normal systems programming, where you try to avoid problems rather than develop ways to paper over crashes after they have already happened.

It's possible to write software for the cloud in Rust (and most cloud designs use the Linux kernel, which is extremely robust, not some fancy kernel written in C#). But all that is optional. When you have said “every node can stop responding at any time simply because the PSU decided to die”, you have entered an entirely different realm where the need for software to work reliably is much less important: sure, sometimes your code (written in Go or Java) may start to consume memory like crazy and then die… it's not a big deal if it doesn't happen too often: the same mitigation measures which are designed to fight dead PSUs work for OOM, too. Deadlocks? No problem: just ensure that the process is killed when they happen. And so on.

How? Their managers measure their efficiency on the basis of the time they need to close issues in Jira. The quicker they can close them, the bigger the bonus they may earn.

Rust makes that rather difficult, and the fact that there may be fewer Jira issues if you use Rust is rather hard to prove (even if it's true).

1 Like

That is exactly what I am disagreeing with.

I find cloud and embedded systems to have very similar requirements. For both of them:

  1. They are required to run forever.

  2. They are required to run correctly.

  3. They run on unreliable hardware. All hardware is.

  4. When you talk to people concerned with keeping customers on web pages, there are even strict real-time limits involved.

I spent many years working on safety-critical avionics systems. Guess what:

  1. The hardware was assumed to be unreliable. Hence the use of multiple redundant systems and multiple communication channels between them.

  2. Software was also assumed to fail at times. Hence the use of hardware watchdogs.

To my mind, the 4 computers and 16 processors of the multiple-redundant system of the Boeing 777 Primary Flight Computers (the ones that move all the control surfaces and react to pilot and autopilot input) don't look much different from the cloud services built and run by our little company or the huge systems of the likes of Google.

A PFC processor, or power supply, or connectivity can fail at any time. The aircraft has to continue flying. Just like typical cloud services.

I will happily call all of that "systems programming". The requirements and the solutions have a lot in common. Including choice of programming language.

"Cloud" is not all about some nerd knocking up a web page in PHP to run on a single instance on Digital Ocean or whatever and not being much worried if it goes offline or gets hacked occasionally. There is much more going on in Cloud systems.

Nope. Google doesn't even measure maximum latency. Only percentiles: 90th percentile, 95th percentile, 99th percentile…

Well… this may sound similar to what happens in the cloud. And if true, it makes that particular subset of embedded software similar to cloud development.

But the majority of the hardware and software is the total opposite: the hardware is considered infallible and the software must ensure it's infallible, too. One example: recently systemd got a new option, HandlePowerKeyLongPress, because, you know, arranging hardware support for that “4 sec = reboot” behavior is too costly for phone makers, TV makers, smart lamp makers. They want to remove that thing (and they removed the ability to disconnect power and the reset button long ago, so if the kernel misbehaves there is literally no way to reboot the thing… you would need to visit a service center).

Does it look even remotely similar to cloud to you?

I got your point. Maybe. But that one is very much an exception. And Boeing spends tremendous sums to eliminate it and make it like all other embedded systems: cheap == right.

What I gather from your definition of "systems programming" is that:

  1. The software runs on a single core processor. Say a micro-controller.
  2. Said hardware is 100% reliable.
  3. The software has to be 100% correct and reliable.
  4. There are hard real-time requirements. Likely down in the milliseconds/microseconds.
  5. You have total control of all the software running on the machine.

I have to say that while that is one end of the spectrum and there are thousands of embedded systems engineers working on such things, I don't think that is what most people think "system programming" is.

Seems we are arguing about a matter of degree: the degree to which software has these properties and requirements, or not.

I don't think it worth arguing back and forth any further about it.