Rust is great! The system stays very smooth while compiling

I noticed that when I install ripgrep with cargo install ripgrep, the CPU is fully loaded, but I can still do things smoothly, like switching applications, coding, etc.
Check out the screenshot.

I'm very curious why Rust can behave like this, while builds in other heavy languages slow the system down very noticeably.

I think that you are not looking at the right system load indicator here. Linux is pretty decent at staying responsive under high CPU load. What kills it is high IO activity.

I routinely run my systems at 100% CPU activity without even noticing that something is happening in my interactive work. But as soon as a program starts reading from or writing to disk in a tight loop, everything becomes sluggish.
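
For concreteness, here is the kind of tight IO loop I mean, as a minimal Rust sketch (the file path is just a disposable placeholder). A pure CPU spin loop barely registers in interactive use, but this quickly makes the desktop sluggish once the disk is saturated:

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("/tmp/io_pressure_test")?;
    let chunk = vec![0u8; 1 << 20]; // 1 MiB of zeroes

    for _ in 0..1024 {
        file.write_all(&chunk)?; // usually lands in the page cache first...
        file.sync_data()?;       // ...then forcing it out generates real disk IO
    }
    Ok(())
}
```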

My understanding is that this is due to two things:

  • Linux is very bad at prioritizing latency-critical IO jobs over other IO jobs, or over raw disk throughput considerations (if memory serves me well, most distros even stuck with an interactivity-hostile FIFO scheduling algorithm until very recently; you can check which scheduler a device is using via sysfs, as shown in the sketch after this list).
  • The design of Linux's IO interfaces, which uses laziness all over the place, makes it very hard for applications to predict when they will block on IO and to take action to remain responsive when that happens.
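
Regarding the first point, the kernel exposes the active IO scheduler per block device in sysfs, so a few lines of Rust suffice to inspect it ("sda" below is a placeholder, substitute your own device). The active scheduler is printed between square brackets, e.g. "[mq-deadline] none":

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Each block device advertises its available and active IO schedulers here.
    let schedulers = fs::read_to_string("/sys/block/sda/queue/scheduler")?;
    println!("IO schedulers for sda: {}", schedulers.trim());
    Ok(())
}
```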

Note that although I refer to Linux here because it is what I am most familiar with, I suspect that other operating systems face similar issues. This is an area where we pay the price for using systems whose design dates back to the era of large batch processing systems and has been insufficiently reworked for single-user performance considerations since then.


This is an area where we pay the price for using systems whose design dates back to the era of large batch processing systems and has been insufficiently reworked for single-user performance considerations since then.

Any idea where I could learn more about these systems, why they were designed the way they were, and how they could be redesigned to better fit single-user cases? This all sounds super interesting.

When you run a multi-process task, the impact is bigger. Using Linux cgroups can tame some of the greed.
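
As a hedged sketch (assuming cgroup v2 mounted at /sys/fs/cgroup, sufficient privileges, and a hypothetical group name of my choosing), capping a greedy build at half a CPU looks like this:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    let cg = "/sys/fs/cgroup/build_jail"; // hypothetical cgroup name
    fs::create_dir_all(cg)?;

    // cpu.max takes "<quota> <period>" in microseconds:
    // 50000 out of every 100000 µs = at most half a CPU for the whole group.
    fs::write(format!("{cg}/cpu.max"), "50000 100000")?;

    // Move the current process (and thus its future children) into the group.
    fs::write(format!("{cg}/cgroup.procs"), std::process::id().to_string())?;
    Ok(())
}
```

cgroup v2 also exposes io.max for throttling disk bandwidth per device, which is closer to the IO sluggishness discussed above.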

My current reference book on operating systems is "Modern Operating Systems" by Andrew S. Tanenbaum.

The range of OS design topics which this book covers is just plain incredible, and I love that, as these are the kind of low-level programming topics on which I can easily geek out for weeks and still be left wanting more at the end. Moreover, the author is extremely good at explaining these complex topics, to the point of making them sound easy. For all these reasons, I would strongly recommend this book to anyone wanting to learn more about what operating systems do under the hood.

As far as limitations go, my review is bound to be a little unfair, since I have the 3rd edition from 2009, whose content is slowly getting old. There is a more recent 4th edition from 2014, which likely addresses some of my criticism. But essentially, the main thing I miss in Tanenbaum's book is coverage of more obscure or recent research in this area. I find that work equally interesting, but I cannot spare the effort of reading hundreds of poorly written research articles, and would greatly appreciate a more readable long-form summary. Here are some examples:

  • Capability-based security (EROS/KeyKOS/CapROS, Midori from Microsoft Research...)
  • New implementation languages (Microsoft Research strikes again: Midori, Singularity...)
  • Finer-grained user-space privilege isolation (Genode, but also "mainstream" Android in a way...)
  • Real-time operating systems (covered a bit by Tanenbaum, but I want a lot more of it)

Of those, the Midori developers in particular did the right thing, with project lead Joe Duffy writing up a very readable yet remarkably detailed summary of their research on his blog. I can never thank him enough for that.


It is one of my life dreams to go and build a purposely backwards-incompatible operating system, centered on (mostly) single-user computers that are used for content creation (laptops, desktops...). By being backwards-incompatible, such an OS would be able to break away from past decisions which are now known to be plain wrong, and to eliminate all of the stupidity which modern operating systems have accumulated over time in the name of fitness for "shared server" use cases. I think there is so much to do in this area, when it comes to task scheduling, security models, reliability, UI and UX... it makes my head spin to imagine all the possibilities.

I actually gave it a try when I was a student, but then full-time jobs caught up with me and forced me to give up on it, as they do not leave enough weekly spare time for such a complex project (context-switching to and from such a big project has a massive mental cost; you want to work on it in long and frequent bursts). Who knows, maybe one day I will manage to take my revenge on life and carry out research in this (IMO) under-developed area of computing again 🙂


When a program has heavy I/O, the system does become slow. I don't know exactly how, but when I install packages from the Arch Linux AUR (some packages need to be compiled), those C/C++ packages really take a lot of resources. I can feel the difference, even though I can't tell exactly what it is, and I don't know how to benchmark it.
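
One way to put numbers on that feeling (assuming a kernel with pressure stall information, i.e. Linux 4.20 or newer) is to read /proc/pressure/* during a build. These files report what share of recent wall time tasks spent stalled on each resource:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    for resource in ["cpu", "io", "memory"] {
        let psi = fs::read_to_string(format!("/proc/pressure/{resource}"))?;
        // Each file contains "some" (and, for io/memory, "full") lines with
        // avg10/avg60/avg300 percentages plus a cumulative total in microseconds.
        println!("{resource} pressure:\n{psi}");
    }
    Ok(())
}
```

If the io numbers spike during the C/C++ builds but stay low during cargo builds, that would match the explanation above.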

Anyway, I like Rust!
