Can Rust threads run simultaneously on a multi-processor computer?

I wonder if Rust threads can run simultaneously on a multi-core computer.
Usually, to switch to a thread, we have to make the other one(s) sleep or lock them with a Mutex, but is there a way to run threads actually simultaneously if I have a multi-processor computer?

If by threads you mean "actual" OS threads spawned with thread::spawn, then yes. But it all depends on the OS. If your OS schedules threads of the same process onto different processors (or processor cores), then yes; otherwise no. Rust itself has no control over how threads are scheduled.
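For reference, here is a minimal sketch of spawning such threads (my own illustration, not from the original post); whether they actually run on different cores at the same time is entirely up to the OS scheduler:

```rust
use std::thread;

fn main() {
    // Each call to thread::spawn creates a real OS thread; the OS decides
    // whether they run on different cores simultaneously.
    let handles: Vec<_> = (0..4)
        .map(|i| thread::spawn(move || println!("hello from thread {i}")))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```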

Thank you for the answer. And is there a way I can find out how my OS schedules its threads?

It depends on the OS. If it is an open-source OS (like Linux, BSD, ...), you can look at its documentation and even its source code. If it is not, it is a bit harder to tell. The system API can offer some hints, because it will usually contain functions to spawn new threads (which are internally called by Rust), and those are usually well documented. You can also experiment a bit: spawn several busy threads (e.g. infinite loops) and look at the CPU performance graphs; a core/processor executing a busy loop will sit at 100% utilization.
That said, all the common OSes schedule threads across multiple cores/processors. This includes Linux, OSX and Windows.
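A minimal sketch of that experiment (my own illustration, not from the original post), using time-boxed busy loops so it terminates on its own:

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Spawn a few busy threads; if the OS spreads them across cores,
    // each of those cores should show ~100% utilization in a CPU graph.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                let start = Instant::now();
                let mut x: u64 = 0;
                // Busy-spin for 30 seconds instead of looping forever.
                while start.elapsed() < Duration::from_secs(30) {
                    x = x.wrapping_add(1);
                }
                // Prevent the compiler from optimizing the loop away.
                std::hint::black_box(x);
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```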

3 Likes

You can find a library call that makes a system call to find out which CPU a thread is running on. This is (for demo purposes only!) demonstrated on the playground here: on varying runs, the threads run either both on the same CPU or on different ones. In this case it is surely running on a virtual machine, but that doesn't make much difference to how it works.

I'm thread ThreadId(1) on cpu Ok(0)
I'm thread ThreadId(2) on cpu Ok(1)
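The playground code itself isn't shown above, but a minimal reconstruction might look like this, assuming the nix crate on Linux (sched_getcpu is a Linux-specific call):

```rust
use std::thread;

fn main() {
    let handles: Vec<_> = (0..2)
        .map(|_| {
            thread::spawn(|| {
                // Ask the kernel which CPU this thread is currently on;
                // the answer may already be stale by the time it returns.
                println!(
                    "I'm thread {:?} on cpu {:?}",
                    thread::current().id(),
                    nix::sched::sched_getcpu()
                );
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```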

Note that a thread can migrate from one core to another at any time. The documentation for the getcpu function even underlines that the thread may already have migrated to another CPU by the time the function returns.

2 Likes

You are confusing two different things here.

The very point of (real, OS-level) threads is that they can and do actually run in parallel, physically at the same time.

Locking is not required unconditionally, "just because". That would make no sense. Locking is required when multiple threads want to access the same shared mutable resource. As long as threads don't access shared objects, or only read them, there is no need for locking, and threads can safely run in parallel.
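A minimal sketch of that distinction (my own illustration): read-only sharing needs no lock, while shared mutation is where a Mutex comes in:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Read-only sharing: no lock needed, threads can run fully in parallel.
    let data = Arc::new(vec![1, 2, 3]);
    let reader = {
        let data = Arc::clone(&data);
        thread::spawn(move || data.iter().sum::<i32>())
    };

    // Shared *mutable* state: here a Mutex (or atomic) becomes necessary.
    let counter = Arc::new(Mutex::new(0));
    let writers: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();

    println!("sum = {}", reader.join().unwrap());
    for writer in writers {
        writer.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap());
}
```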

4 Likes

I don't think it is necessary that they actually run in parallel. Linux will let you spin up many threads on a single-core, single-processor machine, but at any given time exactly one of them is active and running. On such machines, threads allow you to have things like an event loop, which processes events in a separate thread from the rest of the process (among other things, of course).

That's explicitly not my point.

Of course, you can create single-threaded, serial concurrency using threads. But there are many alternatives for that which are much better, lighter-weight, and easier to understand (e.g. async).

OS-level threads are primarily useful for achieving real parallelism, something that other concurrency constructs are simply not capable of doing.

I have a rather different perspective on the motivation for threads. For most of my career I have used operating systems that support threads on single-core machines. To this day a lot of threaded applications run on single-core machines. Threads/processes were first developed on single-core machines, like early Unix.

So what is the motivation for threads?

To me it comes down to the fact that people want to write multiple instances of "sub tasks" that basically look like this pseudo code:

do forever
    some_work
done

They want the effect that all those loops are running at the same time, even on one core.

If you write that in most programming languages, you get something that hangs on the first such loop it encounters; the others never get to run.

Preemptive threads allow all of the loops to proceed by multiplexing the processor between the threads that contain them. Tackling the problem without threads requires merging all the loops into one huge loop and implementing state machines that move their progress along, which makes for messy, unmodular, hard-to-maintain code. Using threads lets you split the various tasks up neatly, as in the sketch below.
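Here is a minimal sketch of that structure (my own illustration; some_work is a hypothetical stand-in for real work). Each sub-task keeps its own simple loop, and the OS schedules both so they make progress even on one core:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for a sub-task's real work; the sleep just slows
// the demo down so the interleaving is visible.
fn some_work(name: &str) {
    println!("{name}: did some work");
    thread::sleep(Duration::from_millis(100));
}

fn main() {
    // Each sub-task is written as its own plain loop; the OS scheduler
    // multiplexes the threads, so both make progress even on one core.
    let a = thread::spawn(|| {
        for _ in 0..5 {
            some_work("task a");
        }
    });
    let b = thread::spawn(|| {
        for _ in 0..5 {
            some_work("task b");
        }
    });
    a.join().unwrap();
    b.join().unwrap();
}
```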

Alternatively, we used cooperative schedulers, where these loops had to contain some call to an OS function that could swap threads around: perhaps an I/O function, perhaps just "suspend()" or "yield()".

Nowadays we have "async", which is kind of like the cooperative scheduling model. The difference, as far as I can tell, is that the scheduling is now built into the programming language, with features like "async" and "await", rather than being done entirely by some operating-system magic. I presume this is a rather new phenomenon, as creating the compilers is much more complex.
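For comparison, here is a minimal sketch of the same two loops in the async style (my own illustration, assuming the tokio runtime); each .await is an explicit point where the scheduler may switch tasks:

```rust
use std::time::Duration;

// A sub-task in the async style: the loop stays just as simple, but each
// .await marks a point where the scheduler may switch to another task.
async fn sub_task(name: &'static str) {
    for _ in 0..3 {
        println!("{name}: some work");
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
}

// `current_thread` keeps the runtime single-threaded, so the two loops
// interleave cooperatively on one OS thread.
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let a = tokio::spawn(sub_task("a"));
    let b = tokio::spawn(sub_task("b"));
    let _ = tokio::join!(a, b);
}
```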

1 Like

The motivation might have been concurrency once. However, nowadays one wouldn't typically use OS threads for single-core concurrency as the default choice.

If you are writing C, you probably would. In Rust, definitely not: async is far more ergonomic.

I think that is in large part true.

However, there are still billions of embedded systems running on single-core processors on top of multi-threading operating systems. There is a list of such OSes here: What Are the Most Popular Real-Time Operating Systems?

Most such systems are written in C or C++, where there is no choice: no async is available.

OK, and then? None of this contradicts my assertion that one doesn't automatically require locking as soon as one works with threads. I'm not trying to rewrite history; I'm trying to clear up the OP's confusion regarding the technical details of using threads.

1 Like

I agree. Atomics, locks, mutexes, whatever: they are only needed when threads share data.

1 Like

I'll try to provide a different perspective.

Threads are a (hardware-independent) programming model in which programmers are empowered to construct execution flows that overlap in time. Synchronization mechanisms are introduced for constructing correct programs.

Why use this model? On hardware that supports parallelism, such programs may get a boost in performance.

In practice, the concept of a thread in almost all modern languages is designed to leverage your hardware's parallelism. So the answer to the OP's question is definitely yes.

1 Like

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.