Heterogeneous data parallelization: Rust plans?

Hello, one technical domain in computer science (algorithm design and writing code) is becoming more and more important: parallel computing in heterogeneous hardware/software environments. With the AI boom, its importance can't be overlooked.

It appears that all the main solutions developed to achieve this goal of efficiency across heterogeneous devices are only C++ friendly: Vulkan, SYCL, CUDA.

Given that software designers tend to favor the most versatile language, what plan has the Rust Foundation laid out so that Rust doesn't end up downgraded compared to C++? Is there a Rust SYCL binding or something else in the pipeline?

In the race against more established languages such as C++, Rust is described as a very fast and safe horse, but does it stay that way along the whole track?


There are the obvious bindings to CUDA (I assume Vulkan and SYCL have bindings as well, but I am not sure).

Then there is rust-gpu, which focuses on writing graphics and compute shaders in Rust itself.
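
To give a flavour, here is a minimal sketch of what a rust-gpu compute kernel can look like (the kernel crate is compiled to SPIR-V with spirv-builder; the entry point name, workgroup size and buffer binding here are made up for illustration):

```rust
// Compiled to SPIR-V by rust-gpu's spirv-builder; the attribute macro comes from spirv-std.
#![cfg_attr(target_arch = "spirv", no_std)]

use spirv_std::glam::UVec3;
use spirv_std::spirv;

// Workgroups of 64 invocations; each invocation doubles one element of the buffer.
#[spirv(compute(threads(64)))]
pub fn double(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
) {
    let i = id.x as usize;
    if i < data.len() {
        data[i] *= 2.0;
    }
}
```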

Then there is wgpu, again with a focus on graphics and compute shaders, but on the host side of the API.
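
And the host side with wgpu looks roughly like this. This is only a sketch written against a wgpu 0.19-style API (descriptor fields move around between releases), using a WGSL shader for simplicity; buffer sizes and names are just for illustration:

```rust
// Cargo.toml (assumed): wgpu = "0.19", pollster = "0.3", bytemuck = "1"
use wgpu::util::DeviceExt;

// WGSL compute shader: doubles every element of a storage buffer.
const SHADER: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    if (id.x < arrayLength(&data)) {
        data[id.x] = data[id.x] * 2.0;
    }
}
"#;

fn main() {
    pollster::block_on(run());
}

async fn run() {
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no compatible GPU adapter found");
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("failed to create device");

    let input: Vec<f32> = (0..1024).map(|x| x as f32).collect();
    let byte_len = (input.len() * std::mem::size_of::<f32>()) as u64;

    // GPU-visible storage buffer the shader reads and writes.
    let storage = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("storage"),
        contents: bytemuck::cast_slice(&input),
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });
    // Staging buffer used to read the result back on the CPU.
    let staging = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("staging"),
        size: byte_len,
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });

    let module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("double"),
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: Some("double"),
        layout: None,
        module: &module,
        entry_point: "main",
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &pipeline.get_bind_group_layout(0),
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: storage.as_entire_binding(),
        }],
    });

    // Record the dispatch and the copy back to the staging buffer.
    let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
            label: None,
            timestamp_writes: None,
        });
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        pass.dispatch_workgroups((input.len() as u32 + 63) / 64, 1, 1);
    }
    encoder.copy_buffer_to_buffer(&storage, 0, &staging, 0, byte_len);
    queue.submit(Some(encoder.finish()));

    // Map the staging buffer and read the results.
    let slice = staging.slice(..);
    slice.map_async(wgpu::MapMode::Read, |r| r.expect("buffer map failed"));
    device.poll(wgpu::Maintain::Wait);
    let view = slice.get_mapped_range();
    let result: Vec<f32> = bytemuck::cast_slice(&view).to_vec();
    println!("{:?}", &result[..4]); // [0.0, 2.0, 4.0, 6.0]
}
```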

Regarding graphics and game development: A list of crates can be found here: https://arewegameyet.rs/

The same but for machine learning: https://www.arewelearningyet.com/

You can see for yourself that there is good progress, but also that a lot is still missing or could be improved.

The short version is that different computing environments are different enough that you generally need a different language to actually be efficient in them - or at least you need to program differently enough that you might as well be using a different one.

There's still good reason to want to lower the barriers between the different environments for CPU, GPU, browser, edge, server, embedded, and whatever else I'm forgetting; but Rust is, at least in theory, better situated for that than C++, given it has no_std, macros, and a consistent package system and build tooling.
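
As a toy illustration of that last point, a no_std core crate can hold shared logic that compiles unchanged for host, embedded, or (via rust-gpu) GPU targets; the crate name and the `std` cargo feature below are just an assumed setup:

```rust
// lib.rs of a hypothetical `compute-core` crate: no_std unless the (assumed)
// `std` cargo feature is enabled, so the same code can target the host CPU,
// an embedded MCU, or a GPU backend such as rust-gpu.
#![cfg_attr(not(feature = "std"), no_std)]

/// Element-wise a*x + y: the kind of kernel body that can be shared across targets.
#[inline]
pub fn axpy(a: f32, x: f32, y: f32) -> f32 {
    a * x + y
}

/// Host-only convenience that needs an allocator, gated behind the `std` feature.
#[cfg(feature = "std")]
pub fn axpy_vec(a: f32, xs: &[f32], ys: &[f32]) -> Vec<f32> {
    xs.iter().zip(ys).map(|(&x, &y)| axpy(a, x, y)).collect()
}
```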
