Best way to start programming for GPU in Rust?

I’ve done some research on the web, but the only places people seem to discuss the topic boil down to a blog post (this one) and a couple of Reddit threads (this and this). There are several libraries to choose from at the moment, but most of them are considered too “under-developed”?

I have quite a lot of computations to perform, and just forcing the CPU to run at 100% via Rayon seems somewhat sub-optimal, considering I have an Nvidia GeForce GTX 1050 Ti graphics card built into my laptop. Can you recommend any fast and efficient (Rusty?) way to start involving the GPU alongside the CPU for mathematical computations over large amounts of data?

Any libraries in particular?


If you find the current libraries lacking in some way, you can use a more general solution like a compute shader, though that’s not particularly “rusty” per se. For example, a DX10/11 compute shader, which can be reached through the winapi crate. I’m not that familiar with OpenGL, but I believe there is an equivalent there too, which probably has some rusty bindings available.

Edit: You may also want to consider the magnificent vulkano crate which is an excellent binding for Vulkan.

Emu seems to be an interesting library for GPU computation and data processing.


The fact they are “magnificent” and “excellent” really makes me want to dive in.

Forgive me if I’m a bit pedantic here, but what makes them so good, in your opinion? Any reason in particular why they should be preferred over the ArrayFire bindings? WinAPI calls seem inherently unsafe, so in terms of “Rustiness” I have to assume they’re not really a “competitor”.

Emu seems okay, but also somewhat… small at the same time. From the articles I’ve read about GPU programming, there are quite a lot of variables to take into consideration, and reducing it all to a handful of arithmetic operations seems like overkill. Of course, how the hell should I know, if I’ve never added two numbers together on a GPU…

From an intuitive perspective, my top-down ranking in terms of safety, stability, and completeness would be:

  1. ArrayFire
  2. Vulkano
  3. Emu
  4. WinApi calls

Do you think Vulkano is superior?

Once again, forgive my impatience here; I just want to make sure I’m diving into the right waters, as this GPU thing, with all the advantages it might have, will probably take a while for me to figure out, and swimming in the wrong part of the “lake” would only make it harder.

Or that might just be a feeling of mine that wouldn’t make sense to anyone who’s actually been programming graphics cards for a while. Do tell me if I’m wrong here.


A while back, I was writing deep learning kernels for Nvidia GTX 1080 Ti.

In the end, my favorite solution was:

  • write raw kernels in CUDA
  • use RustaCUDA for the Rust <-> CUDA glue.

:sweat_smile: Sorry, I think the words I used were a bit misleading. I meant that Vulkano has quite a bit of momentum, so to speak, and I’ve heard lots of good things about it. I’m only assuming it’s really good from looking at the docs, because they’re very well written, and because there are periodic posts about Vulkano on here.

I suggested winapi calls because that’s what I used to use, and what I’m most familiar with, so it’s just what I can verify will work, and work well.

Don’t be discouraged by the current library standings for GPGPU; a little can go a long way with compute shaders.

I am personally not as enticed by CUDA because of the toolkit and everything else it requires, but there’s no reason not to give it a go.

There are also compute shaders with wgpu-rs.

Awesome, many thanks for the advice, I think I know where to start now.

I’ve done it using OpenGL and glium in this project: https://github.com/thezoq2/locksort