This is not really a tutorial, but I've been learning Rust, and as a first "get my hands dirty" project I've created a Game of Life application that does the logic on the GPU. I documented my experience in a blog post that talks about how to approach a problem like this.
I'm mainly a C++ programmer, so any criticism of the code or of statements I make is welcome, so I can correct them :).
No criticisms or issues that I can see (it's a tutorial, not a "this is state of the art" claim). Probably very helpful to those who haven't used OpenGL with Rust before (also because an alarming number of people don't know about glfw!).
The only question I have is: why not a compute shader? You've got it in your image (as a solitary node in the flow), and it's how I'd approach it. If you're not familiar, Anton Gerdelan did a quick article on them. I'm not sure they'd give you any real performance gain, but with a compute shader doing the work you could offload it to another thread (shared data across contexts; the link is to the overview of contexts, which is easier to read than the spec), update constantly, copy to another texture, and do all sorts of fun/strange/experimental things. (Of course you can single-thread it, as in Anton Gerdelan's tutorial.)
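To make the idea concrete, here's a rough sketch (not anything from the post) of what a per-tick dispatch could look like with the raw `gl` crate bindings; the program/texture handles, the R8UI format and the 16x16 work-group size are all assumptions:

```rust
// Hypothetical helper: one Game of Life step dispatched as a compute shader.
// `compute_program`, `front_tex` and `back_tex` are assumed to exist already
// (a compiled compute program plus two R8UI textures for current/next state).
fn step_life(compute_program: u32, front_tex: u32, back_tex: u32, width: i32, height: i32) {
    unsafe {
        gl::UseProgram(compute_program);
        // Bind the current generation for reading and the next one for writing.
        gl::BindImageTexture(0, front_tex, 0, gl::FALSE, 0, gl::READ_ONLY, gl::R8UI);
        gl::BindImageTexture(1, back_tex, 0, gl::FALSE, 0, gl::WRITE_ONLY, gl::R8UI);
        // One work group per 16x16 tile of the board (assumes the sizes divide evenly).
        gl::DispatchCompute((width / 16) as u32, (height / 16) as u32, 1);
        // Make the image writes visible before the texture is sampled in the render pass.
        gl::MemoryBarrier(gl::SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }
}
```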
I guess, in a real sense, it's more an idea for a follow-up post, because the topics are a little more advanced (though honestly not really, since the post does skip the pain/suffering associated with explaining the bindings). If you haven't had a play with Rust's threading, or mpsc, there's a good excuse for you.
Just because I've never actually used a compute shader before. I've worked with D3D11 and Vulkan but never really done any "computing" on the GPU. I knew they existed, but I wanted to keep my focus more on the Rust language.
You are definitely right that a compute shader would probably have been better in this case. I might give it a try and create a follow-up on it.
Currently I do have the benefit of already having my color texture ready for drawing, but I suppose a compute shader would basically just write the values out to a texture anyway?
I had kind of guessed it was either not knowing about them, not using them, or an old GPU (compute shaders require OpenGL 4.3), but the really fun part is, as I said, threading it. Shared contexts with compute shaders are a great deal of fun.
You can use a compute shader to write colours directly (if you want), or you can write raw values into it and then, in the render pass (which has a requisite vertex and fragment shader anyway), calculate and shade as required.
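For the "raw values" approach, a minimal Game of Life kernel could look something like the sketch below (embedded here as a Rust string constant; the binding points, the r8ui format and the 16x16 local size are illustrative assumptions, not anything from the post):

```rust
// Hedged sketch: the compute shader stores only 0/1 cell state; the fragment
// shader is then free to decide how to colour it when the texture is drawn.
const LIFE_COMPUTE_SRC: &str = r#"
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0, r8ui) uniform readonly  uimage2D current_gen;
layout(binding = 1, r8ui) uniform writeonly uimage2D next_gen;

void main() {
    ivec2 pos  = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size = imageSize(current_gen);
    uint neighbours = 0u;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            // Wrap around the edges so the board is toroidal.
            ivec2 n = (pos + ivec2(dx, dy) + size) % size;
            neighbours += imageLoad(current_gen, n).r;
        }
    }
    uint alive = imageLoad(current_gen, pos).r;
    uint next  = (neighbours == 3u || (alive == 1u && neighbours == 2u)) ? 1u : 0u;
    imageStore(next_gen, pos, uvec4(next, 0u, 0u, 0u));
}
"#;
```

The render pass would then sample the state texture (as a usampler2D) and map 0/1 to whatever colours it likes, which keeps the simulation and presentation separate.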
Probably best to start non-threaded, in the sense that the threading part is probably a little more irritating than it's worth when getting into compute shaders. With that model I don't imagine any real speed increase, but if you move to threaded you should get a reduced frame time (with your own "tick" logic running at whatever speed the GPU can handle for that queue, or at whatever rate you limit it to).
Ohh, now you are really hyping this up for me. Hopefully I can get around to checking compute shaders out this week. It kinda depends on how busy I am with work.
Hyping it would be pointing out something like, oh, off the top of my head, that with two textures (either fragment or compute based) you could store the "generation age" of the entity and add a "death after X-cycles" style response to your system. Or just track age by incrementing up ABGR (or even across RGBA, or fading, using compute to decrement from white and just greyscale it)... or that you can track population data by using SSBOs instead of textures.
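As a hedged illustration of the "generation age" idea (the extra image, the r32ui format and MAX_AGE are all invented for this sketch, and the snippet is a fragment meant to be spliced into the Life kernel above):

```rust
// Hypothetical extension of the Life kernel: a second integer image carries
// each cell's age, and cells that survive too many ticks are killed off.
const AGE_RULE_SRC: &str = r#"
// ... additional declarations inside the Game of Life compute shader ...
layout(binding = 2, r32ui) uniform readonly  uimage2D age_in;
layout(binding = 3, r32ui) uniform writeonly uimage2D age_out;
const uint MAX_AGE = 64u;

// Called after the normal birth/survival rule has produced next_state.
void apply_age(ivec2 pos, inout uint next_state) {
    uint age = imageLoad(age_in, pos).r;
    if (next_state == 1u) {
        age += 1u;
        if (age > MAX_AGE) next_state = 0u; // "death after X cycles"
    } else {
        age = 0u;
    }
    imageStore(age_out, pos, uvec4(age, 0u, 0u, 0u));
}
"#;
```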
The Rust-y part of this comes from how much nicer the default threading setup is compared to C++ (mpsc is super helpful here).
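If it helps, here's a minimal, self-contained sketch of the std::sync::mpsc pattern being referred to; the timings and messages are placeholders, and the GL context sharing itself is left out entirely:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Worker thread: pretend each iteration is one simulation "tick".
    thread::spawn(move || {
        for generation in 0u64.. {
            thread::sleep(Duration::from_millis(16)); // stand-in for the GPU dispatch
            if tx.send(generation).is_err() {
                break; // the receiver was dropped, so shut down
            }
        }
    });

    // Main/render thread: drain whatever ticks have finished, at its own rate.
    for _frame in 0..10 {
        while let Ok(generation) = rx.try_recv() {
            println!("finished generation {generation}");
        }
        thread::sleep(Duration::from_millis(33)); // stand-in for the render loop
    }
}
```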
In terms of really exploring Rust, rather than OpenGL, you're looking at how well Rust can offload the data. You're still hitting the same performance penalties when fiddling with the data, but the compute cycle can copy to a texture, another compute shader can download at any time (depending on memory barriers and the card, due to queue counts), and you can do some completely insane things (like allocating 128MB of card space in a non-texture buffer and storing identity information for your blobs, something you'd previously have had maybe 128 bits of storage in a texture to do, and even that was horrid; you could have used the vertex array hack, but ew).
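A hedged sketch of what that non-texture allocation could look like with the raw `gl` crate (the binding index, usage hint and one-u32-per-cell layout are assumptions for illustration):

```rust
// Hypothetical helper: allocate a shader storage buffer (SSBO) on the card to
// hold per-cell identity data, instead of squeezing it into texture channels.
fn create_identity_ssbo(cell_count: usize) -> u32 {
    let mut ssbo = 0u32;
    let byte_len = (cell_count * std::mem::size_of::<u32>()) as isize;
    unsafe {
        gl::GenBuffers(1, &mut ssbo);
        gl::BindBuffer(gl::SHADER_STORAGE_BUFFER, ssbo);
        // Allocate the storage on the GPU; no initial data is uploaded.
        gl::BufferData(gl::SHADER_STORAGE_BUFFER, byte_len, std::ptr::null(), gl::DYNAMIC_COPY);
        // Expose it to shaders at binding point 0 (matching a `layout(binding = 0)` block).
        gl::BindBufferBase(gl::SHADER_STORAGE_BUFFER, 0, ssbo);
    }
    ssbo
}
```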
I've had very little time to play with compute shaders in the last couple of years (mostly due to the relatively high hardware requirement for them), but it's on my to-do list for Rust interoperation, solely because of how well the threading model has held up to my other experiments(/abuse).
I never did much threading in C++; I always worked on engines where most of the code didn't really call for it. But I feel like Rust will be a better playground for me to learn it in, since it's stricter.
I've made a follow-up about my journey. I've kept working on the Game of Life and converted it to use compute shaders. It's not very interesting Rust-wise, but if anyone is still interested you can find it here.