Hyping it would be pointing out something like, oh, off the top of my head, that with two textures (either fragment- or compute-based) you could store the “generation age” of each entity and add a “death after X cycles” style rule to your system. Or just track age by incrementing up through ABGR (or even across RGBA, or by fading: use compute to decrement from white and just greyscale it)… or that you can track population data by using SSBOs instead of textures.
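To make the “death after X cycles” idea concrete, here’s a minimal CPU-side Rust sketch of the per-texel rule a shader would run; the names (`MAX_AGE`, `step_age`) and the single-channel u8 age are my own stand-ins, not anything from an actual shader:

```rust
// Hypothetical sketch of the greyscale-fade aging rule: a cell is born
// "white" (255) and decrements toward 0 each cycle; 0 means dead.
const MAX_AGE: u8 = 255;

// One simulation tick for a single cell's age channel.
fn step_age(age: u8) -> u8 {
    age.saturating_sub(1) // clamps at 0 instead of wrapping
}

fn main() {
    let mut cell = MAX_AGE;
    let mut cycles = 0u32;
    while cell > 0 {
        cell = step_age(cell);
        cycles += 1;
    }
    println!("cell died after {} cycles", cycles);
}
```

The same decrement, done in a compute shader over an R8 texture, gives you the fade-to-black visualization for free.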
The Rust-y part of this comes from how much nicer the default threading setup is compared to C++ (mpsc is super helpful here).
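A quick sketch of why mpsc is so helpful for this kind of setup: fan work out to threads, collect results over one channel, no manual condition-variable plumbing like in C++. The per-chunk “work” here is a made-up stand-in for real simulation:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    for chunk_id in 0..4u32 {
        let tx = tx.clone();
        thread::spawn(move || {
            // stand-in for simulating one chunk of the grid
            let live_cells = chunk_id * 10;
            tx.send((chunk_id, live_cells)).unwrap();
        });
    }
    drop(tx); // close our sender so rx.iter() terminates
    let total: u32 = rx.iter().map(|(_, n)| n).sum();
    println!("total live cells: {}", total);
}
```

The `drop(tx)` is the one non-obvious bit: the receiving iterator only ends once every `Sender` clone is gone.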
In terms of really exploring Rust, as opposed to OpenGL, you’re looking at how well Rust can offload the data. You still hit the same performance penalties when fiddling with the data, but the compute cycle can copy to a texture, another compute shader can download it at any time (depending on memory barriers and the card, due to queue counts), and you can do some completely insane things (like allocating 128MB of card space in a non-texture buffer and storing identity information for your blobs, something you’d previously have had maybe 128 bits of storage for in a texture, and even that was horrid… though you could have used the vertex array hack, but ew).
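To put numbers on that 128MB-vs-128-bits point, here’s a sketch with a hypothetical per-blob identity record (the field names are invented, just to fill out a plausible std430-friendly layout):

```rust
// Made-up identity record: all u32 fields, so #[repr(C)] gives a tight
// 32-byte (256-bit) layout with no padding surprises.
#[repr(C)]
struct BlobIdentity {
    id: u32,
    generation: u32,
    lineage: [u32; 4], // e.g. ancestor ids
    flags: u32,
    _reserved: u32,
}

fn main() {
    let ssbo_bytes: usize = 128 * 1024 * 1024; // the 128MB buffer
    let record = std::mem::size_of::<BlobIdentity>();
    println!("record: {} bytes ({} bits)", record, record * 8);
    println!("{} blobs fit in 128MB", ssbo_bytes / record);
    // A single RGBA32 texel gives you 128 bits total: half of one record,
    // packed by hand. The SSBO route sidesteps all of that.
}
```

Even at double the size of a full RGBA32 texel per record, that buffer holds over four million blobs.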
I’ve had very little time to play with compute shaders in the last couple of years (mostly due to their relatively high hardware requirements), but it’s on my todo list for Rust interoperation, solely because of how well the threading model has held up to my other experiments(/abuse).