Best way to parallelize software rendering methods?

Hello! I'm working on a library called Aftershock, which aims to be an easy-to-use software rendering library, something akin to PICO-8 in terms of usability. It's an immediate-mode renderer and has some nice primitive-drawing functions so far. You can find it here: GitHub - Phobos001/aftershock: Software Rendered Graphics API and useful game utilities.

I'm having trouble thinking of ways to make use of multiple processor cores while drawing. There are three components to drawing right now: the Rasterizer (settings and drawing functions), the framebuffer (width, height, bytes), and Images (width, height, bytes). The latter two are dead simple, differing only in which methods can be run on them and how they're initialized.
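For anyone who doesn't want to clone the repo, the shapes involved are roughly this (a paraphrase for discussion, not the actual definitions in the crate):

```rust
// Rough sketch of the data layout; not Aftershock's real types.
struct Framebuffer {
    width: usize,
    height: usize,
    bytes: Vec<u8>, // tightly packed pixel data
}

struct Image {
    width: usize,
    height: usize,
    bytes: Vec<u8>,
}
```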

I tried using Rayon to draw multiple rows of an image at the same time, but the overhead of the work Rayon was doing always cost more than drawing the image itself.
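Something along these lines (a minimal sketch, not my actual code; it assumes the framebuffer is a flat RGBA `Vec<u8>` and uses a gradient fill as a stand-in workload):

```rust
use rayon::prelude::*;

// One Rayon task per row. `par_chunks_mut` hands each task a disjoint
// &mut slice of the buffer, so no locking is needed.
fn fill_gradient(buffer: &mut [u8], width: usize, height: usize) {
    buffer
        .par_chunks_mut(width * 4) // one chunk = one row of RGBA pixels
        .enumerate()
        .for_each(|(y, row)| {
            for x in 0..width {
                let px = &mut row[x * 4..x * 4 + 4];
                px[0] = (x * 255 / width) as u8;  // R
                px[1] = (y * 255 / height) as u8; // G
                px[2] = 0;                        // B
                px[3] = 255;                      // A
            }
        });
}
```

I suspect the per-row work is just too small, so scheduling overhead dominates; maybe batching several rows per task (a larger chunk size, or `.with_min_len(...)` on the iterator) would help?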

I also tried making the Rasterizer a static mutable, unsafe as that is, so that multiple threads could operate on the framebuffer at the same time. But I couldn't wrangle anything workable out of it.
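The closest thing I understand to a safe version of that idea is splitting the mutable buffer into disjoint slices up front and handing each slice to a scoped thread (again just a sketch; `clear` is a hypothetical stand-in for a real drawing routine):

```rust
use std::thread;

// Split the framebuffer into disjoint halves and let two threads draw
// into them concurrently -- no `static mut`, no `unsafe`.
fn draw_halves(buffer: &mut [u8], width: usize, height: usize) {
    let half = (height / 2) * width * 4;
    let (top, bottom) = buffer.split_at_mut(half);

    thread::scope(|s| {
        s.spawn(|| clear(top, 0x20));    // top half
        s.spawn(|| clear(bottom, 0x80)); // bottom half
    });
}

// Hypothetical stand-in for a real drawing routine.
fn clear(region: &mut [u8], value: u8) {
    region.fill(value);
}
```

But I don't see how to extend that to arbitrary primitives that can land anywhere in the framebuffer.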

Does anyone have any experience with this? What are some ways I can pull this off? I'd like to take advantage of Rust's multithreading paradigm somehow, since it's so powerful.

If you dive into the code, you'll only need to peek at rasterizer.rs, and maybe image.rs (or assets.rs, if I haven't pushed the latest commit yet x_x).

Thank you all so much! I hope you have a wonderful day 🙂

Sometimes tiled rendering is used: the image buffer is divided into rectangular regions, and rendering/editing within each region proceeds in parallel. This generally works well, although there's some slowdown for convolutions whose kernels cross tile boundaries.
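Roughly like this with Rayon, parallelizing over horizontal bands of tiles so each task still gets a contiguous slice (a generic sketch, not tied to Aftershock's API; the checkerboard shading is only there to make the tile decomposition visible):

```rust
use rayon::prelude::*;

const TILE: usize = 64; // tile edge in pixels; an arbitrary choice

fn render_tiled(buffer: &mut [u8], width: usize) {
    buffer
        .par_chunks_mut(width * 4 * TILE) // one chunk = one band of TILE rows
        .enumerate()
        .for_each(|(band, rows)| {
            let band_h = rows.len() / (width * 4); // the last band may be shorter
            for tile_x in (0..width).step_by(TILE) {
                let tile_w = TILE.min(width - tile_x);
                // Checkerboard shade per tile so the decomposition shows up.
                let shade: u8 = if (band + tile_x / TILE) % 2 == 0 { 0xFF } else { 0x40 };
                for y in 0..band_h {
                    let row = &mut rows[y * width * 4..(y + 1) * width * 4];
                    for x in tile_x..tile_x + tile_w {
                        row[x * 4..x * 4 + 4].copy_from_slice(&[shade, shade, shade, 0xFF]);
                    }
                }
            }
        });
}
```

Purely per-pixel work parallelizes cleanly this way; filters that read across a tile edge need overlapping reads or a second pass, which is where the slowdown I mentioned comes from.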

