How is Hardware Acceleration Implemented?


https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units


That covers it in detail, much appreciated

Just in case there’s someone knowledgeable enough to respond, I’m curious: can GPGPU be used to effectively apply Machine Learning methods to data? Or, because GPUs were designed first and foremost with graphics programming in mind, dealing mostly with vectors and matrices, is it not that practical to consider?

Any thoughts or opinions would be most welcome, as I’m fairly new to the topic.

And given that I happened to find this video series while researching the web after @jlgerber’s answer, I think it’s only fair I share it here; maybe someone else will find it useful as well:

Just a heads up, your question is very general and off-topic!

GPGPU has been used for ML for more than a decade now and is considered one of the cornerstones of deep learning (DL), alongside massive datasets. Nowadays there’s a big race among major companies and startups to come up with efficient ASIC hardware accelerators designed for ML/DL use cases; Google’s TPU is one example.
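
To make the connection concrete, here’s a minimal CUDA sketch (a toy example of my own, not anyone’s production kernel) of a dense matrix multiply, which is the core operation behind most neural-network layers. The matrix size, thread-block shape, and the `matmul` kernel name are all illustrative choices:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Naive dense matrix multiply C = A * B, the operation at the heart
// of most neural-network layers. Each thread computes one element of C.
__global__ void matmul(const float* A, const float* B, float* C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

int main() {
    const int n = 512;                    // illustrative size
    size_t bytes = n * n * sizeof(float);

    float* hA = (float*)malloc(bytes);
    float* hB = (float*)malloc(bytes);
    float* hC = (float*)malloc(bytes);
    for (int i = 0; i < n * n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per output element; thousands run in parallel,
    // which is exactly why GPUs suit the matrix-heavy math of ML.
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    matmul<<<grid, block>>>(dA, dB, dC, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected %f)\n", hC[0], 2.0f * n);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

In practice you’d reach for a tuned library like cuBLAS or a framework like PyTorch/TensorFlow rather than a hand-rolled kernel, but the parallelism pattern is the same: one thread per output element, thousands running at once. That vector/matrix focus the question mentions is exactly why GPUs turned out to be such a good fit for ML rather than an obstacle.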