GPUs are really cool devices.
GPU interface libraries have come a long way, and general-purpose GPUs have become extremely common in data science and machine learning specifically because of a GPU's ability to churn through huge amounts of math in parallel, like an algorithm that separates signal from noisy input data.
When I first became interested in machine learning and forecasting, the very first algorithm I taught myself was the artificial neural network. Artificial neural networks are amazingly simple tools to describe, but quite challenging to implement in practice.
An artificial neural network defines a structure that takes your project's data as input, passes it through a series of internal layers of weighted connections, and spits out your selected outputs. Initially the output values will be close to meaningless, since the initial weights are randomly generated. The magic happens when you pipe the outputs to a learning algorithm that scores how well a set of weights did for that interval, which lets you select better weights and bring the outputs closer to what you want.
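To make that concrete, here's a minimal sketch of a forward pass in Python with NumPy. The layer sizes and the tanh activation are my own illustrative choices, not anything specific to this project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shape: 3 inputs, 4 hidden neurons, 2 outputs.
# The weights start out random, so the outputs start out meaningless.
weights_hidden = rng.standard_normal((3, 4))
weights_output = rng.standard_normal((4, 2))

def forward(inputs):
    # Each layer multiplies by its weights and applies a non-linearity.
    hidden = np.tanh(inputs @ weights_hidden)
    return np.tanh(hidden @ weights_output)

outputs = forward(np.array([0.5, -1.0, 2.0]))
print(outputs.shape)  # (2,)
```

Training then amounts to nudging those weight matrices until the outputs score well.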
A scoring function simply measures how far the network's outputs landed from the targets you wanted.
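As a sketch, one common choice (my pick here, not something this post fixes) is mean squared error:

```python
import numpy as np

def score(predicted, target):
    # Mean squared error: average of the squared differences.
    # Lower is better; a perfect match scores 0.
    return float(np.mean((np.asarray(predicted) - np.asarray(target)) ** 2))

score([0.9, 0.1], [1.0, 0.0])  # small differences give a score near 0
```

A learning algorithm compares scores across candidate weight sets and keeps the ones that score lowest.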
The main problem with neural networks and their associated learning algorithms is that they require an enormous number of calculations for even the most rudimentary models, and far more for complex natural systems. GPUs are perfect devices for running the math of a machine learning network, as network algorithms are inherently parallelizable: individual neurons do not need information about neighbouring sets of weights to compute their outputs and store state.
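That independence is easy to check in plain NumPy: computing each output neuron on its own gives exactly the same answer as one batched matrix product, which is a stand-in for what a GPU does by assigning each neuron to its own thread:

```python
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.standard_normal(4)
weights = rng.standard_normal((4, 3))  # 3 output neurons, one weight column each

# Each neuron depends only on the inputs and its own column of weights,
# so these three dot products could run in any order, or all at once.
sequential = [float(inputs @ weights[:, j]) for j in range(3)]

# One batched product computes all neurons "simultaneously".
batched = inputs @ weights

assert np.allclose(sequential, batched)
```

No neuron ever reads another neuron's result within a layer, which is exactly the property that lets a GPU spread the work across thousands of threads.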
An example of my GPU-based artificial neural network algorithm can be seen here. It utilizes the CUDA runtime library and follows a command queue pattern, using an attached orders.json file that tells the program the structure of the artificial neural network.
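The post doesn't show the schema of orders.json, so the fragment below is purely a guess at what a command-queue description of a network might look like; every field name here is invented for illustration:

```json
{
  "orders": [
    { "command": "add_layer", "neurons": 8, "activation": "tanh" },
    { "command": "add_layer", "neurons": 2, "activation": "tanh" },
    { "command": "train", "iterations": 1000 }
  ]
}
```

The appeal of a command queue like this is that the network's shape lives in data rather than code, so you can reconfigure the model without recompiling the CUDA program.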