It is widely known that TensorFlow, which Keras uses as its underlying engine, supports local GPU acceleration on NVIDIA graphics cards via CUDA.
But what can users of other graphics cards, such as AMD or Intel HD Graphics, do in this situation?
Introducing PlaidML, a Python library and tensor compiler that uses the local hardware to speed up tensor computations on the machine.
To use it, Python 2 or 3 must be installed, as well as OpenCL 1.2 or greater.
In a conda virtual environment, the installation of PlaidML goes through pip:
pip install plaidml-keras plaidbench
Then, to configure the GPU/CPU driver to use, run the setup command:
plaidml-setup
Since I have an AMD graphics card installed in addition to the default Intel HD Graphics, both devices show up on the list.
After enabling experimental device support, choose your preferred device. In my case, I'll choose the device associated with the AMD card.
The next step is to set PlaidML as the Keras backend in the notebook/script used to train/test our model.
Make sure the backend is configured before importing Keras.
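In code, this is done through the KERAS_BACKEND environment variable, which is the standard way PlaidML hooks into Keras. A minimal sketch (the import is left commented so the snippet runs even without plaidml-keras installed):

```python
import os

# Select PlaidML as the Keras backend; this must happen
# before the first `import keras` in the process.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# import keras  # from here on, Keras runs on the PlaidML backend
```

If Keras is imported anywhere before this line runs, it silently falls back to its default backend, so put this at the very top of the script or notebook.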
We’ll test the device by comparing the training times of a model trained for 2 epochs on the famous MNIST dataset.
Without PlaidML GPU Acceleration, using the default CPU: 6 minutes and 50 seconds
With PlaidML GPU acceleration: 2 minutes and 1 second
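The benchmark might look something like the following sketch. The exact architecture is an assumption (a small dense classifier, not necessarily the model used above), random stand-in data replaces the real MNIST download so the snippet is self-contained, and the PlaidML backend line is left commented so it also runs on a stock Keras install:

```python
import time

import numpy as np

# Uncomment before importing keras to train on the PlaidML backend:
# import os; os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

from keras.models import Sequential
from keras.layers import Dense, Flatten

# Random stand-in data with MNIST's shape: 28x28 images, 10 classes.
x_train = np.random.rand(256, 28, 28).astype("float32")
y_train = np.random.randint(0, 10, size=256)

# A small dense classifier (hypothetical; any Keras model works here).
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Time a 2-epoch training run, mirroring the comparison above.
start = time.time()
history = model.fit(x_train, y_train, epochs=2, batch_size=64, verbose=0)
print(f"Training took {time.time() - start:.1f}s")
```

Running the same script once with the backend line commented (default CPU backend) and once with it active gives the two timings being compared.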
Note: while PlaidML speeds up tensor calculations inside Keras models, it will not speed up independent tensor calculations done with NumPy.