
Learning AI the Hard Way: Turning your kid's gaming rig into a deep learning workstation


(originally posted on LinkedIn)

There are a lot of easy ways to get started coding in AI and ML. Most of them are in the cloud, so you can start coding right away. Best of all, there's support waiting for you when things go wrong. Some of these cloud-based, subscription, ready-to-go products include:

  • Google Colab
  • Amazon SageMaker Studio
  • Azure Notebooks

If you want a no-hassle way to get started writing code, that's the way to go.

However, picture this… It is Wednesday evening and you are trying to keep a consistent cadence to your goofy, “I publish some drivel about AI on LinkedIn every Thursday no matter what,” nonsense. Just then you notice your kid’s gaming rig loudly humming along in the corner. You think, “Anyone should easily be able to turn their kid’s gaming rig into a deep learning workstation without breaking anything, right?”

My plan was to disconnect the internal drive, hook up a fresh, blank one, install Linux, get the GPU working, and then train a hello-world sort of neural network. When I'm done, I just plug the original drive back in and it's back to playing Mon Bazou or whatever. How hard can it be? I built a dedicated workstation for myself using a similar GPU last year. It was kind of brutal (for reasons I'll get to later) but I did it. Plus, that was a year ago; things have to be easier now, right? Shouldn't take more than an hour….

Yeah, right. Getting a fresh Ubuntu install is a breeze, but it took me about six re-installs to get it to properly recognize the GPU, mostly because of versions and bugs. Same goes for installing CUDA (NVIDIA makes the graphics card; CUDA is their platform that lets software, like TensorFlow, use the video card for general-purpose computing). Then there's an NVIDIA library called cuDNN you need for deep learning, which has to be compatible with CUDA, and THEN there's TensorFlow or PyTorch or Keras, which is what you actually code against, and everything has to play well together.
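Once everything is installed, the quickest sanity check I know of is a couple of lines of TensorFlow (assuming you went the TensorFlow route rather than PyTorch). If the driver, CUDA, cuDNN, and TensorFlow versions all line up, this prints True and a list with at least one GPU in it:

import tensorflow as tf

# True if this TensorFlow build was compiled against CUDA
print(tf.test.is_built_with_cuda())
# lists the GPUs TensorFlow can see -- empty means something in the chain is broken
print(tf.config.list_physical_devices("GPU"))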

I mean, it is possible and you can do it (I did manage to get it all working… maybe I'll use it for next week's garbage), but is it worth it? No, not worth it. At one point I had to manually edit a file called /usr/lib/python3/dist-packages/UbuntuDrivers/detect.py because of a bug with… of all things… detecting versions of existing packages. Awesome.
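For what it's worth, the hello-world network itself is the easy part. Once the stack finally cooperates, a minimal Keras sketch along these lines (assuming TensorFlow 2.x and its built-in MNIST digits dataset) is all it takes:

import tensorflow as tf

# load MNIST (28x28 grayscale digit images) and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# a tiny fully-connected network -- nothing fancy, just a sanity check
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Run nvidia-smi in another terminal while it trains if you want proof the GPU is actually doing the work.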

If you do want to go down the route of making your own ML workstation, Docker makes it much easier. You still have to get the drivers working correctly for your graphics card, which is a bit of a pain, but that's it. There are pre-built Docker images out there that have the full ML stack (CUDA, TensorFlow, etc.) installed and working. You just add a --gpus flag to your docker run command and it "just works." Something like:

sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
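And if you want more than the bare CUDA image, the TensorFlow project publishes GPU-enabled images on Docker Hub. Something like this (treat the tag as a sketch; image tags drift over time) drops you straight into a working stack:

sudo docker run --rm --gpus all tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"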

That’s how I wound up getting things set up last year because it was way easier. I guess it still is.

But easiest of all is Google Colab, and I assume the same goes for the similar offerings from AWS and Azure. Until you wind up doing something that costs a ton of money in the cloud, there's really no reason to roll your own, even if you have a perfectly good GPU just sitting there, chilling in the basement.

OK, I gotta go plug in the original hard drive and see if it still plays games before my kid gets home from camp. Until next week.