Using GPUs to Scale and Speed-up Deep Learning

Training a complex deep learning model on a very large dataset can take hours, days, or occasionally weeks. So, what is the solution? Accelerated hardware. You can use accelerators such as Google's Tensor Processing Unit (TPU) or an Nvidia GPU to speed up your convolutional neural network computation time on the cloud. These chips are specifically designed to support the training of neural networks, as well as the use of trained networks (inference), and they have been shown to reduce training time significantly.

But your data might be sensitive, and you may not feel comfortable uploading it to a public cloud, preferring instead to analyze it on premises. In this case, you need an in-house system with GPU support. One solution is IBM's Power Systems with Nvidia GPUs and PowerAI. The PowerAI platform supports popular machine learning libraries and dependencies, including TensorFlow, Caffe, Torch, and Theano.

In this course, you'll learn what GPU-based accelerated hardware is and how it can benefit your deep learning scaling needs. You'll also deploy deep learning networks on GPU-accelerated hardware for several problems, including the classification of images and videos.
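As a minimal sketch of what "GPU-accelerated" means in practice (assuming TensorFlow 2.x, one of the libraries named above): you can check whether a GPU is visible to the framework and explicitly place a computation on it, falling back to the CPU when none is available.

```python
# Sketch: detect an available GPU and run a matrix multiplication on it.
# Falls back to CPU if no GPU is present, so the same code runs anywhere.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"

with tf.device(device):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)  # executed on the chosen device

print(f"Ran matmul on {device}; result shape: {c.shape}")
```

Most framework code (Keras model training included) needs no changes to benefit from a GPU; TensorFlow places operations on the accelerator automatically when one is detected.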

Created by: IBM

Level: Intermediate

