
Data Parallelism: How to Train Deep Learning Models on Multiple GPUs

Guaranteed to Run
Price
$500.00
Duration
1 Day
Delivery Methods
Virtual Instructor-Led, Private Group
Delivery
Virtual
EST
Course Description

This workshop teaches you techniques for data-parallel deep learning training on multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows to perform neural network training, you’ll learn how to decrease model training time by distributing data to multiple GPUs, while retaining the accuracy of training on a single GPU.

Course Objectives
  • Understand how data-parallel deep learning training is performed using multiple GPUs
  • Achieve maximum training throughput to make the best use of multiple GPUs
  • Distribute training across multiple GPUs using PyTorch Distributed Data Parallel (DDP)
  • Understand and apply algorithmic considerations specific to multi-GPU training performance and accuracy
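The PyTorch DDP workflow taught in this course follows a standard pattern: initialize a process group, wrap the model in `DistributedDataParallel` so gradients are all-reduced across ranks during the backward pass, then train as usual. The sketch below is a minimal, hedged illustration of that pattern, not course material; it runs as a single process on CPU (world size 1, `gloo` backend) so the same code shape applies when launched across multiple GPUs with `torchrun`.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def one_training_step() -> float:
    # Single-process stand-in for a multi-GPU launch: rank 0 of world size 1.
    # Under torchrun, MASTER_ADDR/PORT, RANK, and WORLD_SIZE are set for you.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(10, 1)
    # DDP synchronizes (all-reduces) gradients across ranks in backward().
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # Each rank would see its own shard of the batch (via DistributedSampler).
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(ddp_model(x), y)
    loss.backward()   # gradients are averaged across ranks here
    optimizer.step()

    dist.destroy_process_group()
    return loss.item()
```

On a real multi-GPU node, each process would pin its model to one GPU (`device_ids=[local_rank]`) and the data loader would use a `DistributedSampler` so ranks see disjoint shards.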
Who Should Attend?

Experienced Python Developers

Course Prerequisites

Experience with deep learning training using Python

Course Content
Module 1: Introduction
Module 2: Stochastic Gradient Descent and Batch Size
Module 3: Training on Multiple GPUs with PyTorch DDP
Module 4: Maintaining Accuracy at Scale
Module 5: Workshop Assessment
Module 6: Final Review
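The key idea behind Modules 2 and 4 is that data parallelism preserves the single-GPU gradient: if each GPU computes the gradient of the mean loss over its own shard of a batch, averaging those per-GPU gradients reproduces the gradient over the full batch (the effective batch size simply grows with the number of GPUs, which is why learning-rate adjustments are then needed to maintain accuracy). A toy pure-Python check of that equivalence, using an illustrative one-parameter model `y = w * x` with mean-squared-error loss:

```python
def grad_mse(w, xs, ys):
    # Gradient of the mean squared error of the model y = w * x
    # with respect to w, averaged over the mini-batch.
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

# A small batch of toy data and a current parameter value.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
w = 0.5

# Single-GPU view: gradient over the whole batch.
full_grad = grad_mse(w, xs, ys)

# Data-parallel view: two "GPUs" each hold half the batch,
# then an all-reduce averages their local gradients.
shards = [(xs[:2], ys[:2]), (xs[2:], ys[2:])]
per_gpu = [grad_mse(w, sx, sy) for sx, sy in shards]
averaged_grad = sum(per_gpu) / len(per_gpu)

assert abs(averaged_grad - full_grad) < 1e-9
```

The equality holds exactly (up to floating-point rounding) whenever the shards are equal-sized; with unequal shards a weighted average is required, which is one reason distributed samplers pad or drop remainders.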