Join your friends at AAICE for this exciting follow-up workshop: NVIDIA's Fundamentals of Deep Learning for Multi-GPUs!

In this workshop, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.

Learning Objectives

At the conclusion of the workshop, you’ll have an understanding of:

  • Stochastic gradient descent (SGD), a crucial tool in parallelized training
  • Batch size and its effect on training time and accuracy
  • Transforming a single-GPU implementation to a Horovod multi-GPU implementation
  • Techniques for maintaining high accuracy when training across multiple GPUs
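To give a flavor of the last two objectives, here is a minimal sketch (plain Python, no Horovod required; the function name is hypothetical) of the widely used "linear scaling rule": when training is spread across N GPUs, the effective batch size grows by N, and the learning rate is commonly scaled up by the same factor to keep accuracy comparable to the single-GPU run.

```python
# Illustrative sketch only -- not the workshop's actual code.
# Shows how learning rate and effective batch size are commonly
# adjusted when moving a training recipe from one GPU to many.

def scale_for_multi_gpu(base_lr: float, per_gpu_batch: int, num_gpus: int):
    """Return (scaled_lr, effective_batch) under the linear scaling rule.

    With num_gpus workers each processing per_gpu_batch samples, one
    synchronized step consumes num_gpus * per_gpu_batch samples, so the
    learning rate is typically multiplied by num_gpus to keep per-sample
    progress comparable to the single-GPU baseline.
    """
    return base_lr * num_gpus, per_gpu_batch * num_gpus

# Example: a single-GPU recipe of lr=0.01, batch=64, moved to 4 GPUs.
lr, batch = scale_for_multi_gpu(0.01, 64, 4)
print(lr, batch)  # learning rate and effective batch both scale 4x
```

In Horovod this same idea appears as multiplying the optimizer's learning rate by the number of workers when wrapping it for distributed training; the workshop covers the full conversion hands-on.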

Additional workshop datasheet:

https://www.nvidia.com/content/dam/en-zz/Solutions/deep-learning/deep-learning-education/fundamentals-of-deep-learning-for-multi-gpus-1246587-r6-web.pdf

Prerequisites: Experience training models with gradient descent

Technologies: TensorFlow, Keras, Horovod

Assessment Type: Code-based

Certificate: Upon successful completion of the assessment, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.

Address

VCOM – Auburn Campus

910 S Donahue Dr, Room #127

Auburn, AL 36832

Date/Time

Wednesday, March 1st 

8:00am – 5:00pm CT