Join your friends at AAICE for this exciting follow-up workshop: NVIDIA's Fundamentals of Deep Learning for Multi-GPUs!
In this workshop, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.
At the conclusion of the workshop, you’ll have an understanding of:
- Stochastic gradient descent (SGD), a crucial tool in parallelized training
- Batch size and its effect on training time and accuracy
- Transforming a single-GPU implementation to a Horovod multi-GPU implementation
- Techniques for maintaining high accuracy when training across multiple GPUs
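One technique from the list above can be sketched briefly: when training moves from one GPU to many (for example via Horovod), each worker processes its own batch, so the effective batch size grows with the number of GPUs, and the learning rate is often scaled up linearly to compensate. The snippet below is a minimal illustration of that arithmetic, not official workshop code; the function name and parameter values are made up for the example.

```python
# Minimal sketch (hypothetical helper, not workshop code) of the linear
# learning-rate scaling rule commonly used in multi-GPU data parallelism:
# with N workers each using the same per-GPU batch, the effective batch
# grows N-fold, so the base learning rate is often multiplied by N.

def scaled_hyperparams(base_lr: float, per_gpu_batch: int, num_gpus: int):
    """Return (effective_batch_size, scaled_learning_rate) for N GPUs."""
    effective_batch = per_gpu_batch * num_gpus
    scaled_lr = base_lr * num_gpus  # linear scaling rule
    return effective_batch, scaled_lr

# Example: a single-GPU run with batch 64 and lr 0.01, scaled to 8 GPUs.
batch, lr = scaled_hyperparams(0.01, 64, 8)
print(batch, lr)
```

In Horovod, this pattern typically appears as multiplying the optimizer's learning rate by `hvd.size()`; the workshop also covers refinements (such as warmup) for keeping accuracy high as the effective batch grows.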
Workshop datasheet:
Prerequisites: Experience training models with gradient descent
Technologies: TensorFlow, Keras, Horovod
Assessment Type: Code-based
Certificate: Upon successful completion of the assessment, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.