How to Tune Your Models

Topics

The following topics are covered in this module:

  • Gradient Descent
  • Transfer Learning
  • Underfitting and Overfitting
  • Fine-Tuning

Objectives

By the end of this module, students will be able to:

  1. Describe the purpose and process of gradient descent.
  2. Discuss the role of the loss function in model training.
  3. Describe optimizers.
  4. Recognize signs of underfitting and overfitting.
  5. Experiment with hyperparameter tuning.

Watch

Video: How to Tune Your Models


Hands-on Exercise

Through this exercise, you'll gain hands-on experience applying deep learning to another image recognition task. As discussed above, the loss function, optimizer, and other training settings are hyperparameters that can be adjusted to try to train a better model. This exercise gives you practice with hyperparameter optimization.
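As a minimal sketch of where these hyperparameters appear in a TensorFlow/Keras workflow like the one in the notebook, consider the snippet below. The architecture, image size, and learning rate are illustrative assumptions, not the notebook's actual settings:

    # Minimal Keras sketch: the optimizer, learning rate, and loss function
    # are hyperparameter choices, set here when the model is compiled.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 3)),             # illustrative image size
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
    ])

    # Swapping Adam for SGD, or changing the learning rate, changes how
    # training converges; this is the kind of experiment the exercise asks for.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )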

Work on notebook 03_bees_vs_wasps.ipynb.

Conclusion

In this module, we explored essential concepts and techniques in machine learning optimization:

  1. Basics of Gradient Descent: Introduced the algorithm as a method for minimizing loss functions in machine learning models (see the sketch after this list).
  2. Loss Functions: Discussed common types, such as Mean Squared Error and Cross-Entropy, highlighting how they quantify model error during training.
  3. Optimizers: Covered different optimizers like Stochastic Gradient Descent (SGD) and Adam, focusing on their application in refining the learning process.
  4. Transfer Learning & Fine-Tuning: Touched on these important, advanced deep learning techniques.
  5. Deep Learning Implementation: Explored hyperparameter optimization to train better models.
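To make the first two items concrete, here is a toy, self-contained sketch of gradient descent minimizing a Mean Squared Error loss for a one-parameter linear model. The data, learning rate, and step count are illustrative assumptions:

    # Toy gradient descent: fit y = w * x by repeatedly stepping w against
    # the gradient of the mean squared error (MSE) loss.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 2.0 * x                  # data generated with a true weight of 2.0
    w = 0.0                      # initial guess
    learning_rate = 0.05         # a key hyperparameter

    for step in range(100):
        error = w * x - y
        loss = np.mean(error ** 2)          # MSE loss
        gradient = 2 * np.mean(error * x)   # derivative of the loss w.r.t. w
        w -= learning_rate * gradient       # the gradient descent update

    print(f"learned w = {w:.3f}")  # approaches the true value, 2.0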

This module provided a foundational understanding of how gradient descent drives the learning process in machine learning and deep learning models.