As you might imagine, there are many techniques for implementing transfer learning. We’ll cover these types in more depth in later modules; for now, let’s define the main ones:
- Fine-Tuning: Adapting the pre-trained model to the new task by unfreezing some or all of its layers and retraining them on the new dataset, typically with a small learning rate to prevent large changes in parameter values (see the first sketch after this list).
- Parameter-Efficient Fine-Tuning (PEFT): A family of fine-tuning techniques that adapt a pre-trained model by changing only a subset of its parameters, or by introducing a small number of new layers and training only those. Far more methods fall into this category than we can cover in one series; in these modules we’ll use Low-Rank Adaptation (LoRA) as a representative example (see the LoRA sketch after this list).
- Feature Extraction: In this technique, the pre-trained model’s layers are used as a fixed feature extractor. All layers of the base model are frozen, and only task-specific layers are trained on the new dataset. For example, in Python with TensorFlow, you can load a pre-trained model such as ResNet, freeze its layers by setting trainable=False, add a dense layer on top with the number of target classes, and compile the model for training (see the final sketch after this list). This method is efficient when the target domain closely aligns with the source domain, such as using ImageNet-trained models for animal classification tasks.
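To make fine-tuning concrete, here is a rough sketch in TensorFlow/Keras. The choice of ResNet50, the number of unfrozen layers, the class count, and the dataset names are all placeholder assumptions, not a prescribed recipe:

```python
import tensorflow as tf

# Load a ResNet50 base pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

# Unfreeze only the last 20 layers; earlier layers keep their pre-trained weights.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False

num_classes = 10  # hypothetical number of classes in the new task
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# A small learning rate keeps updates gentle, so the pre-trained weights
# are adjusted rather than overwritten.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # hypothetical tf.data datasets
```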
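The LoRA idea itself is simple enough to sketch by hand. Below is a minimal, illustrative Keras layer (not any particular library’s API): the wrapped Dense layer stays frozen, and only two small matrices A and B, whose product forms a low-rank update, are trained. The class name, rank, and alpha values are assumptions for illustration.

```python
import tensorflow as tf

class LoRADense(tf.keras.layers.Layer):
    """Illustrative adapter: a frozen Dense layer plus a trainable low-rank update."""

    def __init__(self, base_layer, rank=4, alpha=8, **kwargs):
        super().__init__(**kwargs)
        self.base_layer = base_layer
        self.base_layer.trainable = False  # pre-trained weights stay frozen
        self.rank = rank
        self.scale = alpha / rank

    def build(self, input_shape):
        in_dim = int(input_shape[-1])
        out_dim = self.base_layer.units
        # B starts at zero, so the adapted layer initially matches the frozen one.
        self.lora_a = self.add_weight(
            name="lora_a", shape=(in_dim, self.rank),
            initializer=tf.keras.initializers.RandomNormal(stddev=0.02),
            trainable=True)
        self.lora_b = self.add_weight(
            name="lora_b", shape=(self.rank, out_dim),
            initializer="zeros", trainable=True)

    def call(self, inputs):
        # Frozen output plus the scaled low-rank correction (inputs @ A @ B).
        return self.base_layer(inputs) + self.scale * (inputs @ self.lora_a @ self.lora_b)

# Usage: wrap an existing Dense layer; only lora_a and lora_b receive gradient updates.
adapted = LoRADense(tf.keras.layers.Dense(128), rank=4)
```

Because only the small A and B matrices are updated, the number of trainable parameters is a tiny fraction of the original layer’s, which is exactly what makes PEFT methods attractive for large models.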
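Finally, here is a minimal sketch of the feature-extraction recipe described above, again in TensorFlow/Keras; the class count and dataset name are placeholders:

```python
import tensorflow as tf

# Use ResNet50 (pre-trained on ImageNet) as a fixed feature extractor.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze every layer of the base model

num_classes = 5  # hypothetical: e.g. five animal species to classify
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # only this layer is trained
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=3)  # train_ds is a hypothetical tf.data.Dataset of images and labels
```

The only substantive differences from the fine-tuning sketch are that every base layer stays frozen and no especially small learning rate is needed, since the pre-trained weights never change.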