I have been playing around with neural networks for quite a while now, and while reading about transfer learning I recently came across the terms "freezing" and "unfreezing" layers before training a neural network. I am struggling to understand how they are used.
- When is one supposed to use freezing/unfreezing?
- Which layers should be frozen/unfrozen? For instance, when I import a pre-trained model and train it on my data, is my entire network except the output layer frozen? (My current understanding is sketched in the code after this list.)
- How do I determine whether I need to unfreeze any layers?
- If so, how do I determine which layers to unfreeze and train to improve model performance?
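
For reference, here is a minimal sketch of what I currently understand freezing/unfreezing to mean, using PyTorch and a torchvision ResNet-18 (the choice of backbone, the `layer4` name, and the class count are just placeholders for illustration):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone (ResNet-18 chosen just as an example)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# "Freezing": turn off gradients so these weights are not updated
for param in model.parameters():
    param.requires_grad = False

# Replace the output layer with one matching my task; a freshly
# created layer has requires_grad=True, i.e. it is trainable
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 = placeholder class count

# Only pass the trainable parameters to the optimizer
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# "Unfreezing": later, re-enable gradients for some deeper block,
# e.g. the last residual stage (named `layer4` in torchvision's ResNet)
for param in model.layer4.parameters():
    param.requires_grad = True
```

Is this the right mental model, and if so, how do I decide when and where to apply it?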