
How to find a good learning rate

Finding a good learning rate is very important when training models: if it is too high, training will diverge, and if it is too low, training will be unnecessarily slow. We can use a heuristic called the learning rate finder to find a good value. It works by starting training with a very low learning rate and increasing it exponentially until the loss diverges. By plotting the learning rates against the losses, we can then read off a good value. We’ll set up a Learner as usual:

using FastAI
# load the Imagenette dataset for an image classification task
data = Datasets.loadtaskdata(Datasets.datasetpath("imagenette2-160"), ImageClassificationTask)
# define the learning method: classify 160x160 images into the dataset's classes
method = ImageClassification(Datasets.getclassesclassification("imagenette2-160"), (160, 160))
learner = methodlearner(method, data, Models.xresnet18(), ToGPU(), Metrics(accuracy))

Then we use LRFinderPhase, which handles the exponential learning rate schedule and terminates training early once the loss diverges. It also resets the model to its initial state when it is done.

phase = FastAI.LRFinderPhase(steps = 100, β = 0.02)
fit!(learner, phase)
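Under the hood, the finder is conceptually simple. Below is a minimal sketch of the exponential schedule and divergence check; the names `lrfinder_schedule` and `diverged` are hypothetical, not part of the FastAI.jl API, and the factor-of-4 stopping criterion is borrowed from the Python fastai convention:

```julia
# Interpolate learning rates exponentially between a minimum and a
# maximum over `steps` iterations.
function lrfinder_schedule(lrmin, lrmax, steps)
    # lr_i = lrmin * (lrmax / lrmin)^(i / (steps - 1))
    [lrmin * (lrmax / lrmin)^(i / (steps - 1)) for i in 0:steps-1]
end

# Divergence criterion: stop once the (smoothed) loss exceeds a
# multiple of the best loss seen so far.
diverged(loss, best; factor = 4) = loss > factor * best

lrs = lrfinder_schedule(1e-7, 10.0, 100)
# lrs[1] ≈ 1.0e-7, lrs[end] ≈ 10.0; each step multiplies the rate by a constant
```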

To visualize the results, you will need to import a plotting backend.

using CairoMakie
FastAI.plotlrfind(phase)
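Instead of reading the value off the plot, you can also compute a suggestion from the recorded values directly. This is a hypothetical helper, not part of the FastAI.jl API; it assumes you have the recorded learning rates and smoothed losses as vectors:

```julia
# Hypothetical helper: take the learning rate at the minimum of the
# recorded loss curve and divide it by 10, i.e. one order of magnitude
# before the loss blows up.
suggest_lr(lrs, losses) = lrs[argmin(losses)] / 10

# Toy values for illustration; in practice use the rates and smoothed
# losses recorded by the finder phase.
lrs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0]
losses = [2.3, 2.2, 1.9, 1.2, 1.5, 8.0]  # loss diverges past 1e-2
suggest_lr(lrs, losses)  # ≈ 1e-3
```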

A good rule of thumb is to look at where the loss starts to diverge and divide the learning rate at that point by 10. In this case, that gives us a learning rate of about 0.1. Let’s use it to train our model:

fitonecycle!(learner, 5, 0.1)