Mixed Precision Training
Traditionally, neural networks were trained using FP32 for weights and activations; however, the computational cost of training a neural network has rapidly increased over the years with the success of deep learning and the growing size of neural networks. This means much more time must be spent training a huge neural network, while we would like to run many trials before a product launch. To address this problem, companies (e.g., NVIDIA) introduced accelerators for speeding up computation. For example, NVIDIA Volta has Tensor Cores to speed up computation.
However, it uses FP16 weights, activations, and gradients, and the dynamic range of FP16 is very limited compared to that of FP32, so gradient values sometimes (or often) overflow and/or underflow, which degrades the performance of a neural network or makes training collapse.
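The limited range of FP16 is easy to see with NumPy (a minimal illustration; the thresholds are properties of the IEEE half-precision format, not of any particular framework):

```python
import numpy as np

# FP16 has a maximum finite value of 65504 and a smallest positive
# subnormal of about 6e-8; values outside this range overflow/underflow.
overflowed = np.float16(70000.0)    # larger than 65504 -> inf
underflowed = np.float16(1e-8)      # smaller than ~6e-8 -> 0.0

print(overflowed)     # inf
print(underflowed)    # 0.0

# Scaling a tiny gradient before the FP16 cast keeps it representable.
scale = 1024.0
rescued = np.float16(1e-8 * scale)  # now within FP16 subnormal range
print(rescued != 0)   # True
```

This underflow-then-rescue behavior is exactly what loss scaling (introduced below) exploits.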
Mixed precision training is one of the algorithms that circumvents this problem while maintaining the same results that we could obtain with FP32 networks. It is well described in the Training with Mixed Precision User Guide and the Mixed Precision Training paper.
This tutorial explains how to do mixed precision training in NNabla step by step. Basically, mixed precision training is composed of three parts:
- Use an accelerator for computation (here we assume Tensor Cores)
- Use loss scaling to prevent underflow
- Use dynamic loss scaling to prevent overflow/underflow
In NNabla, the corresponding steps are as follows.
1. Use Tensor Cores
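In NNabla, FP16 computation (and hence Tensor Cores on Volta or later GPUs) is enabled through the extension context. A typical setup looks like the following sketch, assuming the CUDA/cuDNN extension package is installed:

```python
import nnabla as nn
from nnabla.ext_utils import get_extension_context

# type_config="half" makes functions compute in FP16 where supported,
# which lets cuDNN dispatch to Tensor Cores on capable GPUs.
ctx = get_extension_context("cudnn", device_id="0", type_config="half")
nn.set_default_context(ctx)
```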
2. Use loss scaling to prevent underflow
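The idea of loss scaling can be illustrated with NumPy (a conceptual sketch, not the NNabla API): multiply the loss by a constant scale S before the backward pass so that small gradients survive the FP16 cast, then divide the gradients by S in FP32 before the weight update.

```python
import numpy as np

loss_scale = 1024.0
true_grad = 1e-8                 # would underflow to 0.0 in FP16

# Without scaling: the gradient vanishes in the FP16 backward pass.
grad_fp16 = np.float16(true_grad)
print(grad_fp16)                 # 0.0

# With scaling: the loss (and thus every gradient) is multiplied by S,
# so the scaled gradient is representable in FP16 ...
scaled_grad_fp16 = np.float16(true_grad * loss_scale)
# ... and is divided by S in FP32 before the solver update.
restored = np.float32(scaled_grad_fp16) / loss_scale
print(restored > 0)              # True
```

The restored FP32 gradient is close to the true value, whereas without scaling it would have been lost entirely.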
3. Use dynamic loss scaling to prevent overflow/underflow
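The logic of dynamic loss scaling can be sketched in plain Python (names here are illustrative, not the NNabla API): start from an initial scale, shrink it and skip the update when an overflow (Inf/NaN gradient) is detected, and cautiously grow it after N consecutive clean steps.

```python
class DynamicLossScale:
    """Illustrative controller for the dynamic loss-scale value."""

    def __init__(self, init_scale=8.0, scaling_factor=2.0, interval=2000):
        self.scale = init_scale
        self.scaling_factor = scaling_factor
        self.interval = interval   # clean steps required before growing
        self.counter = 0

    def step(self, grad_overflowed):
        """Return True if the solver update should be applied."""
        if grad_overflowed:
            # Overflow: shrink the scale and skip this update.
            self.scale /= self.scaling_factor
            self.counter = 0
            return False
        # Clean step: after `interval` of them, try a larger scale.
        self.counter += 1
        if self.counter >= self.interval:
            self.scale *= self.scaling_factor
            self.counter = 0
        return True
```

On each iteration one would backpropagate with the current `scale`, check the gradients for Inf/NaN, and call `step()` to decide whether to apply the weight update.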
Note that the procedures of the 2nd step (loss scaling to prevent underflow) and the 3rd step (dynamic loss scaling to prevent overflow/underflow) are currently experimental. We are also working on speeding up mixed precision training, so the API may change in the future, especially for the 3rd step.
In the previous step-by-step example, the 3rd step is lengthy in a training loop, so we can write a wrapper class like the following.
Then, call the update method in a training loop:
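A sketch of such a wrapper is shown below. It assumes NNabla-like solver methods (`zero_grad()`, `check_inf_or_nan_grad()`, `scale_grad()`, `update()`) and a `loss` variable whose `backward()` accepts the scale as the initial gradient; the class name and defaults are illustrative, not the exact NNabla API.

```python
class DynamicLossScalingUpdater:
    """Illustrative wrapper bundling dynamic loss scaling into one update."""

    def __init__(self, solver, loss, scale=8.0, scaling_factor=2.0, N=2000):
        self.solver = solver
        self.loss = loss
        self.scale = scale
        self.scaling_factor = scaling_factor
        self.N = N
        self.counter = 0

    def update(self):
        # 1. Clear previous gradients.
        self.solver.zero_grad()
        # 2. Backward pass with the loss scaled by the current factor.
        self.loss.backward(self.scale)
        # 3. If any gradient overflowed, shrink the scale and skip the update.
        if self.solver.check_inf_or_nan_grad():
            self.scale /= self.scaling_factor
            self.counter = 0
            return
        # 4. Unscale the gradients in FP32, then update the weights.
        self.solver.scale_grad(1.0 / self.scale)
        self.solver.update()
        # 5. Grow the scale after N consecutive clean updates.
        self.counter += 1
        if self.counter >= self.N:
            self.scale *= self.scaling_factor
            self.counter = 0
```

In the training loop, one then simply feeds the next batch, runs the forward pass, and calls `updater.update()` on each iteration.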
In mixed precision training, the following are the premises:
- Solver contains FP16 weights and an FP32 copy of the weights. Solvers in NNabla hold FP32 weights and weight gradients, and cast them to FP16 weights in the forward pass and to FP16 weight gradients in the backward pass if one sets type_config="half".
- Reductions should be left in FP32, for example, the statistics (mean and variance) computed by batch normalization, and functions such as Mean, Sum, SoftMax, and SoftMaxCrossEntropy (see the Training with Mixed Precision User Guide). In NNabla, these functions automatically fall back to FP32.
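Why reductions must stay in FP32 can be seen by naively accumulating a long sum in FP16 (a NumPy illustration; the same effect would hit means and variances computed over many elements):

```python
import numpy as np

values = [0.01] * 10000   # the true sum is 100.0

# FP16 accumulator: once the running sum is large, adding 0.01 rounds
# away to nothing, so the sum stalls far below the true value.
acc16 = np.float16(0.0)
for v in values:
    acc16 = np.float16(acc16 + v)

# FP32 accumulator: accurate at this magnitude.
acc32 = np.float32(0.0)
for v in values:
    acc32 = np.float32(acc32 + v)

print(acc16)   # stalls around 32
print(acc32)   # close to 100
```

Keeping only the reduction in FP32 (while the bulk of the computation stays in FP16) avoids this accumulation error.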