Mixed Precision Training
- class nnabla.experimental.mixed_precision_training.DynamicLossScalingUpdater(solver, loss, data_feeder=<function DynamicLossScalingUpdater.<lambda>>, scale=8.0, scaling_factor=2.0, N=2000, clear_buffer=True, accum_grad=1, weight_decay=None, comm=None, grads=[])
Dynamic Loss Scaling Updater for the mixed precision training.
Parameters:
- solver (nnabla.solvers.Solver) – Solver object. E.g., Momentum or Adam.
- loss (nnabla.Variable) – Loss variable from which the forward and the backward is called.
- data_feeder (callable object, function, or lambda) – Data feeder.
- scale (float) – Loss scale constant. This changes dynamically during training.
- scaling_factor (float) – Scaling factor for the dynamic loss scaling.
- N (int) – Interval, the number of iterations in training for increasing the loss scale by scaling_factor.
- clear_buffer (bool) – Clears the no longer referenced variables during backpropagation to save memory.
- accum_grad (int) – Number of gradient accumulation steps; the solver update is called after accum_grad forward/backward passes.
- weight_decay (float) – Weight decay constant. Default is None, i.e., no weight decay is applied.
- comm (nnabla.communicators.Communicator) – Communicator for distributed training. Default is None.
- grads (list of nnabla.NdArray) – The list of gradients to be exchanged when doing distributed training. Default is the empty list.
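Below is a minimal usage sketch. The toy two-layer regression network, the random-data feeder, and the iteration count are illustrative placeholders, not part of the library; the feeder accepts *args so it works regardless of how the updater invokes it.

```python
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S
from nnabla.experimental.mixed_precision_training import DynamicLossScalingUpdater

# Toy regression network (placeholder; substitute your own model).
x = nn.Variable((32, 16))
t = nn.Variable((32, 1))
h = F.relu(PF.affine(x, 64, name="fc1"))
y = PF.affine(h, 1, name="fc2")
loss = F.mean(F.squared_error(y, t))

solver = S.Momentum(lr=0.01)
solver.set_parameters(nn.get_parameters())

def feed(*args):
    # Placeholder data feeder: copy a minibatch into the input variables.
    # *args absorbs whatever arguments the updater passes to the feeder.
    x.d = np.random.randn(*x.shape)
    t.d = np.random.randn(*t.shape)

updater = DynamicLossScalingUpdater(solver, loss, data_feeder=feed)

for itr in range(1000):
    # One monolithic step: zero_grad, data feeding, forward, backward with
    # the current loss scale, and the solver update.
    updater.update()
```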
- update()
Monolithic update method.
This method calls the following methods with the dynamic loss scaling:
1. solver.zero_grad
2. data_feeder
3. loss.forward
4. loss.backward
5. comm.all_reduce (if it is specified)
6. solver.update
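For reference, the dynamic loss scaling logic behind this sequence can be summarized as follows. This is a simplified sketch, not the library's implementation: it assumes the overflow check uses Solver.check_inf_or_nan_grad and gradient unscaling uses Solver.scale_grad, and the `state` dict standing in for the updater's scale and counter attributes is hypothetical.

```python
def scaled_update_step(solver, loss, data_feeder, state,
                       scaling_factor=2.0, N=2000):
    # Illustrative sketch of one dynamic-loss-scaling iteration (not the
    # library code). `state` holds the current loss scale and the count of
    # consecutive overflow-free iterations.
    scale = state["scale"]

    solver.zero_grad()                            # 1. clear gradients
    data_feeder()                                 # 2. feed a minibatch
    loss.forward(clear_no_need_grad=True)         # 3. forward pass
    loss.backward(grad=scale, clear_buffer=True)  # 4. backward with scaled loss

    # (With a communicator, the gradient all_reduce would happen here.)

    # Overflow: skip this update and shrink the loss scale.
    if solver.check_inf_or_nan_grad():
        state["scale"] = scale / scaling_factor
        state["counter"] = 0
        return

    solver.scale_grad(1.0 / scale)                # unscale before the update
    solver.update()                               # 6. optimizer step

    # After N overflow-free iterations, grow the loss scale.
    state["counter"] += 1
    if state["counter"] >= N:
        state["scale"] = scale * scaling_factor
        state["counter"] = 0
```

In the actual class this control flow is wrapped inside update(), so a training loop only needs to call updater.update() as in the usage sketch above.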