Experimental

DynamicLossScalingUpdater

class nnabla.experimental.mixed_precision_training.DynamicLossScalingUpdater(solver, loss, data_feeder=<function <lambda>>, scale=8.0, scaling_factor=2.0, N=2000, clear_buffer=True, accum_grad=1, weight_decay=None, comm=None, grads=[])

    Dynamic Loss Scaling Updater for mixed-precision training.
Parameters:
    solver (nnabla.solvers.Solver) – Solver object, e.g., Momentum or Adam.
    loss (nnabla.Variable) – Loss variable on which forward and backward are called.
    data_feeder (callable object, function, or lambda) – Data feeder.
    scale (float) – Initial loss scale constant; it changes dynamically during training.
    scaling_factor (float) – Scaling factor for the dynamic loss scaling.
    N (int) – Interval: the number of training iterations after which the loss scale is increased by scaling_factor.
    clear_buffer (bool) – Clear no-longer-referenced variables during backpropagation to save memory.
    accum_grad (int) – Number of gradient accumulations. The solver's update method is called after accum_grad forward/backward passes.
    weight_decay (float) – Decay constant. Default is None, i.e., weight decay is not applied.
    comm (nnabla.communicators.Communicator) – Communicator for distributed training. Default is None.
    grads (list of nnabla._nd_array.NdArray) – The list of gradients to be exchanged in distributed training. Default is the empty list.

solver
    nnabla.solvers.Solver – Solver object, e.g., Momentum or Adam.

loss
    nnabla.Variable – Loss variable on which forward and backward are called.

N
    int – Interval: the number of training iterations after which the loss scale is increased by scaling_factor.

clear_buffer
    bool – Clear no-longer-referenced variables during backpropagation to save memory.

accum_grad
    int – Number of gradient accumulations. The solver's update method is called after accum_grad forward/backward passes.

comm
    nnabla.communicators.Communicator – Communicator for distributed training.

grads
    list of nnabla._nd_array.NdArray – The list of gradients to be exchanged in distributed training.
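The dynamic loss-scaling policy the parameters above describe (shrink the scale when scaled gradients overflow and skip the update; grow the scale by scaling_factor after N consecutive clean iterations) can be sketched in plain Python. The helper name and signature here are illustrative, not part of the nnabla API:

```python
def dynamic_loss_scale_step(grads_finite, scale, scaling_factor=2.0,
                            N=2000, counter=0):
    """One step of a dynamic loss-scaling policy (plain-Python sketch).

    grads_finite: True if all scaled gradients are finite this iteration.
    counter: number of consecutive finite iterations seen so far.
    Returns (apply_update, new_scale, new_counter).
    """
    if not grads_finite:
        # Overflow detected: skip this update, shrink the scale, reset counter.
        return False, scale / scaling_factor, 0
    counter += 1
    if counter >= N:
        # N consecutive finite iterations: grow the scale again.
        return True, scale * scaling_factor, 0
    return True, scale, counter
```

With scale=8.0 and scaling_factor=2.0, an overflow step drops the scale to 4.0 and skips the solver update, while the N-th clean step in a row raises it to 16.0.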
SimpleGraph

class nnabla.experimental.viewers.SimpleGraph(format='png', verbose=False, fname_color_map=None, vname_color_map=None)

    Simple graph viewer using GraphViz.

Example:

    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF
    import nnabla.experimental.viewers as V

    # Model definition
    def network(image, test=False):
        h = image
        h /= 255.0
        h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), name="conv")
        h = PF.batch_normalization(h, name="bn", batch_stat=not test)
        h = F.relu(h)
        pred = PF.affine(h, 10, name='fc')
        return pred

    # Model
    image = nn.Variable([4, 3, 32, 32])
    pred = network(image, test=False)

    # Graph viewer
    graph = V.SimpleGraph(verbose=False)
    graph.view(pred)
    graph.save(pred, "sample_graph")

save(vleaf, fpath, cleanup=False)

    Save the graph to the given file path.

    Parameters:
        vleaf (nnabla.Variable) – End variable. All variables and functions that can be traversed from this variable are shown in the result.
        fpath (str) – The file path used to save.
        cleanup (bool) – Clean up the source file after rendering. Default is False.

view(vleaf, fpath=None, cleanup=True)

    View the graph.

    Parameters:
        vleaf (nnabla.Variable) – End variable. All variables and functions that can be traversed from this variable are shown in the result.
        fpath (str) – The file path used to save.
        cleanup (bool) – Clean up the source file after rendering. Default is True.
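Both methods render everything "traversable from the end variable", i.e. they walk the computation graph backward from vleaf through each parent node. A minimal sketch of such a backward traversal over a toy node structure (the Node class here is illustrative, not nnabla's internal representation):

```python
class Node:
    """Toy graph node standing in for a variable/function node (illustrative only)."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)

def collect_ancestors(leaf):
    """Collect every node reachable backward from `leaf`, depth-first,
    visiting each node once; this is the set a graph viewer would draw."""
    seen, stack = [], [leaf]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.append(node)
            stack.extend(node.parents)
    return seen
```

For a chain input -> conv -> relu -> affine, calling collect_ancestors on the affine output yields all four nodes, which is why the whole model appears in the rendered image even though only the prediction variable is passed in.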
