Trainers

class nnabla.experimental.trainers.Trainer(updater=None, evaluator=None, model_save_path=None, max_epoch=1, iter_per_epoch=None, callback_on_start=<function Trainer.<lambda>>, callback_on_finish=<function Trainer.<lambda>>, update_callback_on_start=<function Trainer.<lambda>>, update_callback_on_finish=<function Trainer.<lambda>>)[source]

Trainer API

The Trainer class is the basic class for training a neural network. You can compose this class into your own trainer class and delegate to its train method, as in the sketch below.
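The following minimal sketch shows this composition pattern; MyTrainer and its constructor arguments are hypothetical names chosen for illustration, not part of the API.

from nnabla.experimental.trainers import Trainer

class MyTrainer(object):
    # Hypothetical wrapper class that composes the base Trainer.
    def __init__(self, updater, evaluator, model_save_path,
                 max_epoch, iter_per_epoch):
        self.trainer = Trainer(updater, evaluator, model_save_path,
                               max_epoch=max_epoch,
                               iter_per_epoch=iter_per_epoch)

    def train(self):
        # Delegate training to the composed Trainer.
        self.trainer.train()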

Parameters:
  • updater (Updater or list of Updater) – Updater object.

  • evaluator (Evaluator or list of Evaluator) – Evaluator object.

  • model_save_path (str) – Model save path.

  • max_epoch (int) – Max epoch to train.

  • iter_per_epoch (int, optional) – Iterations per epoch.

  • callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before the trainer.train.

  • callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after the trainer.train.

  • update_callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before the updater.update.

  • update_callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after the updater.update.

The following is a complete example of using this base trainer.

Example

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

from nnabla.monitor import Monitor, MonitorSeries, MonitorTimeElapsed

import numpy as np

from nnabla.experimental.trainers import Trainer, Updater, Evaluator

# Batch, channel, height, width
b, c, h, w = 32, 1, 128, 128

# Train Input
tinput = nn.Variable([b, c, h, w])
tlabel = nn.Variable([b, c, h, w])

# Train Model and Loss
tpred = <training model>.apply(persistent=True)
tloss = F.mean(F.softmax_cross_entropy(tpred, tlabel))

# Test Input
vinput = nn.Variable([b, c, h, w])
vlabel = nn.Variable([b, c, h, w])

# Test Model and Error
vpred = <evaluation model>.apply(persistent=True)
vloss = F.mean(F.softmax_cross_entropy(vpred, vlabel))
verror = F.mean(F.top_n_error(vpred.get_unlinked_variable(), vlabel))

# Solver
solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# DataIterator
tdata = <training_data_iterator>
vdata = <validation_data_iterator>

# Monitor
monitor = Monitor(<monitor_path>)
monitor_loss = MonitorSeries("Training loss", monitor, interval=10)
monitor_err = MonitorSeries("Training error", monitor, interval=10)
monitor_time = MonitorTimeElapsed("Training time", monitor, interval=100)
monitor_verr = MonitorSeries("Valid error", monitor, interval=10)

# Updater
def tdata_feeder():
    tinput.d, tlabel.d = tdata.next()
def update_callback_on_finish(i):
    monitor_loss.add(i, tloss.d)
    monitor_time.add(i)
updater = Updater(solver, tloss,
                  data_feeder=tdata_feeder,
                  update_callback_on_finish=update_callback_on_finish)

# Evaluator
def vdata_feeder():
    vinput.d, vlabel.d = vdata.next()
def eval_callback_on_finish(i, ve):
    monitor_verr.add(i, ve)
evaluator = Evaluator(verror,
                      data_feeder=vdata_feeder,
                      val_iter=vdata.size // b,
                      callback_on_finish=eval_callback_on_finish)

# Trainer
trainer = Trainer(updater, evaluator, <model_save_path>,
                  max_epoch=<max_epoch>, iter_per_epoch=tdata.size // b)
trainer.train()
class nnabla.experimental.trainers.NaiveClassificationTrainer(solver, tinput=None, tlabel=None, tpred=None, tdata=None, vinput=None, vlabel=None, vpred=None, vdata=None, monitor_path=None, model_save_path=None, max_epoch=1, iter_per_epoch=None, val_iter=None)[source]

Naive Classification Trainer

Parameters:
  • solver (Solver) – Solver object.

  • tinput (Variable) – Input variable for input feature in training.

  • tlabel (Variable) – Label variable for the label in training.

  • tpred (Variable) – Root variable for prediction in the training graph.

  • tdata (nnabla.utils.data_iterator.DataIterator) – DataIterator for training.

  • vinput (Variable) – Input variable for input feature in evaluation.

  • vlabel (Variable) – Label variable for label in evaluation.

  • vpred (Variable) – Root variable for prediction in the evaluation graph.

  • vdata (DataIterator) – DataIterator for evaluation.

  • monitor_path (str) – Monitor path.

  • model_save_path (str) – Model save path.

  • max_epoch (int) – Max epoch to train.

  • iter_per_epoch (int, optional) – Iterations per epoch. If not set, this value is determined by tdata.size // tdata.batch_size.

  • val_iter (int, optional) – Iterations for evaluation. If not set, this value is determined by vdata.size // vdata.batch_size.

The following is a complete example of using this trainer.

Example

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

import numpy as np

from nnabla.experimental.trainers import NaiveClassificationTrainer

# Batch, channel, height, width
b, c, h, w = 32, 1, 128, 128

# Train Input
tinput = nn.Variable([b, c, h, w])
tlabel = nn.Variable([b, c, h, w])

# Train Model
tpred = <training model>

# Test Input
vinput = nn.Variable([b, c, h, w])
vlabel = nn.Variable([b, c, h, w])

# Test Model
vpred = <evaluation model>

# Solver
solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# DataIterator
tdata = <training_data_iterator>
vdata = <validation_data_iterator>

# Trainer
trainer = NaiveClassificationTrainer(solver,
                                     tinput, tlabel, tpred, tdata,
                                     vinput, vlabel, vpred, vdata,
                                     <monitor_path>,
                                     <model_save_path>,
                                     max_epoch=<max_epoch>)
trainer.train()
class nnabla.experimental.trainers.NaiveRegressionTrainer(solver, tinput=None, tlabel=None, tpred=None, tdata=None, vinput=None, vlabel=None, vpred=None, vdata=None, monitor_path=None, model_save_path=None, max_epoch=1, iter_per_epoch=None, val_iter=None)[source]

Naive Regression Trainer

Parameters:
  • solver (Solver) – Solver object.

  • tinput (Variable) – Input variable for input feature in training.

  • tlabel (Variable) – Label variable for the label in training.

  • tpred (Variable) – Root variable for prediction in the training graph.

  • tdata (nnabla.utils.data_iterator.DataIterator) – DataIterator for training.

  • vinput (Variable) – Input variable for input feature in evaluation.

  • vlabel (Variable) – Label variable for label in evaluation.

  • vpred (Variable) – Root variable for prediction in the evaluation graph.

  • vdata (DataIterator) – DataIterator for evaluation.

  • monitor_path (str) – Monitor path.

  • model_save_path (str) – Model save path.

  • max_epoch (int) – Max epoch to train.

  • iter_per_epoch (int, optional) – Iterations per epoch. If not set, this value is determined by tdata.size // tdata.batch_size.

  • val_iter (int, optional) – Iterations for evaluation. If not set, this value is determined by vdata.size // vdata.batch_size.

Example

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

import numpy as np

from nnabla.experimental.trainers import NaiveRegressionTrainer

# Batch, channel, height, width
b, c, h, w = 32, 1, 128, 128

# Train Input
tinput = nn.Variable([b, c, h, w])
tlabel = nn.Variable([b, c, h, w])

# Train Model
tpred = <training model>

# Test Input
vinput = nn.Variable([b, c, h, w])
vlabel = nn.Variable([b, c, h, w])

# Test Model
vpred = <evaluation model>

# Solver
solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# DataIterator
tdata = <training_data_iterator>
vdata = <validation_data_iterator>

# Trainer
trainer = NaiveRegressionTrainer(solver,
                                 tinput, tlabel, tpred, tdata,
                                 vinput, vlabel, vpred, vdata,
                                 <monitor_path>,
                                 <model_save_path>,
                                 max_epoch=<max_epoch>)
trainer.train()
class nnabla.experimental.trainers.Updater(solver=None, loss=None, data_feeder=<function Updater.<lambda>>, forward_callback_on_start=<function Updater.<lambda>>, forward_callback_on_finish=<function Updater.<lambda>>, backward_callback_on_start=<function Updater.<lambda>>, backward_callback_on_finish=<function Updater.<lambda>>, comm_callback_on_start=<function Updater.<lambda>>, comm_callback_on_finish=<function Updater.<lambda>>, update_callback_on_start=<function Updater.<lambda>>, update_callback_on_finish=<function Updater.<lambda>>, clear_buffer=True, accum_grad=1, comm=None, grads=[])[source]
Parameters:
  • solver (nnabla.solvers.Solver) – Solver object. E.g., Momentum or Adam.

  • loss (nnabla.Variable) – Loss variable on which forward and backward are called.

  • data_feeder (callable object, function, or lambda) – Data feeder.

  • forward_callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before forward function.

  • forward_callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after forward function.

  • backward_callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before backward function.

  • backward_callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after backward function.

  • comm_callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before comm.all_reduce.

  • comm_callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after comm.all_reduce.

  • update_callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before update function.

  • update_callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after update function.

  • clear_buffer (bool, optional) – If True, clears no-longer-referenced variables during backpropagation to save memory.

  • accum_grad (int, optional) – Number of gradient accumulation steps. The solver's update method is called after forward and backward have run accum_grad times. Default is 1.

  • comm (nnabla.communicators.Communicator, optional) – Communicator used for distributed training. Default is None.

  • grads (list of nnabla.NdArray, optional) – List of gradients to be exchanged in distributed training. Default is the empty list.

Example

from nnabla.experimental.trainers import Updater

solver = <Solver>
loss = <Loss Variable of Network>

def tdata_feeder():
    ...
def update_callback_on_finish(i):
    ...
updater = Updater(solver, loss,
                  data_feeder=tdata_feeder,
                  update_callback_on_finish=update_callback_on_finish)

# Training iteration
for itr in range(<max_iter>):
    updater.update(itr)
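
To accumulate gradients over several mini-batches before each parameter update, pass accum_grad; the following sketch reuses the names from the example above.

# Call solver.update only after 4 forward/backward passes, giving an
# effective batch size of 4x the data iterator's batch size.
updater = Updater(solver, loss,
                  data_feeder=tdata_feeder,
                  accum_grad=4)
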
update(i)[source]

Monolithic update method.

This method calls the following methods with dynamic loss scaling.

  1. solver.zero_grad

  2. feed data

  3. loss.forward

  4. loss.backward

  5. comm.all_reduce (if it is specified)

  6. solver.update
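
Conceptually, one update step is roughly equivalent to the following sketch, assuming accum_grad=1 and ignoring callbacks and loss scaling; it is an illustration, not the actual implementation.

def update_step(solver, loss, data_feeder, comm=None, grads=[],
                clear_buffer=True):
    solver.zero_grad()                             # 1. zero gradients
    data_feeder()                                  # 2. feed data
    loss.forward(clear_no_need_grad=clear_buffer)  # 3. forward
    loss.backward(clear_buffer=clear_buffer)       # 4. backward
    if comm is not None:                           # 5. exchange gradients
        comm.all_reduce(grads, division=False, inplace=True)
    solver.update()                                # 6. update parameters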

class nnabla.experimental.trainers.Evaluator(vroot=None, data_feeder=None, val_iter=None, callback_on_start=<function Evaluator.<lambda>>, callback_on_finish=<function Evaluator.<lambda>>, clear_buffer=True, comm=None)[source]
Parameters:
  • vroot (Variable) – Root variable of the evaluation graph.

  • data_feeder (callable object, function, or lambda) – Data feeder.

  • val_iter (int, optional) – Iterations for evaluation.

  • callback_on_start (callable object, function, lambda, or list of these, optional) – Callback called before the evaluator.evaluate.

  • callback_on_finish (callable object, function, lambda, or list of these, optional) – Callback called after the evaluator.evaluate.

  • clear_buffer (bool, optional) – If True, clears no-longer-referenced variables during forward propagation to save memory.

  • comm (nnabla.communicators.Communicator, optional) – Communicator used for distributed training. Default is None.

Example

from nnabla.experimental.trainers import Evaluator

verror = <Error Variable of Network>

# Evaluator
def vdata_feeder():
    ...
def eval_callback_on_finish(i, ve):
    ...
evaluator = Evaluator(verror,
                      data_feeder=vdata_feeder,
                      val_iter=<val_iter>,
                      callback_on_finish=eval_callback_on_finish)
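
The evaluator is normally driven by a Trainer; to run it standalone, call its evaluate method with the current iteration index. This usage is inferred from the callback signatures above, which receive the iteration index.

# Run one standalone evaluation pass at iteration i (hedged: passing the
# iteration index to evaluate is inferred from the callback signatures).
i = 0
evaluator.evaluate(i)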