NNP save and load utilities

IMPORTANT NOTICE: To handle an NNP file from Neural Network Console, if the network you want to save or load contains the LoopControl functions RepeatStart, RepeatEnd, RecurrentInput, RecurrentOutput, or Delay, you must first expand the network with the File Format Converter.

nnabla.utils.save.save(filename, contents, include_params=False, variable_batch_size=True, extension='.nnp', parameters=None, include_solver_state=False, solver_state_format='.h5')[source]

Save network definition, inference/training execution configurations etc.

Parameters:
  • filename (str or file object) –

    Filename used to store the information. The file extension determines the saving format:

    .nnp: (Recommended) Creates a zip archive with nntxt (network definition etc.) and h5 (parameters).
    .nntxt: Protobuf in text format.
    .protobuf: Protobuf in binary format (unsafe in terms of backward compatibility).

  • contents (dict) – Information to store.

  • include_params (bool) – Include the parameters in a single file. This is ignored when the extension of filename is .nnp.

  • variable_batch_size (bool) – If True, the first dimension of all variables is treated as the batch size and left as a placeholder (specifically, -1). The placeholder dimension is filled in during or after loading.

  • extension – If filename is a file-like object, extension specifies the format and must be one of “.nntxt”, “.prototxt”, “.protobuf”, “.h5”, or “.nnp”.

  • include_solver_state (bool) – Whether to save the solver state.

  • solver_state_format (str) – Format in which the solver state is saved, either ‘.h5’ or ‘.protobuf’ (default ‘.h5’). Note that this option only takes effect when the network definition is saved in .nnp format and include_solver_state is True.
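The effect of variable_batch_size can be illustrated with a small sketch in plain Python (no nnabla required; to_placeholder and fill_batch_size are illustrative helpers, not part of the nnabla API):

```python
def to_placeholder(shape):
    """Replace the leading (batch) dimension with the -1 placeholder,
    mirroring what save() does when variable_batch_size=True."""
    return [-1] + list(shape[1:])

def fill_batch_size(shape, batch_size):
    """Fill the -1 placeholder with a concrete batch size, mirroring
    what happens during/after loading."""
    return [batch_size if d == -1 else d for d in shape]

saved = to_placeholder([16, 100])    # [-1, 100]
loaded = fill_batch_size(saved, 32)  # [32, 100]
```

This is why a network saved with a batch size of 16 can later be loaded and run with any other batch size.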

Example

The following example creates an MLP with two inputs and two outputs, and saves the network structure and the initialized parameters.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
from nnabla.utils.save import save

batch_size = 16
x0 = nn.Variable([batch_size, 100])
x1 = nn.Variable([batch_size, 100])
h1_0 = PF.affine(x0, 100, name='affine1_0')
h1_1 = PF.affine(x1, 100, name='affine1_1')
h1 = F.tanh(h1_0 + h1_1)
h2 = F.tanh(PF.affine(h1, 50, name='affine2'))
y0 = PF.affine(h2, 10, name='affiney_0')
y1 = PF.affine(h2, 10, name='affiney_1')

contents = {
    'networks': [
        {'name': 'net1',
         'batch_size': batch_size,
         'outputs': {'y0': y0, 'y1': y1},
         'names': {'x0': x0, 'x1': x1}}],
    'executors': [
        {'name': 'runtime',
         'network': 'net1',
         'data': ['x0', 'x1'],
         'output': ['y0', 'y1']}]}
save('net.nnp', contents)
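As noted above, a .nnp file is simply a zip archive bundling the network definition and the parameters. A minimal sketch of that layout using only the standard library (the member names network.nntxt and parameter.h5 follow the usual convention, but inspect a real file produced by save() to confirm them):

```python
import io
import zipfile

# Build a toy archive with the same two-member layout as a .nnp file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('network.nntxt', 'network { name: "net1" }')  # text protobuf
    z.writestr('parameter.h5', b'')  # parameters (HDF5 in a real file)

with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # ['network.nntxt', 'parameter.h5']
```

Because it is a plain zip archive, a .nnp file can also be inspected with any standard zip tool.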

To get a trainable model, use the following code instead.

contents = {
    'global_config': {'default_context': ctx},
    'training_config': {
        'max_epoch': args.max_epoch,
        'iter_per_epoch': args_added.iter_per_epoch,
        'save_best': True},
    'networks': [
        {'name': 'training',
         'batch_size': args.batch_size,
         'outputs': {'loss': loss_t},
         'names': {'x': x, 'y': t, 'loss': loss_t}},
        {'name': 'validation',
         'batch_size': args.batch_size,
         'outputs': {'loss': loss_v},
         'names': {'x': x, 'y': t, 'loss': loss_v}}],
    'optimizers': [
        {'name': 'optimizer',
         'solver': solver,
         'network': 'training',
         'dataset': 'mnist_training',
         'weight_decay': 0,
         'lr_decay': 1,
         'lr_decay_interval': 1,
         'update_interval': 1}],
    'datasets': [
        {'name': 'mnist_training',
         'uri': 'MNIST_TRAINING',
         'cache_dir': args.cache_dir + '/mnist_training.cache/',
         'variables': {'x': x, 'y': t},
         'shuffle': True,
         'batch_size': args.batch_size,
         'no_image_normalization': True},
        {'name': 'mnist_validation',
         'uri': 'MNIST_VALIDATION',
         'cache_dir': args.cache_dir + '/mnist_test.cache/',
         'variables': {'x': x, 'y': t},
         'shuffle': False,
         'batch_size': args.batch_size,
         'no_image_normalization': True}],
    'monitors': [
        {'name': 'training_loss',
         'network': 'validation',
         'dataset': 'mnist_training'},
        {'name': 'validation_loss',
         'network': 'validation',
         'dataset': 'mnist_validation'}],
}
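The snippet above assumes variables such as ctx, args, args_added, loss_t, loss_v, solver, x, and t are defined earlier in the training script. The top-level sections a trainable contents dict is expected to carry can be sketched as a plain-Python check (illustrative only, not an nnabla API):

```python
# Sections used by the trainable-model example above (illustrative list;
# consult the save() documentation for what is strictly required).
TRAINING_SECTIONS = ('global_config', 'training_config', 'networks',
                     'optimizers', 'datasets', 'monitors')

def missing_sections(contents):
    """Return the training-related sections a contents dict still lacks."""
    return [k for k in TRAINING_SECTIONS if k not in contents]

# An inference-only contents dict (networks + executors) lacks most of them.
inference_only = {'networks': [], 'executors': []}
print(missing_sections(inference_only))
# ['global_config', 'training_config', 'optimizers', 'datasets', 'monitors']
```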
class nnabla.utils.nnp_graph.NnpLoader(filepath, scope=None, extension='.nntxt')[source]

An NNP file loader.

Parameters:
  • filepath – file-like object or filepath.

  • extension – If filepath is a file-like object, extension specifies the format and must be one of “.nnp”, “.nntxt”, or “.prototxt”.

Example

import numpy as np
from nnabla.utils.nnp_graph import NnpLoader

# Read a .nnp file.
nnp = NnpLoader('/path/to/nnp.nnp')
# Assume a graph `graph_a` is in the nnp file.
net = nnp.get_network('graph_a', batch_size=1)
# `x` is an input of the graph.
x = net.inputs['x']
# `y` is an output of the graph.
y = net.outputs['y']
# Set random data as input and perform forward prop.
x.d = np.random.randn(*x.shape)
y.forward(clear_buffer=True)
print('output:', y.d)
get_network(name, batch_size=None, callback=None)[source]

Create a variable graph of the network specified by name.

Returns: NnpNetwork

get_network_names()[source]

Returns a list of available network names.

class nnabla.utils.nnp_graph.NnpNetwork(proto_network, batch_size, callback)[source]

A graph object read from an NNP file.

An instance of NnpNetwork is usually created by an NnpLoader instance. See the example usage described in NnpLoader.

variables

A dict of all variables in the created graph, with variable names as keys and nnabla.Variable objects as values.

Type: dict

inputs

All input variables.

Type: dict

outputs

All output variables.

Type: dict