namespace nbla::utils

namespace utils

Utils.

NNabla utilities.

Functions

NBLA_API bool load_parameters (ParameterDirectory &pd, string filename)

Load parameters from a parameter file (*.h5, *.protobuf, *.prototxt, *.pb, and so on).

NBLA_API bool save_parameters (ParameterDirectory &pd, string filename)

Save parameters to specified parameter file.

NBLA_API bool load_parameters_h5 (ParameterDirectory &pd, char *buf, int size)

Load parameters from a file buffer, specified by buffer address and length.

This buffer should contain the parameter file data loaded from a .h5 file.

NBLA_API bool load_parameters_pb (ParameterDirectory &pd, char *buf, int size)

Load parameters from a file buffer, specified by buffer address and length.

This buffer should contain the parameter file data loaded from a .protobuf file.

NBLA_API bool save_parameters_pb (ParameterDirectory &pd, char *buf, unsigned int &size)

Save parameters to a buffer, specified by buffer address and length.

NBLA_API bool save_parameters_h5 (ParameterDirectory &pd, char *buf, unsigned int &size)

Save parameters to a file buffer, specified by buffer address and length.
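As a quick illustration, the following sketch loads parameters into a ParameterDirectory and writes them back out. The calls follow the signatures above, while the include paths are assumptions.

#include <nbla/parametric_functions.hpp> // assumed location of ParameterDirectory
#include <nbla/utils/parameters.hpp>     // assumed location of these utilities

using namespace nbla;
using namespace nbla::utils;

void roundtrip_parameters() {
  ParameterDirectory params;
  // Load learned parameters from an .h5 (or .protobuf) file ...
  if (!load_parameters(params, "parameters.h5")) {
    // Handle the load failure here.
  }
  // ... and write them back out, e.g. in protobuf format.
  save_parameters(params, "parameters.protobuf");
}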

namespace nnp

NNabla format file utilities.

class Network
#include <nnp.hpp>

Network object associated with Nnp object.

The following code will get Network instance from Nnp object nnp, and set batch size.

shared_ptr<Network> network = nnp.get_network("net1");
network->set_batch_size(64);

The next block will get the references to the variable object in the computation graph by name. The computation graph is built when get_variable(name) is called first time, or first time since the batch size or network topology is changed.

nbla::CgVariablePtr x = network->get_variable("input");
nbla::CgVariablePtr y = network->get_variable("output");

You can set data to a variable by accessing its array data through the NNabla C++ interface.

nbla::Context ctx{"cpu", "CpuCachedArray", "0", "default"};  // CPU context
float *data = x->variable()->cast_data_and_get_pointer<float>(ctx);
for (int i = 0; i < x->variable()->size(); i++) {
    data[i] = ...;  // Set data
}

The forward propagation of the network can be executed at any variable by calling the forward method. The function execution will be propagated from the root (input) variables to the variable.

y->forward(true);

The output can be obtained and displayed as follows.

const float *out = y->variable()->get_data_pointer<float>(ctx);  // ctx defined above
for (int i = 0; i < y->variable()->size(); i++) {
    std::cout << out[i] << ",";
}
std::cout << std::endl;

Note

The Network instance is created by the member function Nnp::get_network(). The constructor is hidden and is not called directly by users.

Public Functions

NBLA_API string name () const

Network name.

NBLA_API void set_batch_size (int batch_size)

Set batch size.

Parameters:

batch_size[in] Overwrites the default batch size in the nnp file.

NBLA_API int batch_size () const

Get batch size.

Return values:

Batch – size. If set_batch_size has not been called previously, the batch size written in the nnp file is returned.

NBLA_API void replace_variable (const string &name, CgVariablePtr variable)

Replace an arbitrary variable in the network with a given variable.

The predecessors of the variable in the network are discarded and replaced with the predecessors of the given variable.

Parameters:
  • name[in] Name of the variable in the network to be replaced.

  • variable[in] The variable that replaces it.
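For illustration, the sketch below swaps the network's "input" variable for a variable created outside the network; the variable name and shape are assumptions.

nbla::CgVariablePtr my_input =
    std::make_shared<nbla::CgVariable>(nbla::Shape_t{64, 3, 224, 224}, false);
network->replace_variable("input", my_input);  // "input" is an assumed name
// Variables obtained afterwards via get_variable() are wired to my_input.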

NBLA_API CgVariablePtr get_variable (const string &name)

Get a variable by name.

This is usually used to set or get data inside the variable. Calling this triggers construction of the computation graph if the graph has not been created yet or is out of date.

Parameters:

name[in] Name of variable in the network.

Return values:

Variable – in a computation graph.

class Executor
#include <nnp.hpp>

Executor associated with Nnp object.

The Executor object internally stores a Network object.

Public Functions

NBLA_API string name () const

Executor name.

NBLA_API string network_name () const

Network name.

NBLA_API void set_batch_size (int batch_size)

Set batch size.

Parameters:

batch_size[in] Overwrites the default batch size of the Network.

NBLA_API int batch_size () const

Get batch size.

Return values:

Batch – size. If set_batch_size has not been called previously, the batch size written in the Network of the NNabla format file is returned.

NBLA_API vector< DataVariable > get_data_variables ()

Get data variables.

Return values:

Data – variables, where each item holds name information and the CgVariable instance in the Network. The data inside the CgVariable should be accessed via the NNabla C++ interface.

NBLA_API vector< OutputVariable > get_output_variables ()

Get output variables.

Return values:

Output – variables, where each item holds name information and the CgVariable instance in the Network. The data inside the CgVariable should be accessed via the NNabla C++ interface.

NBLA_API shared_ptr< Network > get_network ()

Get the reference (shared_ptr) of Network object held in this.

NBLA_API void execute ()

Execute the network from inputs to outputs.
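The typical flow is sketched below, where nnp is an Nnp object (see the Nnp class below). The struct members variable_name and variable used here are assumptions about DataVariable and OutputVariable.

nbla::Context ctx{"cpu", "CpuCachedArray", "0", "default"};
shared_ptr<Executor> executor = nnp.get_executor("exe1");
executor->set_batch_size(1);

// Fill every input (data) variable through the NNabla C++ interface.
for (auto &in : executor->get_data_variables()) {
  float *data = in.variable->variable()->cast_data_and_get_pointer<float>(ctx);
  // ... write the input values for in.variable_name into data here ...
}

executor->execute();  // forward propagation from inputs to outputs

// Read back every output variable.
for (auto &out : executor->get_output_variables()) {
  const float *y = out.variable->variable()->get_data_pointer<float>(ctx);
  // ... consume the values for out.variable_name here ...
}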

struct DataVariable
#include <nnp.hpp>

Data variable container.

The string fields correspond to DataVariable in the proto definition.

struct OutputVariable
#include <nnp.hpp>

Output variable container.

The string fields correspond to OutputVariable in the proto definition.

class Optimizer

Public Functions

NBLA_API string name () const

Optimizer name.

NBLA_API string network_name () const

Network name.

NBLA_API const int update_interval () const

Update interval.

NBLA_API shared_ptr< Network > get_network ()

Get the reference (shared_ptr) of Network object held in this.

NBLA_API const float update (const int iter)

Execute update operations, including forward, backward, and parameter update.

Parameters:

iter[in] Iteration number.

Return values:

loss – value.
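A hedged sketch of the resulting training loop follows; nnp is an Nnp object, and the optimizer name "opt1" and the iteration count are assumptions.

shared_ptr<Optimizer> optimizer = nnp.get_optimizer("opt1");
for (int iter = 0; iter < 1000; iter++) {
  float loss = optimizer->update(iter);  // forward, backward and update in one call
  if (iter % 100 == 0) {
    std::cout << "iter " << iter << ": loss " << loss << std::endl;
  }
}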

struct DataVariable
#include <nnp.hpp>

Data variable container.

The string fields correspond to DataVariable in the proto definition.

struct GeneratorVariable
#include <nnp.hpp>

Generator variable container.

The string fields correspond to GeneratorVariable in the proto definition.

struct LossVariable
#include <nnp.hpp>

Loss variable container.

The string fields correspond to LossVariable in the proto definition.

struct ParameterVariable
#include <nnp.hpp>

Parameter variable container.

The string fields correspond to ParameterVariable in the proto definition.

class Monitor

Public Functions

NBLA_API string name () const

Monitor name.

NBLA_API string network_name () const

Network name.

shared_ptr<Network> get_network()

Get the reference (shared_ptr) of Network object held in this.

NBLA_API const float monitor_epoch ()

Execute monitor operations, including the forward pass.

Return values:

monitored – value.
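A minimal sketch, assuming nnp is an Nnp object and that a monitor named "monitor1" exists:

shared_ptr<Monitor> monitor = nnp.get_monitor("monitor1");
float monitored = monitor->monitor_epoch();  // runs the forward pass over the monitor network
std::cout << monitor->name() << ": " << monitored << std::endl;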

struct DataVariable
#include <nnp.hpp>

Data variable container.

The string fields correspond to DataVariable in the proto definition.

struct MonitorVariable
#include <nnp.hpp>

Monitor variable container.

The string fields correspond to MonitorVariable in the proto definition.

class TrainingConfig

Public Functions

NBLA_API const long long int max_epoch () const

Max epoch.

NBLA_API const long long int iter_per_epoch () const

Iteration per epoch.

NBLA_API const bool save_best () const

Save best.
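As a hedged sketch, the training configuration can drive an outer loop around Optimizer::update(); nnp is an Nnp object and the optimizer name "opt1" is an assumption.

shared_ptr<TrainingConfig> config = nnp.get_training_config();
shared_ptr<Optimizer> optimizer = nnp.get_optimizer("opt1");
for (long long int epoch = 0; epoch < config->max_epoch(); epoch++) {
  for (long long int i = 0; i < config->iter_per_epoch(); i++) {
    optimizer->update(static_cast<int>(epoch * config->iter_per_epoch() + i));
  }
  // config->save_best() indicates whether the best-performing parameters should be kept.
}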

class Nnp
#include <nnp.hpp>

Handle of NNabla format files.

You can create an Nnp object by passing default context as below.

using namespace nbla::utils::nnp;
nbla::Context ctx{"cpu", "CpuCachedArray", "0", "default"};
Nnp nnp(ctx);

Suppose we have network.nnp which was previously created. You can add a previously dumped NNabla format file to the Nnp object. Nnp will parse the file format and internally store information such as network architectures, learned parameters, and execution settings.

nnp.add("network.nnp");

Suppose a network “net1” is in network.nnp. The following line will create a Network object from the nnp file. Network can create a computation graph defined in NNabla format files. The created computation graph can be executed in C++ code. See the Network documentation for usage.

shared_ptr<Network> network = nnp.get_network("net1");

... // Use network here.

Suppose an executor “exe1” is in network.nnp. The following line will create an Executor object from NNabla format files. The Executor can also create a computation graph of the network associated with the Executor field in NNabla format files. The Executor provides an easier interface to set inputs, execute the graph, and get outputs.

shared_ptr<Executor> executor = nnp.get_executor("exe1");

... // Use executor here.

Public Functions

NBLA_API Nnp(const nbla::Context &ctx)

Constructor which sets default context.

Parameters:

ctx[in] Default context, which overwrites the config in the nnp file.

NBLA_API bool add (const string &filename)

Add nnp|nntxt|h5 file.

NBLA_API bool add (char *buffer, unsigned int size)

Add nnp on memory.
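For example, a sketch like the following reads an .nnp file into memory and registers it (the standard <fstream>, <vector>, and <iterator> includes are assumed):

std::ifstream ifs("network.nnp", std::ios::binary);
std::vector<char> buffer((std::istreambuf_iterator<char>(ifs)),
                         std::istreambuf_iterator<char>());
nnp.add(buffer.data(), static_cast<unsigned int>(buffer.size()));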

NBLA_API vector< string > get_network_names ()

Get Network name list from added files (nnp, nntxt etc.).

Return values:

A – vector of Network instance names.
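A minimal sketch enumerating the networks found in the added files:

for (const string &name : nnp.get_network_names()) {
  std::cout << name << std::endl;
}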

NBLA_API shared_ptr< Network > get_network (const string &name)

Get Network object from added files (nnp, nntxt etc.).

Parameters:

name[in] Network name in loaded files (nnp, nntxt etc.)

Return values:

A – shared pointer of a Network instance.

NBLA_API vector< string > get_executor_names ()

Get Executor name list from added files (nnp, nntxt etc.).

Return values:

A – vector of Executor instance names.

NBLA_API shared_ptr< Executor > get_executor (const string &name)

Get Executor object from added file(s).

Parameters:

name[in] Executor name in loaded files (nnp, nntxt etc.)

Return values:

A – shared pointer of an Executor instance.

NBLA_API vector< string > get_optimizer_names ()

Get Optimizer name list from added files (nnp, nntxt etc.).

Return values:

A – vector of Optimizer instance names.

NBLA_API shared_ptr< Optimizer > get_optimizer (const string &name)

Get Optimizer object from added files (nnp, nntxt etc.).

Parameters:

name[in] Optimizer name in loaded files (nnp, nntxt etc.)

Return values:

A – shared pointer of an Optimizer instance.

NBLA_API vector< string > get_monitor_names ()

Get Monitor name list from added files (nnp, nntxt etc.).

Return values:

A – vector of Monitor instance names.

NBLA_API shared_ptr< Monitor > get_monitor (const string &name)

Get Monitor object from added files (nnp, nntxt etc.).

Parameters:

name[in] Monitor name in loaded files (nnp, nntxt etc.)

Return values:

A – shared pointer of a Monitor instance.

NBLA_API shared_ptr< TrainingConfig > get_training_config ()

Get TrainingConfig object from added files (nnp, nntxt etc.).

Return values:

A – shared pointer of a TrainingConfig instance.

NBLA_API vector< pair< string, VariablePtr > > get_parameters ()

Get parameters.

Return values:

A – vector of string and variable pointer pairs.

NBLA_API bool save_parameters (const string &filename)

Save parameters.

Parameters:

filename[in] Output binary filename (.protobuf or .h5).
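A hedged sketch that lists the loaded parameters and writes them out (the output filename is an assumption):

for (auto &p : nnp.get_parameters()) {
  std::cout << p.first << " (size " << p.second->size() << ")" << std::endl;
}
nnp.save_parameters("parameters_out.protobuf");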

namespace rnn

Functions

inline void compute_batch_sizes(const int *lengths, int lsize, int *batch_sizes)

template<typename U, bool accum = false>
inline void pack(const U *padded_sequence, const int *batch_sizes, U *packed_sequence, int T, int B, int D)

template<typename U, bool accum = false>
inline void pack_batch_first(const U *padded_sequence, const int *batch_sizes, U *packed_sequence, int T, int B, int D)

template<typename U, bool accum = false>
inline void pack_batch_first(const U *padded_sequence, const int *batch_sizes, U *packed_sequence, int T, int B, int D, int TL)

inline void compute_lengths(const int *batch_sizes, int bsize, int *lengths)

template<typename U, bool accum = false>
inline void unpack(const U *packed_sequence, const int *batch_sizes, U *padded_sequence, int T, int B, int D)

template<typename U, bool accum = false>
inline void unpack(const U *packed_sequence, const int *batch_sizes, U *padded_sequence, int T, int B, int D, int TL)

template<typename U, bool accum = false>
inline void unpack_batch_first(const U *packed_sequence, const int *batch_sizes, U *padded_sequence, int T, int B, int D)

template<typename U, bool accum = false>
inline void unpack_batch_first(const U *packed_sequence, const int *batch_sizes, U *padded_sequence, int T, int B, int D, int TL)
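Judging from the signatures, these helpers convert between padded and packed sequence layouts. A hedged sketch follows; it assumes that lengths are sorted in descending order, that batch_sizes holds one entry per time step, and that pack() reads a time-major (T, B, D) padded layout, as in the usual packed-sequence convention.

const int B = 2, T = 3, D = 1;
std::vector<int> lengths = {3, 2};  // per-sequence lengths, descending
std::vector<int> batch_sizes(T);
compute_batch_sizes(lengths.data(), static_cast<int>(lengths.size()),
                    batch_sizes.data());

// Padded input, assumed time-major (T, B, D); the second sequence is padded at t = 2.
std::vector<float> padded = {1.f, 4.f, 2.f, 5.f, 3.f, 0.f};
std::vector<float> packed(5 * D);   // sum(lengths) * D elements
pack<float>(padded.data(), batch_sizes.data(), packed.data(), T, B, D);

// Round-trip back to the padded layout.
std::vector<float> unpadded(T * B * D, 0.f);
unpack<float>(packed.data(), batch_sizes.data(), unpadded.data(), T, B, D);

// Recover the original lengths from the batch sizes.
std::vector<int> recovered(B);
compute_lengths(batch_sizes.data(), static_cast<int>(batch_sizes.size()), recovered.data());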