Wrapper module for logging.

You can use the logger as follows:

from utils.logger import logger

logger.debug('Log message(DEBUG)')
logger.info('Log message(INFO)')
logger.error('Log message(ERROR)')
logger.critical('Log message(CRITICAL)')

With the default settings, it should yield the following output:

$ python scripts/
[nnabla][ERROR]: : <module> : 5 : Log message(ERROR)
[nnabla][CRITICAL]: : <module> : 6 : Log message(CRITICAL)
$ cat /tmp/nbla.log
2017-01-19 14:41:35,132 [nnabla][DEBUG]: scripts/ : <module> : 3 : Log message(DEBUG)
2017-01-19 14:41:35,132 [nnabla][INFO]: scripts/ : <module> : 4 : Log message(INFO)
2017-01-19 14:41:35,132 [nnabla][ERROR]: scripts/ : <module> : 5 : Log message(ERROR)
2017-01-19 14:41:35,132 [nnabla][CRITICAL]: scripts/ : <module> : 6 : Log message(CRITICAL)
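The console and file outputs above suggest two handlers: one on the console at ERROR level and one writing DEBUG and above to /tmp/nbla.log. A minimal sketch of such a two-handler setup using Python's standard logging module (this only illustrates the observed behavior; it is not nnabla's actual implementation):

```python
import logging

# Illustrative setup: the console shows ERROR and above, while the
# file receives everything from DEBUG up, matching the output above.
logger = logging.getLogger('nnabla')
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.ERROR)
console.setFormatter(logging.Formatter(
    '[%(name)s][%(levelname)s]: %(filename)s : %(funcName)s : %(lineno)d : %(message)s'))
logger.addHandler(console)

filehandler = logging.FileHandler('/tmp/nbla.log')
filehandler.setLevel(logging.DEBUG)
filehandler.setFormatter(logging.Formatter(
    '%(asctime)s [%(name)s][%(levelname)s]: %(filename)s : %(funcName)s : %(lineno)d : %(message)s'))
logger.addHandler(filehandler)

logger.debug('Log message(DEBUG)')  # file only
logger.error('Log message(ERROR)')  # console and file
```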

Auto-forward mode

NNabla provides the dynamic computation graph feature, which enables automatic forward propagation during graph construction. This can be enabled using the set_auto_forward() function. Backpropagation must be executed manually on the dynamically constructed graph.

nnabla.auto_forward(*args, **kwds)[source]

Context for dynamic graph execution mode.

Parameters:auto (bool) – Whether forward computation is executed during a computation graph construction.

Returns: bool


nnabla.set_auto_forward(auto)[source]

Set the default mode for automatic forward propagation.

When it is set to True, forward propagation is invoked immediately when the computation graph is updated.

Parameters:auto (bool) – Whether forward computation is executed when the computation graph is updated.

Returns: bool


nnabla.get_auto_forward()[source]

Get the state of automatic forward execution.

When it is True, forward execution is invoked during computation graph definition.

Returns: bool

Note: This is usually called by users.


class nnabla.Context(backend=None, array_class='', device_id='0')

Context is used to specify the computation engine (cpu, cuda, cudnn etc.) on which function operator modules and optimizer modules are run. The context can be set for each function, as well as globally with the functions listed in the Context Specifier API.

Parameters:
  • backend (list of str) – ‘cpu’, ‘cuda’, ‘cudnn’ etc.
  • array_class (str) – ‘CpuArray’, ‘CpuCachedArray’, ‘CudaArray’, ‘CudaCachedArray’ etc.
  • device_id (str) – default ‘0’

Context Specifier API

nnabla.context_scope(*args, **kwds)[source]

Context as Python context.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla_ext.cuda

x = nn.Variable([2, 3, 4])
ctx = nnabla_ext.cuda.context('0')
with nn.context_scope(ctx):
    # Inside the with scope, the specified context is used.
    with nn.parameter_scope('w1'):
        l1 = F.relu(PF.affine(x, 64))
    with nn.parameter_scope('w2'):
        l2 = F.relu(PF.affine(x, 64))

nnabla.set_default_context(ctx)[source]

Set the default context.

It cannot be called inside any context_scope.

Parameters: ctx (Context) – A Context.

nnabla.get_current_context()[source]

Get the current context.

It can be set using nnabla.context_scope() or nnabla.set_default_context().

Returns: The current context.
Return type: Context