Common

Logger

Wrapper module for logging.

You can use the logger as follows:

from utils.logger import logger

logger.debug('Log message(DEBUG)')
logger.info('Log message(INFO)')
logger.error('Log message(ERROR)')
logger.critical('Log message(CRITICAL)')

With the default settings, it should yield the following output:

$ python scripts/logger_test.py
[nnabla][ERROR]: logger_test.py : <module> : 5 : Log message(ERROR)
[nnabla][CRITICAL]: logger_test.py : <module> : 6 : Log message(CRITICAL)
$ cat /tmp/nbla.log
2017-01-19 14:41:35,132 [nnabla][DEBUG]: scripts/logger_test.py : <module> : 3 : Log message(DEBUG)
2017-01-19 14:41:35,132 [nnabla][INFO]: scripts/logger_test.py : <module> : 4 : Log message(INFO)
2017-01-19 14:41:35,132 [nnabla][ERROR]: scripts/logger_test.py : <module> : 5 : Log message(ERROR)
2017-01-19 14:41:35,132 [nnabla][CRITICAL]: scripts/logger_test.py : <module> : 6 : Log message(CRITICAL)
nnabla.logger.logger
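The wrapper is a thin configuration layer over Python's standard logging module. As a rough sketch (an illustration of the two-destination setup above, not NNabla's actual source — the handler levels and format strings are inferred from the sample output), an equivalent logger could be configured like this:

```python
import logging
import os
import tempfile

# Illustrative reconstruction, not NNabla's source: one named logger with two
# handlers -- a terse console handler for ERROR and above, and a verbose file
# handler that records everything from DEBUG up.
logger = logging.getLogger('nnabla')
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.ERROR)
console.setFormatter(logging.Formatter(
    '[%(name)s][%(levelname)s]: %(filename)s : %(funcName)s '
    ': %(lineno)d : %(message)s'))
logger.addHandler(console)

# /tmp/nbla.log on Linux; tempfile keeps the sketch portable.
log_path = os.path.join(tempfile.gettempdir(), 'nbla.log')
logfile = logging.FileHandler(log_path, mode='w')
logfile.setLevel(logging.DEBUG)
logfile.setFormatter(logging.Formatter(
    '%(asctime)s [%(name)s][%(levelname)s]: %(pathname)s : %(funcName)s '
    ': %(lineno)d : %(message)s'))
logger.addHandler(logfile)

logger.debug('Log message(DEBUG)')  # file only
logger.error('Log message(ERROR)')  # console and file
```

With this setup, only the ERROR line reaches the console while the log file receives both records, matching the sample output above.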

Auto-forward mode

NNabla provides a dynamic computation graph feature, which enables forward propagation to be executed automatically while the graph is being constructed. This can be enabled with the set_auto_forward() function. Backward propagation must still be executed manually on the dynamically constructed graph.
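To make the idea concrete, here is a language-level sketch of define-by-run (a hypothetical toy in plain Python, not NNabla code): with auto-forward enabled, each node computes its output the moment it is created; with it disabled, computation is deferred until forward() is called explicitly.

```python
# Toy define-by-run sketch (hypothetical, not NNabla's implementation).
AUTO_FORWARD = True

class Node:
    def __init__(self, compute, inputs=()):
        self._compute = compute
        self.inputs = inputs
        # Eager evaluation at construction time when auto-forward is on.
        self.data = compute() if AUTO_FORWARD else None

    def forward(self):
        # Deferred evaluation: compute on demand, then cache.
        if self.data is None:
            self.data = self._compute()
        return self.data

def constant(v):
    return Node(lambda: v)

def add(a, b):
    return Node(lambda: a.forward() + b.forward(), (a, b))

x = add(constant(2), constant(3))
# With AUTO_FORWARD = True, x.data is already 5 right after construction.

AUTO_FORWARD = False
y = add(constant(1), constant(4))
# Now y.data stays None until y.forward() is called explicitly.
```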

nnabla.auto_forward(*args, **kwds)[source]

Context for dynamic graph execution mode.

Parameters: auto (bool) – Whether forward computation is executed during computation graph construction.

Returns: bool

nnabla.set_auto_forward(auto)[source]

Set the default mode for automatic forward propagation.

When it is set to True, forward propagation is invoked immediately when the computation graph is updated.

Parameters: auto (bool) – Whether forward computation is executed when the computation graph is updated.

Returns: bool

nnabla.get_auto_forward()[source]

Get the state of automatic forward execution.

When it is true, forward execution is invoked during computation graph definition.

Note

This is usually called by users.
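The three functions above suggest a common pattern: a module-level flag with a setter, a getter, and a context manager that restores the previous state on exit. The following is a minimal mimic of that pattern (hypothetical — it imitates the documented behavior, not NNabla's internals; in particular, the bool return value of set_auto_forward is assumed here to be the previous state):

```python
from contextlib import contextmanager

# Hypothetical mimic of the auto-forward flag machinery, not NNabla's source.
_auto_forward = False

def set_auto_forward(auto):
    # Assumption: the documented bool return is the previous flag value.
    global _auto_forward
    prev = _auto_forward
    _auto_forward = auto
    return prev

def get_auto_forward():
    return _auto_forward

@contextmanager
def auto_forward(auto=True):
    # Temporarily switch the flag; restore the previous state on exit.
    prev = set_auto_forward(auto)
    try:
        yield
    finally:
        set_auto_forward(prev)
```

The context manager form is what makes `with auto_forward():` safe: even if an exception is raised inside the block, the previous mode is restored.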

Context

class nnabla.Context

Context is used to specify the computation engine (cpu, cuda, cuda.cudnn, etc.) on which function operator modules and optimizer modules run. The context can be set per function, or globally with the functions listed in the Context Specifier API.

Parameters:
  • backend – str, ‘cpu’ or ‘cuda’
  • array_class – str, ‘CpuArray’, ‘CpuCachedArray’, ‘CudaArray’, ‘CudaCachedArray’
  • device_id – str, default ‘0’
  • compute_backend – str, ‘default’, ‘cudnn’

Context Specifier API

nnabla.context_scope(*args, **kwds)[source]

Use a Context as a Python context manager.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable([2, 3, 4])
ctx = nn.extensions.cuda.cudnn.context(0)
with nn.context_scope(ctx):
    # Inside the with scope, the specified context is used.
    with nn.parameter_scope('w1'):
        l1 = F.relu(PF.affine(x, 64))
    with nn.parameter_scope('w2'):
        l2 = F.relu(PF.affine(x, 64))

nnabla.set_default_context(ctx)[source]

Set the default context.

Note

It cannot be called inside any context_scope.

Parameters: ctx (Context) – A Context.

nnabla.get_current_context()[source]

Get the current context.

It can be set using nnabla.context_scope() or nnabla.set_default_context().

Returns: the current context.
Return type: Context
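Taken together, the three specifiers behave like a default value plus a scope stack: context_scope pushes and pops, set_default_context changes the fallback (and, per the note above, refuses to run inside a scope), and get_current_context reads the top of the stack or else the default. A minimal sketch of that behavior (hypothetical, using plain strings in place of real Context objects):

```python
from contextlib import contextmanager

# Hypothetical mimic of the context-specifier machinery, not NNabla's source.
_default_context = 'cpu'
_context_stack = []

def set_default_context(ctx):
    # Mirrors the documented restriction: not allowed inside a context_scope.
    global _default_context
    if _context_stack:
        raise RuntimeError('set_default_context cannot be called '
                           'inside a context_scope')
    _default_context = ctx

def get_current_context():
    # Innermost active scope wins; otherwise fall back to the default.
    return _context_stack[-1] if _context_stack else _default_context

@contextmanager
def context_scope(ctx):
    _context_stack.append(ctx)
    try:
        yield
    finally:
        _context_stack.pop()
```

Nested `with context_scope(...)` blocks therefore shadow each other in LIFO order, and leaving every scope falls back to whatever set_default_context last installed.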