Grad

nnabla.grad.grad(outputs, inputs, grad_outputs=None, persistent_outputs=[], bind_grad_output=False)

Gradient function for the outputs with respect to the inputs.
The grad function computes the sum of the gradients of the outputs with respect to the inputs:

\[g_i = \sum_{j} \frac{\partial y_j}{\partial x_i},\]

where \(y_j\) is each output, \(x_i\) is each input, and \(g_i\) is the sum over \(j\) of the gradients of \(y_j\) with respect to \(x_i\).
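For illustration, a minimal sketch (not part of the original reference) that checks this formula on a toy graph with one input and two outputs; it assumes the default CPU context and uses only public nnabla APIs:

import numpy as np
import nnabla as nn

# One input x and two outputs y0, y1 built from it.
x = nn.Variable.from_numpy_array(np.array([2.0], dtype=np.float32), need_grad=True)
y0 = x ** 2.0        # dy0/dx = 2x
y1 = 3.0 * x         # dy1/dx = 3

# g = dy0/dx + dy1/dx = 2x + 3, returned as a Variable in the expanded backward graph.
g = nn.grad([y0, y1], [x])[0]
g.forward()
print(g.d)           # [7.] for x = 2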
- Parameters

  - grad_outputs (None, scalar, numpy.ndarray, nnabla.NdArray, or list of scalar, numpy.ndarray, or nnabla.NdArray) -- Gradient outputs corresponding to outputs. This is the same as the grad argument of backward(). Default is None, so 1 is used as the in-coming gradient at the very beginning of the Variable in the backward graph (see the sketch after the Returns section).
  - persistent_outputs (list of bool) -- Persistent flags for the outputs. If not specified, all outputs are set persistent.
  - bind_grad_output (bool) -- Bind data to grad of input variable. This is useful for the case where one wants to use the backward graph for training a neural network using the first-order gradients only. Default is False.
- Returns

  List of Variable.
  If the backpropagation does not reach the input(s), the corresponding returned value(s) are None.
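A hedged sketch (not part of the original reference) of these two behaviors, the grad_outputs weighting and the None return for an input that backpropagation never reaches; it assumes the default CPU context:

import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable.from_numpy_array(np.ones((2, 3), dtype=np.float32), need_grad=True)
w = nn.Variable.from_numpy_array(np.ones((2, 3), dtype=np.float32), need_grad=True)  # never used in the graph
y = F.sum(x * 2.0)

# Scale the in-coming gradient of y by 0.5 instead of the default 1.
gx, gw = nn.grad([y], [x, w], grad_outputs=[0.5])
gx.forward()
print(gx.d)   # 1.0 everywhere (2.0 * 0.5)
print(gw)     # expected to be None, since backpropagation from y never reaches w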
Example:
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import numpy as np
from nnabla.ext_utils import get_extension_context

# Context
extension_module = "cudnn"
ctx = get_extension_context(extension_module)

# Input and label
x = nn.Variable.from_numpy_array(np.random.randn(4, 3, 32, 32))
y = nn.Variable.from_numpy_array(np.random.randint(0, 10, 4).reshape(4, 1))

# Network
h = PF.convolution(x, 8, (3, 3), (1, 1), name="conv1")
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
h = PF.convolution(h, 16, (3, 3), (1, 1), name="conv2")
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
p = PF.affine(h, 10, name="pred")
loss = F.mean(F.softmax_cross_entropy(p, y))

# Grad
outputs = [loss]
inputs = nn.get_parameters().values()
grads = nn.grad(outputs, inputs)  # gradients of the parameters

# Double backward of the outputs w.r.t. the parameters by constraining the gradient norms.
gp = sum([F.sum(g ** 2.0) ** 0.5 for g in grads])
loss += gp
loss.forward()
loss.backward()
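As a follow-up, a hedged sketch of how the combined loss above would typically drive a parameter update; the solver calls (nnabla.solvers.Sgd, set_parameters, zero_grad, update) are standard nnabla APIs, and the snippet assumes the variables defined in the example above:

import nnabla.solvers as S

# One training step with the gradient-norm penalty included in the loss.
solver = S.Sgd(lr=0.01)
solver.set_parameters(nn.get_parameters())

solver.zero_grad()   # clear previously accumulated parameter gradients
loss.forward()       # forward pass, including the grads/gp sub-graph built by nn.grad
loss.backward()      # backward pass; penalty gradients flow through the backward graph
solver.update()      # apply the SGD update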