nn.functional.normalize

lucid.nn.functional.normalize(input: Tensor, p: int = 2, dim: int = 1, eps: float = 1e-12) -> Tensor

The normalize function applies Lp normalization to the input tensor along the specified dimension, scaling each slice so that its Lp norm is 1.

Function Signature

def normalize(input: Tensor, p: int = 2, dim: int = 1, eps: float = 1e-12) -> Tensor

Parameters

  • input (Tensor): The input tensor to be normalized.

  • p (int, optional): The exponent value in the norm formulation. Default is 2 (Euclidean norm).

  • dim (int, optional): The dimension along which to normalize. Default is 1.

  • eps (float, optional): A small value to avoid division by zero. Default is 1e-12.

Returns

  • Tensor: The normalized tensor with the same shape as the input.

Mathematical Definition

For a given input tensor \(x\), the function computes the normalized tensor \(y\) as:

\[y_i = \frac{x_i}{\max(\|x\|_p, \varepsilon)}\]

where:

  • \(\|x\|_p = \left( \sum |x_i|^p \right)^{\frac{1}{p}}\)

  • \(\varepsilon\) is a small constant to prevent division by zero.
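The formula above can be sketched in NumPy (an illustrative reimplementation, not the actual lucid internals; the helper name `lp_normalize` is hypothetical):

```python
import numpy as np

def lp_normalize(x: np.ndarray, p: int = 2, dim: int = 1,
                 eps: float = 1e-12) -> np.ndarray:
    # Lp norm along `dim`, with keepdims=True so it broadcasts back over x.
    norm = np.sum(np.abs(x) ** p, axis=dim, keepdims=True) ** (1.0 / p)
    # Clamp the norm at eps to avoid division by zero.
    return x / np.maximum(norm, eps)

x = np.array([[3.0, 4.0], [1.0, 2.0]])
print(lp_normalize(x, p=2, dim=1))
# each row is scaled to unit Euclidean length
```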

Examples

L2 Normalization along a specified dimension:

>>> import lucid
>>> input_tensor = lucid.Tensor([[3.0, 4.0], [1.0, 2.0]])
>>> output = lucid.nn.functional.normalize(input_tensor, p=2, dim=1)
>>> print(output)
Tensor([[0.6, 0.8],
        [0.4472, 0.8944]])

L1 Normalization:

>>> output = lucid.nn.functional.normalize(input_tensor, p=1, dim=1)
>>> print(output)
Tensor([[0.4286, 0.5714],
        [0.3333, 0.6667]])

Note

The function ensures numerical stability by dividing by \(\max(\|x\|_p, \varepsilon)\) rather than by the norm directly, so a vanishing norm never causes division by zero.

Caution

If a slice along the specified dimension is all zeros, its norm falls below eps and the output slice is simply the zeros divided by eps, i.e. still all zeros rather than a unit vector. Check for degenerate inputs if downstream code assumes unit-norm outputs.
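A short NumPy sketch of this edge case (assuming the same \(\max(\|x\|_p, \varepsilon)\) formula as above): the all-zero row stays all-zero instead of being normalized to unit length.

```python
import numpy as np

eps = 1e-12
x = np.array([[0.0, 0.0],   # degenerate row: zero norm
              [3.0, 4.0]])  # ordinary row: norm 5
norm = np.sum(np.abs(x) ** 2, axis=1, keepdims=True) ** 0.5
y = x / np.maximum(norm, eps)  # eps takes over for the zero row
print(y)  # first row remains [0., 0.]; only the second becomes a unit vector
```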