nn.HardSwish
class lucid.nn.HardSwish
The HardSwish module applies the HardSwish activation function to the input tensor. The HardSwish function is defined as:

\[ \text{HardSwish}(\mathbf{x}) = \mathbf{x} \cdot \text{HardSigmoid}(\mathbf{x}) \]
Where \(\text{HardSigmoid}(\mathbf{x})\) is a piecewise linear approximation of the sigmoid function:

\[ \text{HardSigmoid}(\mathbf{x}) = \text{clip}\!\left(\frac{\mathbf{x} + 3}{6},\ 0,\ 1\right) \]
The HardSwish activation function is computationally efficient and is commonly used in lightweight neural network architectures such as MobileNetV3.
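For reference, the formula above can be expressed in a few lines of NumPy. This is only a sketch of the math, independent of lucid's internal implementation; the `hard_sigmoid` and `hard_swish` helpers are illustrative names, not part of the lucid API.

import numpy as np

def hard_sigmoid(x: np.ndarray) -> np.ndarray:
    # Piecewise linear approximation of the sigmoid: clip((x + 3) / 6, 0, 1)
    return np.clip((x + 3.0) / 6.0, 0.0, 1.0)

def hard_swish(x: np.ndarray) -> np.ndarray:
    # HardSwish(x) = x * HardSigmoid(x), applied element-wise
    return x * hard_sigmoid(x)

x = np.array([-4.0, -1.0, 0.0, 2.0, 4.0])
print(hard_swish(x))  # approx. [-0., -0.3333, 0., 1.6667, 4.]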
Class Signature
class lucid.nn.HardSwish()
Forward Calculation
The HardSwish module performs the following operation:

\[ \mathbf{y} = \mathbf{x} \cdot \text{HardSigmoid}(\mathbf{x}) = \mathbf{x} \cdot \text{clip}\!\left(\frac{\mathbf{x} + 3}{6},\ 0,\ 1\right) \]
Where:

- \(\mathbf{x}\) is the input tensor.
- \(\mathbf{y}\) is the output tensor, calculated as the element-wise product of the input and the hard sigmoid.
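As a quick sanity check of the forward formula, the element-wise computation can be reproduced with plain NumPy. This is a standalone sketch (not lucid code), using the same input values as the first example below:

import numpy as np

x = np.array([-1.0, 2.0, -0.5, 3.0])
y = x * np.clip((x + 3.0) / 6.0, 0.0, 1.0)  # y = x * HardSigmoid(x)
print(y)  # approx. [-0.3333, 1.6667, -0.2083, 3.]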
Backward Gradient Calculation
During backpropagation, the gradient with respect to the input is computed as:

\[ \frac{\partial \text{HardSwish}(\mathbf{x})}{\partial \mathbf{x}} =
\begin{cases}
0 & \text{if } \mathbf{x} \le -3 \\
\dfrac{2\mathbf{x} + 3}{6} & \text{if } -3 < \mathbf{x} < 3 \\
1 & \text{if } \mathbf{x} \ge 3
\end{cases} \]
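The piecewise gradient above can be verified numerically. The following standalone NumPy sketch (the `hard_swish_grad` helper is illustrative, not part of lucid) compares the closed-form expression against a central finite difference:

import numpy as np

def hard_swish_grad(x: np.ndarray) -> np.ndarray:
    # Derivative of HardSwish: 0 for x <= -3, (2x + 3) / 6 for -3 < x < 3, 1 for x >= 3
    return np.where(x <= -3.0, 0.0,
                    np.where(x >= 3.0, 1.0, (2.0 * x + 3.0) / 6.0))

hard_swish = lambda t: t * np.clip((t + 3.0) / 6.0, 0.0, 1.0)

x = np.array([-4.0, -1.0, 0.5, 2.0, 4.0])
eps = 1e-6
numeric = (hard_swish(x + eps) - hard_swish(x - eps)) / (2.0 * eps)
print(np.allclose(hard_swish_grad(x), numeric, atol=1e-5))  # True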
Examples
Applying `HardSwish` to a single input tensor:
>>> import lucid.nn as nn
>>> from lucid import Tensor
>>> input_tensor = Tensor([[-1.0, 2.0, -0.5, 3.0]], requires_grad=True) # Shape: (1, 4)
>>> hardswish = nn.HardSwish()
>>> output = hardswish(input_tensor)
>>> print(output)
Tensor([[-0.3333, 1.6667, -0.2083, 3.0]], grad=None)
# Backpropagation
>>> output.backward()
>>> print(input_tensor.grad)
[[...]] # Gradients with respect to input_tensor
Using `HardSwish` within a simple neural network:
>>> import lucid.nn as nn
>>> from lucid import Tensor
>>> class SimpleHardSwishModel(nn.Module):
... def __init__(self):
... super(SimpleHardSwishModel, self).__init__()
... self.hardswish = nn.HardSwish()
...
... def forward(self, x):
... return self.hardswish(x)
...
>>> model = SimpleHardSwishModel()
>>> input_data = Tensor([[-2.0, 0.5, 1.5, -0.3]], requires_grad=True) # Shape: (1, 4)
>>> output = model(input_data)
>>> print(output)
Tensor([[-0.3333, 0.2917, 1.125, -0.135]], grad=None)
# Backpropagation
>>> output.backward()
>>> print(input_data.grad)
[[...]] # Gradients with respect to input_data
Integrating `HardSwish` into a neural network model:
>>> import lucid.nn as nn
>>> from lucid import Tensor
>>> class NeuralNetwork(nn.Module):
... def __init__(self):
... super(NeuralNetwork, self).__init__()
... self.fc1 = nn.Linear(in_features=3, out_features=5)
... self.hardswish = nn.HardSwish()
... self.fc2 = nn.Linear(in_features=5, out_features=2)
...
... def forward(self, x):
... x = self.fc1(x)
... x = self.hardswish(x)
... x = self.fc2(x)
... return x
...
>>> model = NeuralNetwork()
>>> input_data = Tensor([[0.5, -1.2, 3.3]], requires_grad=True) # Shape: (1, 3)
>>> output = model(input_data)
>>> print(output)
Tensor([[...]], grad=None) # Output tensor after passing through the model
# Backpropagation
>>> output.backward()
>>> print(input_data.grad)
[[...]] # Gradients with respect to input_data