PyTorch ReLU

Updated April 7, 2023


Introduction to PyTorch ReLU

In PyTorch, activation functions are available both as classes (modules) and as plain functions; they introduce non-linearity into a network so that it can model complex data rather than only linear relationships. Since ReLU has no learnable parameters, it does not have to be used as a module: the functional form works just as well. When trying out several activation functions together, however, it is often convenient to define them as modules in __init__ and apply them in the forward pass.
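As a minimal sketch (the class name and layer sizes here are illustrative, not from the article), defining the activation as a module in __init__ and applying it in the forward pass looks like this:

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.act = nn.ReLU()      # activation defined once as a module
        self.fc2 = nn.Linear(16, 4)

    def forward(self, x):
        # the same activation module can be reused after any layer
        return self.fc2(self.act(self.fc1(x)))

net = SmallNet()
print(net(torch.randn(2, 8)).shape)  # torch.Size([2, 4])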


What is PyTorch ReLU?

PyTorch ReLU is the activation function defined as relu(x) = max(0, x), i.e. it returns 0 for x < 0 and x itself for x >= 0. Applying a ReLU after a layer makes that layer's output non-linear. Although several related activation functions exist, ReLU is the most commonly used one in machine learning. With the ReLU function, positive inputs are passed through unchanged and negative inputs are mapped to zero.
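For example (a quick check with arbitrary input values):

import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])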

How to Use PyTorch ReLU?

ReLU layers can be constructed in PyTorch easily with simple coding.

relu1 = nn.ReLU(inplace=False)

Input and output dimensions need not be specified because the function is applied element-wise. The inplace argument controls how the function treats its input: with inplace set to True, the input tensor is overwritten with the output in memory. This saves memory, but it can cause problems because the original input is no longer available after the call. It is usually safer to keep inplace set to False so that input and output occupy separate storage.
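A minimal sketch of the difference (tensor values are arbitrary):

import torch
import torch.nn as nn

x = torch.tensor([-1.0, 2.0])
nn.ReLU(inplace=False)(x)
print(x)   # tensor([-1., 2.])  original input left unchanged

y = torch.tensor([-1.0, 2.0])
nn.ReLU(inplace=True)(y)
print(y)   # tensor([0., 2.])   input overwritten with the output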

The next step is to set up a container in which the ReLU layer can be placed.

cont = nn.Sequential()

The next step is to define the convolutional layers.

begin_convol_layer = nn.Conv2d(in_channels=2, out_channels=12, kernel_size=2, stride=1, padding=1)

The convolutional layer can then be added to the container with add_module.

cont.add_module("Conv1", begin_convol_layer)

The ReLU layer is added to the container in the same way.

cont.add_module("Relu1", relu1)

With these pieces in place, running the container on an input produces the output; this is the basic way to use ReLU in PyTorch.
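Putting the steps above together (the input shape below is an assumption chosen only so the snippet runs end to end):

import torch
import torch.nn as nn

relu1 = nn.ReLU(inplace=False)
cont = nn.Sequential()
begin_convol_layer = nn.Conv2d(in_channels=2, out_channels=12,
                               kernel_size=2, stride=1, padding=1)
cont.add_module("Conv1", begin_convol_layer)
cont.add_module("Relu1", relu1)

x = torch.randn(1, 2, 8, 8)   # batch of one 2-channel 8x8 input (assumed shape)
out = cont(x)
print(out.shape)              # torch.Size([1, 12, 9, 9])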

PyTorch ReLU Parameters

ReLU itself has no learnable parameters; the weights and biases belong to the surrounding layers, and those parameters are tracked by the layers directly. The one argument ReLU does take is inplace, which says whether the input should be overwritten with the output in memory. It is optional, and if it is not given, ReLU defaults to False, so input and output are stored in separate memory.
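A quick check confirms that nn.ReLU registers no parameters of its own:

import torch.nn as nn

print(list(nn.ReLU().parameters()))        # []
print(list(nn.Linear(4, 2).parameters()))  # weight and bias come from layers such as Linear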

PyTorch ReLU Functional Element

1. threshold – applies a threshold to every element of the input tensor.
2. relu – applies the rectified linear unit function element-wise; relu_ is the in-place variant, and relu6 clamps the result at 6 (see the sketch after this list).
3. softmin and softmax – the softmin and softmax functions, which can be applied to the input along a chosen dimension.
4. silu – the sigmoid linear unit, x * sigmoid(x), applied element-wise.
5. batch_norm and group_norm – batch normalization and group normalization, applied per channel across the batch data.
6. instance_norm and layer_norm – instance_norm applies instance normalization to each sample in the batch; layer_norm is applied only over the dimensions specified by the user.
7. normalize – normalizes the input over a given dimension with respect to a p-norm.
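A few of these functional elements in use (the input values are arbitrary and only for demonstration):

import torch
import torch.nn.functional as F

x = torch.tensor([-3.0, -0.5, 2.0, 8.0])
print(F.threshold(x, threshold=0.0, value=0.0))  # values <= 0 replaced with 0
print(F.relu(x))                                 # tensor([0., 0., 2., 8.])
print(F.relu6(x))                                # tensor([0., 0., 2., 6.])  clamped at 6
print(F.softmax(x, dim=0))                       # probabilities summing to 1
print(F.silu(x))                                 # x * sigmoid(x)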

Function Element

  • linear and bilinear – apply linear and bilinear transformations to the incoming data.
  • dropout – randomly zeroes elements of the input with a probability drawn from a Bernoulli distribution.
  • embedding – a lookup table of embeddings with a fixed dictionary and size.
  • pdist – computes the p-norm distance between the row vectors present in the input.
  • l1_loss – takes the mean absolute difference between input and target with the help of this function.
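A short illustration of some of these (shapes and values are only for demonstration):

import torch
import torch.nn.functional as F

x = torch.randn(4, 8)
w = torch.randn(5, 8)
print(F.linear(x, w).shape)                # torch.Size([4, 5])
print(F.dropout(x, p=0.5, training=True).shape)
print(F.pdist(x, p=2).shape)               # distances between the 4 row vectors: 6 pairs
print(F.l1_loss(x, torch.zeros_like(x)))   # mean absolute difference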

PyTorch ReLU Examples

Code:

import torch
import torch.nn as nn
import torch.nn.functional as F

a = nn.ReLU()
inp = torch.randn(3)
out = a(inp)

a = nn.ReLU()
inp = torch.randn(3).unsqueeze(0)
out = torch.cat((a(inp), a(-inp)))

class Relu(nn.Module):

    def __init__(self):
        super(Relu, self).__init__()
        # two convolutional layers followed by three fully connected layers
        self.conv1 = nn.Conv2d(1, 3, 7)
        self.conv2 = nn.Conv2d(3, 23, 7)
        # 23 * 7 * 7 assumes the flattened feature map is 23 channels of 7x7
        self.fc1 = nn.Linear(23 * 7 * 7, 220)
        self.fc2 = nn.Linear(220, 96)
        self.fc3 = nn.Linear(96, 20)

    def forward(self, a):
        # ReLU applied after each convolution, followed by max pooling
        a = F.max_pool2d(F.relu(self.conv1(a)), (3, 3))
        a = F.max_pool2d(F.relu(self.conv2(a)), 3)
        a = torch.flatten(a, 1)
        # ReLU applied after each fully connected layer except the last
        a = F.relu(self.fc1(a))
        a = F.relu(self.fc2(a))
        a = self.fc3(a)
        return a


relu = Relu()
print(relu)
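As a usage check (the 88x88 input size is an assumption chosen so that the 23 * 7 * 7 flattened size works out):

x = torch.randn(1, 1, 88, 88)   # one single-channel 88x88 image
print(relu(x).shape)            # torch.Size([1, 20])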

class ImageDecoder(nn.Module):
    def __init__(self, in_size, num_channels, ngf, num_layers, activation='tanh'):
        super(ImageDecoder, self).__init__()

        # widest layer first; the channel count shrinks by a factor of 3 per step
        ngf = ngf * (3 ** (num_layers - 3))
        layers_def = [nn.ConvTranspose2d(in_size, ngf, 6, 2, 0, bias=False),
                      nn.BatchNorm2d(ngf),
                      nn.ReLU(True)]   # inplace ReLU after each transposed convolution

        for k in range(2, num_layers - 2):
            layers_def += [nn.ConvTranspose2d(ngf, ngf // 3, 6, 3, 1, bias=False),
                           nn.BatchNorm2d(ngf // 3),
                           nn.ReLU(True)]
            ngf = ngf // 3

        layers_def += [nn.ConvTranspose2d(ngf, num_channels, 4, 2, 1, bias=False)]
        if activation == 'tanh':
            layers_def += [nn.Tanh()]
        elif activation == 'sigmoid':
            layers_def += [nn.Sigmoid()]
        else:
            raise NotImplementedError

        self.main = nn.Sequential(*layers_def)

    def forward(self, x):
        return self.main(x)
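A hedged usage sketch (all constructor arguments and the 1x1 latent input shape below are assumptions, not values from the article):

decoder = ImageDecoder(in_size=100, num_channels=3, ngf=64, num_layers=5)
z = torch.randn(1, 100, 1, 1)    # assumed latent input
print(decoder(z).shape)          # torch.Size([1, 3, 38, 38]) for these assumed settings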

Difference between nn.ReLU() and F.relu()

nn.ReLU creates an nn.Module, which can be added to a sequential model. F.relu cannot be used that way because it is a functional API; when needed, it is called directly in the forward pass of the code.

F.relu takes a tensor as input and returns a tensor in which every negative value has been converted to 0. nn.ReLU performs the same operation, but the module has to be created first (typically in __init__) and then called in the forward pass. F.relu holds no state of its own, whereas nn.ReLU is a module object that can carry configuration such as the inplace flag.
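A minimal side-by-side sketch of the two styles:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4)

# module style: create once, then call
act = nn.ReLU()
print(act(x))

# functional style: call directly, no object to create
print(F.relu(x))

# both produce the same result
print(torch.equal(act(x), F.relu(x)))  # True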

Conclusion

The ReLU function helps a network model complex data by turning otherwise linear transformations into non-linear ones. Because ReLU has no learnable parameters, it can be used either as a stateless functional call or as a module object. Which form to choose depends on how the code is structured: the module fits naturally into containers such as nn.Sequential, while the functional form fits naturally into the forward pass.

Recommended Articles

We hope that this EDUCBA information on “PyTorch ReLU” was beneficial to you. You can view EDUCBA’s recommended articles for more information.

  1. What is PyTorch?
  2. PyTorch vs Keras
  3. Mxnet vs Pytorch
  4. PyTorch Versions
