PyTorch ReLU

Introduction to PyTorch ReLU

An activation function in PyTorch introduces non-linearity into an otherwise linear model, which lets the network represent complex data through a composition of simple functions. ReLU has no learnable parameters, so it does not have to be used as a module; the functional form works just as well. When we want to try several activation functions in the same model, it is convenient to create them as modules in __init__ and apply them in the forward pass.
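
As a minimal sketch of that pattern (the layer sizes here are arbitrary and only for illustration):

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)    # arbitrary sizes for illustration
        self.fc2 = nn.Linear(16, 4)
        self.relu = nn.ReLU()          # activations created once in __init__
        self.tanh = nn.Tanh()

    def forward(self, x):
        x = self.relu(self.fc1(x))     # applied in the forward pass
        x = self.tanh(self.fc2(x))
        return x

net = SmallNet()
print(net(torch.randn(2, 8)).shape)    # torch.Size([2, 4])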

What is PyTorch ReLU?

PyTorch ReLU is the activation function defined as relu(x) = max(0, x), that is, relu(x) = 0 if x < 0 and relu(x) = x if x >= 0. Applying it after a layer makes that layer non-linear. Although several related functions behave like ReLU, this is the most commonly used activation function in machine learning. Positive inputs pass through unchanged and negative inputs are mapped to zero.
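
A quick check of that definition (the input values are chosen purely for illustration):

import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(x))   # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])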

How to Use PyTorch ReLU?

ReLU layers can be constructed in PyTorch easily with simple coding.

import torch.nn as nn

relu1 = nn.ReLU(inplace=False)


Input and output dimensions do not need to be specified because the function is applied element-wise. The inplace argument controls how the function treats its input: with inplace=True, the input tensor is overwritten with the output in memory. This saves memory, but it can cause problems when the original input is still needed later, since the input keeps getting replaced by the output. It is usually safer to leave inplace set to False so that input and output occupy separate storage in memory.
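
A small sketch of the difference (the tensor values are arbitrary):

import torch
import torch.nn as nn

x = torch.tensor([-1.0, 2.0, -3.0])
out = nn.ReLU(inplace=False)(x)
print(x)    # tensor([-1.,  2., -3.])  original input is preserved
print(out)  # tensor([0., 2., 0.])

y = torch.tensor([-1.0, 2.0, -3.0])
nn.ReLU(inplace=True)(y)
print(y)    # tensor([0., 2., 0.])  input was overwritten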

The next step is to set up a container in which the ReLU layer can be placed.


cont = nn.Sequential()

The next step is to define a convolutional layer.

begin_convol_layer = nn.Conv2d(in_channels=2, out_channels=12, kernel_size=2, stride=1, padding=1)

The convolutional layer can then be added to the container as a named module.

cont.add_module("Conv1", begin_convol_layer)

The ReLU layer is added to the container in the same way.

cont.add_module("Relu1", relu1)

With all of this code in place, running it produces the output of the container; this is the basic way to use ReLU in PyTorch.
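
Putting the steps above together, a runnable sketch might look like this (the input shape is an assumption made only for illustration):

import torch
import torch.nn as nn

relu1 = nn.ReLU(inplace=False)

cont = nn.Sequential()
begin_convol_layer = nn.Conv2d(in_channels=2, out_channels=12,
                               kernel_size=2, stride=1, padding=1)
cont.add_module("Conv1", begin_convol_layer)
cont.add_module("Relu1", relu1)

# a batch of 4 two-channel 8x8 inputs, chosen arbitrarily
x = torch.randn(4, 2, 8, 8)
out = cont(x)
print(out.shape)   # torch.Size([4, 12, 9, 9])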

PyTorch ReLU Parameters

ReLU itself has no learnable parameters; weight and bias belong to the surrounding layers (such as Conv2d or Linear) rather than to the activation. The one argument to note is inplace, which says whether the output should be written into the same memory as the input. It is optional, and if it is not given, ReLU uses the default value of False, so input and output are stored in separate memory spaces.
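
A quick sketch confirming that an nn.ReLU module holds no learnable parameters and defaults to inplace=False:

import torch.nn as nn

relu = nn.ReLU()
print(list(relu.parameters()))        # [] - nothing to learn
print(relu.inplace)                   # False by default

conv = nn.Conv2d(2, 12, kernel_size=2)
print(len(list(conv.parameters())))   # 2 - weight and bias live in the layer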

PyTorch ReLU Functional Elements

1. Threshold – thresholds each element of the input tensor.
2. Relu – applies the rectified linear unit function element-wise; relu_ is the in-place variant of relu(), and relu6 additionally clamps the output at 6 (see the sketch after this list).
3. Softmin and softmax – the softmin and softmax functions can be applied to the input along a chosen dimension.
4. Silu – the sigmoid linear unit (SiLU) function is applied element-wise with this function.
5. Batch_norm and group_norm – batch normalization of each channel is applied across the batch data, and group normalization is applied over groups of channels.
6. Instance_norm and layer_norm – instance_norm applies instance normalization to each sample in the batch, while layer normalization is applied only over the dimensions specified by the user.
7. Normalize – inputs are normalized over the specified dimensions with the help of this function.
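
As a brief sketch of a few of these functional calls (the input values are arbitrary):

import torch
import torch.nn.functional as F

x = torch.tensor([[-3.0, -0.2, 0.4, 7.0]])

print(F.threshold(x, 0.0, 0.0))   # threshold=0, value=0: same result as ReLU here
print(F.relu(x))                  # tensor([[0.0000, 0.0000, 0.4000, 7.0000]])
print(F.relu6(x))                 # like relu, but values are clamped to [0, 6]
print(F.softmax(x, dim=1))        # rows sum to 1
print(F.softmin(x, dim=1))        # softmax of the negated input
print(F.silu(x))                  # x * sigmoid(x)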

Function Elements

• Linear and bilinear – linear and bilinear transformations can be applied to the data with the help of these functions.
• Dropout – some elements of the input are randomly zeroed with a probability drawn from a Bernoulli distribution.
• Embedding – a lookup table of embeddings in a fixed-size dictionary is provided, and rows are looked up by index.
• Pdist – the p-norm distance is computed between every pair of row vectors in the input.
• L1_loss – the mean absolute difference between input and target is taken with the help of this function (see the sketch after this list).
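
A minimal sketch of these functional calls, with shapes and values chosen purely for illustration:

import torch
import torch.nn.functional as F

x = torch.randn(4, 8)

w = torch.randn(3, 8)                             # weight for a linear transform
print(F.linear(x, w).shape)                       # torch.Size([4, 3])
print(F.dropout(x, p=0.5, training=True).shape)   # elements zeroed at random

table = torch.randn(10, 5)                        # 10-entry dictionary of 5-dim embeddings
idx = torch.tensor([1, 4, 9])
print(F.embedding(idx, table).shape)              # torch.Size([3, 5])

print(F.pdist(x).shape)                           # torch.Size([6]) - pairwise distances of 4 rows
print(F.l1_loss(x, torch.zeros_like(x)))          # mean absolute difference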

PyTorch ReLU Examples

import torch
import torch.nn as nn
import torch.nn.functional as F

# applying an nn.ReLU module to a random tensor
a = nn.ReLU()
inp = torch.randn(3)        # `in` is a Python keyword, so the input is named inp
out = a(inp)

# concatenating ReLU of the input and of its negation
a = nn.ReLU()
inp = torch.randn(3).unsqueeze(0)
out = torch.cat((a(inp), a(-inp)))

# a small convolutional network that uses F.relu in its forward pass
class Relu(nn.Module):
    def __init__(self):
        super(Relu, self).__init__()
        self.conv1 = nn.Conv2d(1, 3, 7)
        self.conv2 = nn.Conv2d(3, 23, 7)
        self.fc1 = nn.Linear(23 * 7 * 7, 220)
        self.fc2 = nn.Linear(220, 96)
        self.fc3 = nn.Linear(96, 20)

    def forward(self, a):
        a = F.max_pool2d(F.relu(self.conv1(a)), (3, 3))
        a = F.max_pool2d(F.relu(self.conv2(a)), 3)
        a = torch.flatten(a, 1)   # with 87x87 inputs the feature map is 23 x 7 x 7 here
        a = F.relu(self.fc1(a))
        a = F.relu(self.fc2(a))
        a = self.fc3(a)
        return a

relu = Relu()
print(relu)
# an image decoder that stacks ConvTranspose2d + BatchNorm2d + ReLU blocks
class ImageDecoder(nn.Module):
    def __init__(self, in_size, num_channels, ngf, num_layers, activation='tanh'):
        super(ImageDecoder, self).__init__()
        ngf = ngf * (3 ** (num_layers - 3))
        layers_def = [nn.ConvTranspose2d(in_size, ngf, 6, 2, 0, bias=False),
                      nn.BatchNorm2d(ngf),
                      nn.ReLU(True)]
        for k in range(2, num_layers - 2):
            layers_def += [nn.ConvTranspose2d(ngf, ngf // 3, 6, 3, 1, bias=False),
                           nn.BatchNorm2d(ngf // 3),
                           nn.ReLU(True)]
            ngf = ngf // 3
        layers_def += [nn.ConvTranspose2d(ngf, num_channels, 4, 2, 1, bias=False)]
        if activation == 'tanh':
            layers_def += [nn.Tanh()]
        elif activation == 'sigmoid':
            layers_def += [nn.Sigmoid()]
        else:
            raise NotImplementedError
        self.main = nn.Sequential(*layers_def)

    def forward(self, x):
        # pass the input through the assembled sequential stack
        return self.main(x)
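
A usage sketch of this decoder; the argument values below are assumptions picked so the module builds and runs, not values from the original:

dec = ImageDecoder(in_size=100, num_channels=3, ngf=64, num_layers=4)
z = torch.randn(1, 100, 1, 1)   # a single latent vector treated as a 1x1 feature map
print(dec(z).shape)             # torch.Size([1, 3, 12, 12])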

Difference between nn.ReLU() and F.relu()

nn.ReLU creates an nn.Module, so it can be added to a sequential model or registered as a layer of the network. F.relu cannot be used that way because it is a functional API; instead, it is called directly in the forward pass of the code.

F.relu takes the output of a layer as its input and converts all negative values to 0 before returning the result. nn.ReLU performs the same operation, but the module has to be created first (usually in __init__) and then called in the forward pass. Neither one holds learnable tensors; the practical difference is that nn.ReLU is a module object that appears in the model's structure, while F.relu is just a function call.
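
A side-by-side sketch of the two styles; both networks compute the same thing, and the layer sizes are arbitrary:

import torch
import torch.nn as nn
import torch.nn.functional as F

# module style: nn.ReLU can live inside nn.Sequential
module_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# functional style: F.relu is called in forward
class FunctionalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 4)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

x = torch.randn(2, 8)
print(module_net(x).shape)        # torch.Size([2, 4])
print(FunctionalNet()(x).shape)   # torch.Size([2, 4])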

Conclusion

The ReLU function helps models handle complex data by turning a purely linear computation into a non-linear one. ReLU is available both as a stateless functional API and as a module object with no learnable parameters. Which form to use depends on how the model is written, so the resulting code will look slightly different depending on the approach chosen.

Recommended Articles

This is a guide to PyTorch ReLU. Here we discuss the introduction, what PyTorch ReLU is, how to use it, and examples with code. You may also have a look at the following articles to learn more –

  1. What is PyTorch?
  2. PyTorch vs Keras
  3. Mxnet vs Pytorch
  4. PyTorch Versions