PyTorch Pad

Introduction to PyTorch Pad

PyTorch pad is a function available in the torch library; its fully qualified name, including the package and module names, is

torch.nn.functional.pad(input, pad, mode="constant", value=0.0)

It is used to add the required padding to a tensor. In this article, we dive into the topic of PyTorch padding: an overview of PyTorch pad, how to use it, PyTorch pad sequences, its parameters, a few examples, and a conclusion.

PyTorch pad overview

PyTorch pad adds extra padding to input tensors and sequences so that they reach a specified size and can be used in a neural network architecture. In natural language processing, for example, the information mostly arrives as strings of varying length, which cannot be fed directly to a neural network. Padding is therefore added so that every item in a batch is extended to the size of the largest one, with the empty positions filled by the padding value. Most of the time, the padding value used is 0 (zero).
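
As a minimal sketch (the tensor values and sizes here are illustrative, not taken from this article), zero-padding a shorter tensor so it matches the longest one in a batch looks like this:

import torch
import torch.nn.functional as F

# Two 1-D tensors of different lengths that we want to stack into one batch.
long_seq = torch.tensor([1, 2, 3, 4, 5])
short_seq = torch.tensor([7, 8])

# Pad the shorter tensor on the right with zeros until it matches the longer one.
pad_amount = long_seq.size(0) - short_seq.size(0)
short_padded = F.pad(short_seq, (0, pad_amount), mode="constant", value=0)

batch = torch.stack([long_seq, short_padded])
print(batch)
# tensor([[1, 2, 3, 4, 5],
#         [7, 8, 0, 0, 0]])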

Also, keep in mind that when you use the CUDA backend, the pad operation can introduce non-deterministic behavior (for example in its backward pass), and this behavior cannot easily be switched off. You can refer to the PyTorch notes on reproducibility for additional details.
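
As a hedged sketch (available in recent PyTorch releases; whether a particular padding operation has a deterministic CUDA implementation depends on the version), you can at least ask PyTorch for deterministic algorithms globally, so that operations without a deterministic implementation raise an error instead of silently producing non-reproducible results:

import torch

# Request deterministic algorithms wherever they exist; operations that only
# have non-deterministic CUDA implementations will then raise a RuntimeError.
torch.use_deterministic_algorithms(True)

# Fixing the seed keeps the rest of the pipeline reproducible as well.
torch.manual_seed(0)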

How to use PyTorch pad?

We can use PyTorch pad through the function signature specified above. There are also certain factors related to the padding that help you understand how the padding happens and how it can be used; they are discussed here –

Size of padding – The padding size is the amount by which we want the dimensions of the input to be padded, described starting from the last dimension and moving backwards; in total, the last len(pad)/2 dimensions of the input will be padded. Let us take one example to understand how it works: if you want to pad only the input tensor’s last dimension, you can do so by specifying the pad in the form (left padding, right padding). If the last two dimensions of the input tensor are to be padded, then you can specify the padding in the form (left padding, right padding, top padding, bottom padding). Finally, for padding the last three dimensions of the input tensor, you can specify the padding in the form (left padding, right padding, top padding, bottom padding, front padding, back padding).

Mode of padding – There are four padding modes: constant, reflect, replicate, and circular (with module counterparts such as ConstantPad2d, ReflectionPad2d, and ReplicationPad2d). Constant padding works for arbitrary dimensions. Reflection and replication padding are used for padding the last three dimensions of a 5D input tensor, the last two dimensions of a 4D input tensor, or the single last dimension of a 3D input tensor.
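
The snippet below (a small illustrative sketch, not taken from this article) compares what the different modes produce on a tiny 3-D tensor:

import torch
import torch.nn.functional as F

x = torch.arange(1., 5.).reshape(1, 1, 4)   # tensor([[[1., 2., 3., 4.]]])

print(F.pad(x, (2, 2), mode="constant", value=0))
# tensor([[[0., 0., 1., 2., 3., 4., 0., 0.]]])

print(F.pad(x, (2, 2), mode="reflect"))
# tensor([[[3., 2., 1., 2., 3., 4., 3., 2.]]])

print(F.pad(x, (2, 2), mode="replicate"))
# tensor([[[1., 1., 1., 2., 3., 4., 4., 4.]]])

print(F.pad(x, (2, 2), mode="circular"))
# tensor([[[3., 4., 1., 2., 3., 4., 1., 2.]]])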

PyTorch pad sequences

Most sequences containing text information have variable lengths. Therefore, before we use them in a neural network architecture, we have to add padding to all the input sequences we provide. Usually, the padding added is zeros at the end of each sequence, so that every sequence is extended to the length that fits the longest item in the same batch.

For example, if we have the text data

It’s a beautiful day
Yes It is
Sure

Here, we can use this data for natural language processing, but in the case of a neural network, we have to pad the shorter inputs with some value so that each item in the batch reaches the maximum sequence length of 4. After padding, our data will look like this –

It’s a beautiful day
Yes It is <padding>
Sure <padding> <padding> <padding>

To pad the input data, we can use the padding utilities available in the torch library for PyTorch tensors.
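
For variable-length sequences, torch.nn.utils.rnn.pad_sequence is the usual convenience function. In the sketch below, the integer token encodings for the three sentences are made up purely for illustration:

import torch
from torch.nn.utils.rnn import pad_sequence

# Hypothetical integer encodings of the three sentences above.
seq1 = torch.tensor([11, 12, 13, 14])   # "It's a beautiful day"
seq2 = torch.tensor([21, 11, 22])       # "Yes It is"
seq3 = torch.tensor([31])               # "Sure"

# Pad every sequence with 0 so that all of them reach the longest length (4).
batch = pad_sequence([seq1, seq2, seq3], batch_first=True, padding_value=0)
print(batch)
# tensor([[11, 12, 13, 14],
#         [21, 11, 22,  0],
#         [31,  0,  0,  0]])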

PyTorch pad Parameters

We can make use of the pad function through its signature, which is –

torch.nn.functional.pad(input, pad, mode="constant", value=0.0)

The parameters used in the above function definition are described below –

input – This object is of tensor form and can have any number of dimensions.

pad – A tuple consisting of m elements, where m is always an even number and m/2 is less than or equal to the number of dimensions of the input tensor.

mode – The padding mode, which can be constant, reflect, replicate, or circular. By default, the value is constant when not specified.

value – The fill value used for constant padding. By default, the value is 0 when not specified.
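
As a short illustrative sketch, the value argument only takes effect in constant mode; for example, you can fill the new border entries with -1 instead of 0:

import torch
import torch.nn.functional as F

x = torch.zeros(2, 2)

# Constant padding of one element on every side, using a custom fill value.
print(F.pad(x, (1, 1, 1, 1), mode="constant", value=-1))
# tensor([[-1., -1., -1., -1.],
#         [-1.,  0.,  0., -1.],
#         [-1.,  0.,  0., -1.],
#         [-1., -1., -1., -1.]])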

PyTorch pad examples

Let us understand the implementation of the pad function with the help of a few examples.

Example #1

Code:

import torch
import torch.nn.functional as F

sample4DEducbaTensor = torch.empty(3, 3, 4, 2)
paddingLastDimension = (1, 1)  # pad the last dimension by 1 on each side
outputPaddedTensor = F.pad(sample4DEducbaTensor, paddingLastDimension, "constant", 0)  # effectively zero padding
print(outputPaddedTensor.size())

The output of the above code is as shown below –

torch.Size([3, 3, 4, 4])

Example #2

Code:

sample4DEducbaTensor = torch.empty(3, 3, 4, 2)
padLastTwoDimensions = (1, 1, 2, 2)  # pad the last dimension by (1, 1) and the second last dimension by (2, 2)
outputPaddedTensor = F.pad(sample4DEducbaTensor, padLastTwoDimensions, "constant", 0)
print(outputPaddedTensor.size())

The output of the above code is as shown below –

torch.Size([3, 3, 8, 4])

Example #3

Code:

sample4DEducbaTensor = torch.empty(3, 3, 4, 2)
p3d = (0, 1, 2, 1, 3, 3)  # padding for left, right, top, bottom, front and back
outputPaddedTensor = F.pad(sample4DEducbaTensor, p3d, "constant", 0)
print(outputPaddedTensor.size())

The output of the above code is as shown below –

torch.Size([3, 9, 7, 3])

Conclusion

PyTorch pad is used for adding padding to a tensor so that it can be passed to a neural network. By default, the padding value is 0.

Recommended Articles

This is a guide to PyTorch Pad. Here we discuss the implementation of the pad function with the help of examples and their outputs. You may also have a look at the following articles to learn more –

  1. PyTorch TensorBoard
  2. What is PyTorch?
  3. PyTorch Versions
  4. Dataset Pytorch