PyTorch interpolate

Definition of PyTorch interpolate

In deep learning, PyTorch provides many kinds of functionality to the user, and interpolate is one of the functions that PyTorch provides. By using interpolate, we can resize the input according to our requirement, and the interpolation algorithm that is used depends on the setting of the mode parameter. Interpolate supports 1D data (temporal data such as vectors), 2D data (images such as JPG, PNG, etc.), and 3D data (volumetric data such as point clouds). In other words, we can say that interpolation is applied over the temporal, spatial, or volumetric dimensions. So, as per our requirement, we can create the output either by giving an explicit output size or by using a scale factor.
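As a minimal sketch of that idea (the tensor sizes and factors below are only illustrative, not taken from the article), the same image-like tensor can be resized once with a scale factor and once with an explicit output size:

import torch
import torch.nn.functional as Fun

# A 4D image-like tensor: batch x channels x height x width
img = torch.randn(1, 3, 8, 8)

# Double the spatial size using a scale factor
up = Fun.interpolate(img, scale_factor=2, mode='nearest')
print(up.shape)    # torch.Size([1, 3, 16, 16])

# Resize to an explicit target size instead
down = Fun.interpolate(img, size=(4, 4), mode='bilinear', align_corners=False)
print(down.shape)  # torch.Size([1, 3, 4, 4])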

What is PyTorch interpolate?

As per the given size or scale_factor parameter, it down/up samples the input data.

It supports temporal (1D, such as vector data), spatial (2D, such as jpg, png, and other image data), and volumetric (3D, such as point cloud data) types as input. The expected input format is minibatch x channels x [optional depth] x [optional height] x width. For temporal data, it expects the input to be a 3D tensor, namely minibatch x channels x width. For spatial data, it expects the input to be a 4D tensor, namely minibatch x channels x height x width. For volumetric data, it expects the input to be a 5D tensor, namely minibatch x channels x depth x height x width. Which interpolation algorithm can be used depends on this dimensionality: linear mode applies only to 3D inputs, bilinear and bicubic only to 4D inputs, and trilinear only to 5D inputs.
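As a small sketch of these expected input layouts (the sizes and modes below are just examples, not from the article), each rank of input is paired with a mode that matches its dimensionality:

import torch
import torch.nn.functional as Fun

# Temporal data: minibatch x channels x width (3D tensor)
t1d = torch.randn(2, 3, 10)
print(Fun.interpolate(t1d, size=20, mode='linear', align_corners=False).shape)

# Spatial data: minibatch x channels x height x width (4D tensor)
t2d = torch.randn(2, 3, 10, 10)
print(Fun.interpolate(t2d, scale_factor=2, mode='bilinear', align_corners=False).shape)

# Volumetric data: minibatch x channels x depth x height x width (5D tensor)
t3d = torch.randn(2, 3, 4, 10, 10)
print(Fun.interpolate(t3d, scale_factor=2, mode='trilinear', align_corners=False).shape)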

How to use PyTorch interpolate?

Now let’s see how we can use the interpolate function in PyTorch as follows.

Syntax:

torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None)

Explanation

In the above syntax, the interpolate function takes the following parameters.

  • input: the input tensor to be resized.
  • size: the output size of the tensor, which depends on the user requirement; it can be a single integer or a tuple with one entry per spatial dimension.
  • scale_factor: it is used to indicate how many times larger (or smaller) the output is than the input. If the input has several spatial dimensions, it can also be given as a tuple.
  • mode: it is used to specify the sampling algorithm, such as 'nearest', 'linear', 'bilinear', 'bicubic', 'trilinear', or 'area', and it depends on the requirement and on the input dimensionality.
  • align_corners: geometrically, we consider the pixels of the input and output as squares rather than points. When set to True, the input and output tensors are aligned by the center points of their corner pixels, thereby preserving the values at the corner pixels. When set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values; this makes the operation independent of the input size when scale_factor stays unchanged. It only has an effect when mode is 'linear', 'bilinear', 'bicubic', or 'trilinear'. The default setting is False. With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) do not proportionally align the output and input pixels, so the output values can depend on the input size. This was the default behaviour for these modes up to version 0.3.1; since then, the default behaviour is align_corners = False. See Upsample for concrete examples of how this affects the output.
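As a tiny illustration of the align_corners flag (the ramp values below are only an example), upsampling the same 1D signal with both settings samples at different positions, so the interior values of the two outputs differ:

import torch
import torch.nn.functional as Fun

# A 1D "ramp" signal: batch x channels x width = 1 x 1 x 4
x = torch.tensor([[[0.0, 1.0, 2.0, 3.0]]])

# Upsample to width 8 with linear interpolation under both alignments;
# the two settings place the new samples at different positions,
# so the interior values of the two outputs differ.
print(Fun.interpolate(x, size=8, mode='linear', align_corners=True))
print(Fun.interpolate(x, size=8, mode='linear', align_corners=False))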

PyTorch interpolate function

For interpolation in PyTorch, an open issue calls for more interpolation features. There is currently an nn.functional.grid_sample() feature; however, at first, this did not appear to be what I wanted (but we will come back to this later).

Specifically, I needed to take an image, W x H x C, and sample it many times at different arbitrary locations. Note also that this is not quite the same as upsampling, which samples exhaustively and also does not give us flexibility over the precision of the sampling.
When scale_factor is specified, if recompute_scale_factor=True, scale_factor is used to compute the output_size, which is then used to infer new scales for the interpolation. The default behaviour of recompute_scale_factor changed in version 1.6.0.

The interpolate function requires the input to be in genuine BCHW format and not CHW as a single image would be. Hence, either create the tensor directly with a batch dimension, for example x = torch.rand(1, 2, 10, 10).cuda(), or call unsqueeze(0) on x to add a batch dimension (since it is just one image, it will be a batch of size 1, but the function still needs it).
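A minimal sketch of that batch-dimension fix (the tensor sizes are only illustrative, and .cuda() is left out so the snippet also runs on a CPU-only machine):

import torch
import torch.nn.functional as Fun

# A single image in CHW format: channels x height x width
x = torch.rand(2, 10, 10)

# interpolate expects BCHW, so add a batch dimension of size 1 first
x_batched = x.unsqueeze(0)   # shape: 1 x 2 x 10 x 10
y = Fun.interpolate(x_batched, scale_factor=2, mode='bilinear', align_corners=False)
print(y.shape)               # torch.Size([1, 2, 20, 20])
print(y.squeeze(0).shape)    # drop the batch dimension again: torch.Size([2, 20, 20])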

PyTorch interpolate Linear

Now let’s see what linear interpolation is as follows.

Linear interpolation is a method of calculating intermediate data between known values by conceptually drawing a straight line between two adjacent known values. An interpolated value is any point along that line. You use linear interpolation to, for instance, draw graphs or animate between keyframes.
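As a tiny PyTorch illustration of this (the two known values below are arbitrary), mode='linear' places each new sample on the straight line between its neighbouring known values:

import torch
import torch.nn.functional as Fun

# Two known values, 0 and 2, as a 1 x 1 x 2 signal
x = torch.tensor([[[0.0, 2.0]]])

# Interpolate to three samples: the middle sample lies halfway along the line
y = Fun.interpolate(x, size=3, mode='linear', align_corners=True)
print(y)  # tensor([[[0., 1., 2.]]])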

PyTorch interpolate Examples

Now let’s see the different examples of interpolation for better understanding as follows.

import torch
import torch.nn.functional as Fun

X = 2
Y = 4
Z = A = 8
# Random 4D input: batch x channels x height x width = 2 x 4 x 8 x 8
B = torch.randn((X, Y, Z, A))
# Upsample the spatial dimensions to 24 x 24 with bilinear interpolation
b_usample = Fun.interpolate(B, [Z*3, A*3], mode='bilinear', align_corners=True)
# Scale channel 0 of a copy, then upsample the copy the same way
b_mod = B.clone()
b_mod[:, 0] *= 2000
b_mod_usample = Fun.interpolate(b_mod, [Z*3, A*3], mode='bilinear', align_corners=True)
# Each channel is interpolated independently, so only channel 0 differs
print(torch.isclose(b_usample[:, 0], b_mod_usample[:, 0]).all())
print(torch.isclose(b_usample[:, 1], b_mod_usample[:, 1]).all())
print(torch.isclose(b_usample[:, 2], b_mod_usample[:, 2]).all())
print(torch.isclose(b_usample[:, 3], b_mod_usample[:, 3]).all())

Explanation

In the above example, we implement the interpolate function in PyTorch. Here, first, we create a random 4D tensor as shown in the above code. After that, we upsample it with the interpolate function, once as it is and once with channel 0 scaled by 2000. Because each channel is interpolated independently, the comparison prints tensor(False) for channel 0 and tensor(True) for the remaining channels.

Now let’s see another example of the interpolate function as follows.

import torch
import torch.nn.functional as Fun

# 3D input: batch x channels x width = 6 x 26 x 151
A = torch.randn(6, 26, 151)
# Resize the last dimension from 151 to 150 (default mode is 'nearest')
output = Fun.interpolate(A, size=150)
print(output.shape)

Explanation

In the above example, we first import the required packages; after that, we create a 3D tensor using the randn() function as shown. After that, we use the interpolate() function to resize the last dimension to 150, so the printed shape is torch.Size([6, 26, 150]).

Conclusion

We hope that from this article you have learned more about PyTorch interpolate. The above article covers the essential idea of PyTorch interpolate, along with its syntax and examples. Furthermore, we learned how and when to use PyTorch interpolate.

Recommended Articles

This is a guide to PyTorch interpolate. Here we discuss the definition, what PyTorch interpolate is, and how to use PyTorch interpolate along with its function and examples. You may also have a look at the following articles to learn more –

  1. Dataset Pytorch
  2. PyTorch Conv2d
  3. What is PyTorch?
  4. PyTorch Versions