PyTorch Transpose

Updated April 7, 2023

Introduction to PyTorch Transpose

PyTorch transpose returns a tensor that is the transposed version of its input: the two given dimensions are swapped so that the output has the shape we require. The output shares its storage with the input, so changing the contents of the input changes the output as well. For a two-dimensional tensor, we can also use the PyTorch `.t()` operation (or the `.T` attribute) to transpose a matrix. The result is a new tensor with a different shape but the same underlying data.
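This storage sharing is easy to verify; a minimal sketch:

```python
import torch

# a 2 x 3 tensor
x = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])

# swap dimensions 0 and 1: shape becomes 3 x 2
y = torch.transpose(x, 0, 1)

# the transpose is a view over the same storage,
# so an in-place change to x is visible through y
x[0, 0] = 99
print(y[0, 0])  # tensor(99)
```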


Overview of PyTorch Transpose

The main purpose of transpose is to change row elements to columns and column elements to rows, so the shape and dimensions of the matrix change in the output. We can apply transpose to both contiguous and non-contiguous tensors; note that a transposed view is itself usually non-contiguous, so its elements are not necessarily laid out in sequence in memory. The `.t()` shorthand works only for tensors with at most two dimensions, while `torch.transpose` can swap any two dimensions of a tensor of any rank, and the input values need not be in any particular order. To reorder more than two axes at once, `permute` accepts a list of integers describing the desired permutation of the axes.
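A minimal sketch contrasting transpose and permute on a three-dimensional tensor:

```python
import torch

t = torch.zeros(2, 3, 4)  # a three-dimensional tensor

# torch.transpose swaps exactly two dimensions
print(torch.transpose(t, 0, 2).shape)  # torch.Size([4, 3, 2])

# permute reorders all dimensions at once from a list of axes
print(t.permute(2, 0, 1).shape)        # torch.Size([4, 2, 3])

# a transposed view is generally non-contiguous in memory
print(t.transpose(0, 2).is_contiguous())  # False
```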

Creating PyTorch Transpose

The function signature looks like this:

torch.transpose(input, dim0, dim1)

where the output is a tensor with dimensions dim0 and dim1 swapped.

Let us see an example where a matrix is transposed.

The first step is to import PyTorch.

import torch

We can check the version of PyTorch as well.

print(torch.__version__)

The next step is to create a matrix in PyTorch.

py_matrix = torch.tensor([[9, 5],
                          [12, 4]])

We can print it to check that the values were entered in the right format.

print(py_matrix)

Now we will do the transpose operation on the above matrix.

py_matrix_transpose = py_matrix.t()

If we print the transposed value, we can see the output where the values have been interchanged.

print(py_matrix_transpose)

While doing the transpose, we assigned the result to a new variable, and the original matrix is left untouched because transpose does not modify its input. This lets us apply transformations without losing the original, though keep in mind that `.t()` returns a view, so in-place changes to either tensor show up in the other. We can also transpose NumPy arrays, where any number of dimensions can be rearranged, which makes it easy to change the axes based on our requirements.
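Transposing a NumPy array looks similar; a small sketch:

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)
print(arr.T.shape)  # (3, 2)

# for higher dimensions, pass the desired permutation of the axes
cube = np.arange(24).reshape(2, 3, 4)
print(cube.transpose(2, 0, 1).shape)  # (4, 2, 3)
```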

Parameters:

torch.transpose takes three parameters: input, dim0, and dim1.

  • input (tensor) – the input tensor, typically a matrix holding all the values to be transposed
  • dim0 (integer) – the first dimension to be swapped
  • dim1 (integer) – the second dimension to be swapped

Let us see an example based on these parameters.

a = torch.randn(2, 3)
a
tensor([[ 1.0000,  0.7921, -0.4120],
        [ 0.7300, -0.8219,  0.3819]])
torch.transpose(a, 0, 1)
tensor([[ 1.0000,  0.7300],
        [ 0.7921, -0.8219],
        [-0.4120,  0.3819]])

Now, if the requirement is to reuse the weight of an embedding or linear layer in transposed form, the function changes and the code looks different as well.

b = F.linear(a, emb_mod.weight.t())

Note that we should not reassign a module's parameter inside the optimization loop; the code works fine here because the transposed weight is passed as a plain tensor to a second variable. There is also no need to wrap the result in nn.Parameter, which would throw an error in this situation.
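A self-contained version of this pattern might look as follows; `emb_mod` and the sizes are made-up stand-ins, since the surrounding code is not shown:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_mod = nn.Embedding(10, 4)  # hypothetical embedding: 10 tokens, dimension 4
a = torch.randn(5, 10)         # a batch of scores over the 10 tokens

# project back into the embedding space using the transposed weight;
# .t() is a view, so nothing is copied or re-registered as a parameter
b = F.linear(a, emb_mod.weight.t())  # weight.t() has shape (4, 10)
print(b.shape)  # torch.Size([5, 4])
```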

PyTorch Transpose Examples

Having seen simple examples of PyTorch transpose, let us look at some machine learning examples where transpose is used to obtain the required results.

def machine(self, a_x, a_y, b_xy, opt={}):
    # output placeholder: [batch, nodes, output_size]
    p = Variable(torch.zeros(a_y.size(0), a_y.size(1), self.args['output']).type_as(a_y.data))

    for k in range(a_y.size(1)):
        if torch.nonzero(b_xy[:, k, :].data).size():
            for j, e in enumerate(self.args['h_label']):
                # indicator mask for entries carrying label e
                indicator = (e == b_xy[:, k, :]).type_as(self.learn_args[0][j])

                # expand the learned parameter matrix over the batch
                parameter_matrices = self.learn_args[0][j][None, ...].expand(
                    a_y.size(0), self.learn_args[0][j].size(0), self.learn_args[0][j].size(1))

                # batched matrix multiply, with transposes to line the shapes up
                p_k = torch.transpose(
                    torch.bmm(torch.transpose(parameter_matrices, 1, 2),
                              torch.transpose(torch.unsqueeze(a_y[:, k, :], 1), 1, 2)),
                    1, 2)
                p_k = torch.squeeze(p_k)
                p[:, k, :] = indicator.expand_as(p_k) * p_k
    return p

Here the function returns the tensor p after the transposes have been applied.
Let us see another example of a module where transpose is used.

def max_min_pools(x, size, stride=3, padding=None):
    '''
    x: [A, B, C]
    out: [A, B // stride, C]
    '''
    # pooling acts on the last axis, so move B there first
    x = x.transpose(1, 2)
    if padding is None:
        # "same"-style padding split between the two sides
        leftside = (size - 1) // 2
        rightside = (size - 1) - leftside
        padding = (leftside, rightside)
    x = F.pad(x, padding)
    out = F.max_pool1d(x, size, stride)
    out = out.transpose(1, 2)  # restore the [A, B', C] layout
    return out
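The transpose, pool, transpose-back pattern in this function can be sketched standalone; `pool_over_time` and the shapes below are illustrative, assuming `F.max_pool1d` as the pooling operation:

```python
import torch
import torch.nn.functional as F

def pool_over_time(x, size, stride):
    # x: [batch, time, channels]; pooling acts on the last axis,
    # so move time there, pool, then move it back
    x = x.transpose(1, 2)              # -> [batch, channels, time]
    x = F.max_pool1d(x, size, stride)  # -> [batch, channels, time']
    return x.transpose(1, 2)           # -> [batch, time', channels]

x = torch.randn(2, 9, 5)
print(pool_over_time(x, size=3, stride=3).shape)  # torch.Size([2, 3, 5])
```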

def principal(inputs, outputs, first, second, kernel_multiplier=3.0, kernel_numbers=7, fixed_sigma=None):
    # first/second hold integer class indices; scatter them into
    # (22, 22) weight matrices so they match the (batch, batch) kernel blocks
    first = first.cpu()
    first = first.view(22, 3)
    first = torch.zeros(22, 22).scatter_(1, first.data, 3)
    first = Variable(first).cuda()

    second = second.cpu()
    second = second.view(22, 3)
    second = torch.zeros(22, 22).scatter_(1, second.data, 3)
    second = Variable(second).cuda()

    batch = int(inputs.size()[0])
    kernels = gaussian_kernel(inputs, outputs,
                              kernel_multiplier=kernel_multiplier,
                              kernel_numbers=kernel_numbers, fixed_sigma=fixed_sigma)
    loss_value = 0
    AA = kernels[:batch, :batch]
    BB = kernels[batch:, batch:]
    AB = kernels[:batch, batch:]
    loss_value += torch.mean(torch.minimum(first, torch.transpose(first, 0, 1)) * AA +
                             torch.minimum(second, torch.transpose(second, 0, 1)) * BB -
                             2 * torch.minimum(first, torch.transpose(second, 0, 1)) * AB)
    return loss_value

def rainy_example(self, input_values):
        """
        :param input_values: sequences
        :return: output_values
        """
        batch = input_values.size()[0]
        if self.time_dimension == 2:
            input_values = input_values.transpose(1, 2).contiguous()
        input_values = input_values.view(-1, self.input_size)

        output = self.linear(input_values).view(batch, -1, self.output_size)

        if self.time_dimension == 2:
            output = output.contiguous().transpose(1, 2)

        return output

def p_case():
    torch.manual_seed(12)
    new_shape = manifold_shapes[geoopt.manifolds.Polytope]
    examples = torch.randn(*new_shape, dtype=torch.float64).abs()
    evaluate = torch.randn(*new_shape, dtype=torch.float64)
    max_iter = 50
    epsilon = 1e-6
    tolerance = 1e-2
    iteration = 0
    c = 1.0 / (torch.sum(examples, dim=-2, keepdim=True) + epsilon)
    r = 1.0 / (torch.matmul(examples, c.transpose(-1, -2)) + epsilon)
    while iteration < max_iter:
        iteration += 1
        c_inv = torch.matmul(r.transpose(-1, -2), examples)
        if torch.max(torch.abs(c_inv * c - 1)) <= tolerance:
            break
        # rescale columns and rows for the next iteration
        c = 1.0 / (c_inv + epsilon)
        r = 1.0 / (torch.matmul(examples, c.transpose(-1, -2)) + epsilon)
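The loop above comes from a geoopt test fixture, so it does not run on its own; a self-contained sketch of the same Sinkhorn-style balancing, with a made-up 4 x 4 matrix in place of the fixture shapes:

```python
import torch

torch.manual_seed(12)
eps, tol = 1e-6, 1e-2
m = torch.randn(4, 4, dtype=torch.float64).abs() + eps  # strictly positive matrix

# alternately rescale columns and rows until the matrix is approximately
# doubly stochastic; transpose(-1, -2) lines the shapes up in the matmuls
c = 1.0 / (torch.sum(m, dim=-2, keepdim=True) + eps)    # (1, 4) column scalers
r = 1.0 / (torch.matmul(m, c.transpose(-1, -2)) + eps)  # (4, 1) row scalers
for _ in range(50):
    c_inv = torch.matmul(r.transpose(-1, -2), m)        # (1, 4)
    if torch.max(torch.abs(c_inv * c - 1)) <= tol:
        break
    c = 1.0 / (c_inv + eps)
    r = 1.0 / (torch.matmul(m, c.transpose(-1, -2)) + eps)

balanced = r * m * c
print(balanced.sum(dim=0))  # column sums, all close to 1
print(balanced.sum(dim=1))  # row sums, all close to 1
```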

Conclusion

An easy way to change the axes of a matrix is to use transpose in PyTorch, which gives the output in tensor format. NumPy arrays are another option for getting the axes into the required format by changing the dimensions with transpose. Swapping axes with swapaxes, or permuting several axes at once with permute, is another way to change the axes and dimensions.

Recommended Articles

We hope that this EDUCBA information on “PyTorch Transpose” was beneficial to you. You can view EDUCBA’s recommended articles for more information.

  1. Mxnet vs Pytorch
  2. What is PyTorch?
  3. PyTorch vs Keras
  4. PyTorch Versions
