EDUCBA

PyTorch Loss

Introduction to PyTorch Loss

PyTorch provides a number of built-in functions, and loss functions are among them. In deep learning, a model produces predicted outcomes that may differ from the expected outcomes, so we need a way to measure the gap between the two; that is the job of the loss function. The PyTorch loss function measures the gap between the model's predictions and the provided target values. In other words, the loss function tells us how far the model is from the expected result, acting as a penalty on the algorithm.

What is PyTorch loss?

Loss functions are used to measure the error between the predicted output and the given target value. A loss function tells us how far the model is from producing the expected result. The word 'loss' denotes the penalty the model incurs for failing to yield the desired outcome.

A loss function (or cost function) maps an event, or the values of one or more variables, onto a real number representing some "cost" associated with that event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (in specific domains variously called a reward function, a profit function, a utility function, a fitness function, and so on), in which case it is to be maximized.

Now let’s see the classification of loss function as follows.


1. Regression loss function: It is used when we need to predict a continuous value, for example, a person's age.

2. Classification loss function: It is used when we need to predict a discrete class label, for example, whether an email is spam or not.

3. Ranking loss function: It is used when we need to predict the relative distances between inputs, for example, ranking products according to their relevance.

Two Parameters of PyTorch Loss

Now let's see the two parameters of the loss function as follows.

1. Predicted Result
2. Final Value (Target)

Explanation

These functions measure your model's performance by comparing its predicted output with the expected output. If the deviation between the predicted result and the final (target) value is very large, the loss will be very high.

If the deviation is small, or the values are nearly identical, the loss will be very low. Accordingly, you need to choose a loss function that penalizes the model appropriately for the dataset it is trained on.
Which loss function to use depends on the problem your algorithm is trying to solve.
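To make this concrete, here is a small sketch (the tensor values are arbitrary, chosen purely for illustration) showing how a prediction close to the target yields a small loss while a distant prediction yields a large one, using nn.MSELoss:

```python
import torch
import torch.nn as nn

# Illustrative values: a target, a nearby prediction, and a distant one
target = torch.tensor([1.0, 2.0, 3.0])
close_prediction = torch.tensor([1.1, 2.1, 2.9])
far_prediction = torch.tensor([5.0, -2.0, 10.0])

loss_fn = nn.MSELoss()

# A small deviation yields a small loss; a large deviation yields a large loss
close_loss = loss_fn(close_prediction, target)
far_loss = loss_fn(far_prediction, target)
print('close:', close_loss.item())
print('far:  ', far_loss.item())
```

The second loss is orders of magnitude larger than the first, which is exactly the penalty behavior described above.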

How to add PyTorch loss?

Now let's see how we can add a loss function, with an example. First, we need to import the required packages as follows.

import torch
import torch.nn as nn
After that, we need to define the loss function, replacing the placeholder with a specific loss class name such as L1Loss or MSELoss.
loss = nn.<LossFunctionName>()

Explanation

We can use the above syntax to add a loss function to the model to measure the gap between the predicted outcomes and the target outcomes.

import torch
import torch.nn as nn
# Random input (with gradients enabled) and random target values
data_input = torch.randn(4, 3, requires_grad=True)
t_value = torch.randn(4, 3)
# L1 loss: mean absolute difference between input and target
loss_f = nn.L1Loss()
result = loss_f(data_input, t_value)
# Backpropagate to compute gradients with respect to data_input
result.backward()
print('input values: ', data_input)
print('target values: ', t_value)
print('Final Outcomes: ', result)

Explanation

In the above example, we implement the L1 loss function in PyTorch as shown. Running the program prints the randomly generated input values, the target values, and the computed loss.

Types of loss functions in PyTorch

Now let's see the different types of loss functions in PyTorch, with examples, for a better understanding.

1. Mean Absolute Error (L1 Loss Function): Also called the L1 loss, it computes the average of the absolute differences between the predicted outcomes and the actual outcomes. It measures the size of the error without regard to its direction. The L1 expression is shown below.

Loss (A, B) = |A – B|

Where,

A is used to represent the actual outcome and B is used to represent the predicted outcome.
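As a quick sketch (the tensor values here are arbitrary, for illustration only), nn.L1Loss computes exactly this average of absolute differences, which we can verify by hand:

```python
import torch
import torch.nn as nn

A = torch.tensor([1.0, -2.0, 3.0])  # actual outcomes (target)
B = torch.tensor([2.5, 0.0, 3.0])   # predicted outcomes

# nn.L1Loss averages |B - A| over all elements by default
result = nn.L1Loss()(B, A)

# The same quantity computed manually
manual = (B - A).abs().mean()
print(result.item(), manual.item())  # both equal (1.5 + 2.0 + 0.0) / 3
```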

2. Mean Squared Error Loss Function: Also called the L2 loss, it computes the average of the squared differences between the predicted outcomes and the actual outcomes. The result of this function is always positive. The expression of L2 is shown below.

Loss (A, B) = (A – B)²

Where,

A is used to represent the actual outcome and B is used to represent the predicted outcome.

Example

import torch
import torch.nn as nn
# Random input (with gradients enabled) and random target values
data_input = torch.randn(4, 3, requires_grad=True)
t_value = torch.randn(4, 3)
# MSE loss: mean of the squared differences between input and target
loss_f = nn.MSELoss()
result = loss_f(data_input, t_value)
# Backpropagate to compute gradients with respect to data_input
result.backward()
print('input values: ', data_input)
print('target values: ', t_value)
print('Final Outcomes: ', result)

Explanation

Running the above program prints the input values, the target values, and the computed mean squared error.

3. Negative Log-Likelihood Loss Function: It is typically applied to the output of a LogSoftmax layer, which computes the logarithm of the normalized exponential (Softmax) function. The NLL loss then penalizes the model according to the log-probability it assigns to the correct class.

Example

import torch
import torch.nn as nn
# Raw scores for 3 samples over 5 classes, and the correct class indices
data_input = torch.randn(3, 5, requires_grad=True)
t_value = torch.tensor([2, 1, 3])
# NLLLoss expects log-probabilities, so apply LogSoftmax first
loss_f = nn.LogSoftmax(dim=1)
nll_f = nn.NLLLoss()
result = nll_f(loss_f(data_input), t_value)
result.backward()
print('input values: ', data_input)
print('target values: ', t_value)
print('Final Outcomes: ', result)

Explanation

Running the above program prints the input values, the target values, and the computed negative log-likelihood loss.

4. Cross-Entropy Loss Function: It measures the difference between two probability distributions over a random variable. The expression of this function is as follows.

Loss (A, B) = – ∑ A log B

Where,

A is used to represent the actual (target) distribution and B is used to represent the predicted distribution.
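In PyTorch, nn.CrossEntropyLoss expects raw scores (logits) and class indices, and is equivalent to applying LogSoftmax followed by NLLLoss. A brief sketch with arbitrary random inputs:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(3, 5)         # raw scores for 3 samples, 5 classes
targets = torch.tensor([2, 1, 3])  # correct class index per sample

# CrossEntropyLoss combines LogSoftmax and NLLLoss in a single step
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(ce.item(), nll.item())  # the two values are identical
```

This is why the earlier NLL example pairs LogSoftmax with NLLLoss: together they reproduce the cross-entropy computation.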

5. Hinge Embedding Loss Function: By using this function, we can calculate the loss between an input tensor and a label tensor whose entries are 1 or -1. The hinge loss assigns a larger value as the gap between the predicted outcome and the actual outcome grows.
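A minimal sketch of nn.HingeEmbeddingLoss (the distances and labels below are made-up illustrative values; the labels must be 1 or -1):

```python
import torch
import torch.nn as nn

# Made-up distances between pairs: 1 marks a similar pair, -1 a dissimilar pair
distances = torch.tensor([0.2, 1.5, 0.3, 2.0])
labels = torch.tensor([1.0, -1.0, 1.0, -1.0])

loss_fn = nn.HingeEmbeddingLoss(margin=1.0)
result = loss_fn(distances, labels)
# Per element: x when y = 1, max(0, margin - x) when y = -1,
# i.e. (0.2 + 0.0 + 0.3 + 0.0) / 4 = 0.125 here
print(result.item())
```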

6. Margin Ranking Loss Function: It is used to compute a criterion that predicts the relative ranking of two inputs.
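A short sketch of nn.MarginRankingLoss with illustrative scores; a label of 1 means the first input should rank higher than the second:

```python
import torch
import torch.nn as nn

input1 = torch.tensor([0.8, 0.2])  # scores for the first item of each pair
input2 = torch.tensor([0.5, 0.9])  # scores for the second item of each pair
y = torch.tensor([1.0, 1.0])       # first item should rank higher in both pairs

loss_fn = nn.MarginRankingLoss(margin=0.0)
result = loss_fn(input1, input2, y)
# Per pair: max(0, -y * (x1 - x2) + margin); only the mis-ranked pair
# (0.2 vs 0.9) contributes, giving (0.0 + 0.7) / 2 = 0.35
print(result.item())
```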

7. Triplet Margin Loss Function: It is used to calculate the triplet loss of the model, measured over an anchor, a positive example, and a negative example.
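A brief sketch of nn.TripletMarginLoss with randomly generated embeddings (the sizes and perturbation are arbitrary): the positive is built close to the anchor, the negative is unrelated, and the loss pushes the anchor-positive distance below the anchor-negative distance by at least the margin.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
anchor = torch.randn(8, 16, requires_grad=True)         # reference embeddings
positive = anchor.detach() + 0.05 * torch.randn(8, 16)  # close to the anchor
negative = torch.randn(8, 16)                           # unrelated embeddings

loss_fn = nn.TripletMarginLoss(margin=1.0)
result = loss_fn(anchor, positive, negative)
result.backward()  # gradients flow back into the anchor
print(result.item())  # non-negative by construction
```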

Conclusion

We hope this article helped you learn more about PyTorch loss. In it, we covered the essential idea of PyTorch loss, saw its representation and examples, and learned how and when to use the various PyTorch loss functions.

Recommended Articles

This is a guide to PyTorch Loss. Here we discuss what PyTorch loss is, how to add a loss function, and the different types of loss functions. You may also have a look at the following articles to learn more –

  1. Mxnet vs Pytorch
  2. What is PyTorch?
  3. PyTorch vs Keras
  4. PyTorch Versions