EDUCBA

PyTorch NLLLOSS

Introduction to PyTorch NLLLOSS

PyTorch NLLLOSS is short for Negative Log-Likelihood Loss, a criterion used extensively when training a classification model over C classes. In this article, we give a detailed overview of what PyTorch NLLLOSS is, how to use it, its parameters, related examples, and finally a conclusion.

What is PyTorch NLLLOSS?

PyTorch NLLLOSS is a loss function used extensively in training classification models, and it is especially helpful when the training set is unbalanced. The optional weight argument accepts a one-dimensional tensor that assigns a rescaling weight to each individual class.


The input given in a forward call is expected to contain the log-probability of each class. The input tensor must have size either (minibatch, C) or (minibatch, C, d1, d2, …, dn) with n greater than or equal to 1 when the tensor has n extra dimensions. The second form is the one to use for higher-dimensional inputs, for example when dealing with two-dimensional images and computing the NLL loss per pixel of the image.
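
The two accepted input shapes can be sketched as follows (variable names here are illustrative, not from the article):

```python
import torch
import torch.nn as nn

loss = nn.NLLLoss()

# Case 1: plain classification, input of size (minibatch, C)
log_probs = torch.log_softmax(torch.randn(4, 3), dim=1)   # 4 samples, 3 classes
targets = torch.tensor([0, 2, 1, 2])
print(loss(log_probs, targets).shape)                     # torch.Size([]) - a scalar

# Case 2: per-pixel loss for 2D images, input of size (minibatch, C, H, W)
log_probs_img = torch.log_softmax(torch.randn(4, 3, 8, 8), dim=1)
targets_img = torch.randint(0, 3, (4, 8, 8))              # one class index per pixel
print(loss(log_probs_img, targets_img).shape)             # torch.Size([]) - mean over pixels
```

In both cases the default 'mean' reduction collapses the result to a single scalar; with reduction='none' the second case would instead return a loss of shape (minibatch, H, W).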

When the reduction mode is set to 'none', meaning we calculate the unreduced loss, we can describe it with the following formula –

Loss(x, y) = L = {l1, l2, l3, …, lN}^T, where ln = -w_yn * x_(n, yn) and wc = weight[c] * 1{c ≠ ignore_index}.

In the above, x is the input, y is the target, w is the weight, and N is the size of the batch. When reduction is not set to 'none' (the default is 'mean'), the loss is calculated by the formula below –

Loss(x, y) = (∑ from n=1 to N of ln) / (∑ from n=1 to N of w_yn) when reduction is 'mean', and
Loss(x, y) = ∑ from n=1 to N of ln when reduction is 'sum'.
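
The 'none' and 'mean' reductions above can be checked by hand. This is a minimal sketch with weights chosen arbitrarily; the variable names mirror the formula:

```python
import torch
import torch.nn as nn

x = torch.log_softmax(torch.randn(3, 5), dim=1)  # log-probabilities, N=3, C=5
y = torch.tensor([1, 0, 4])                      # target class indices
w = torch.tensor([1.0, 2.0, 1.0, 1.0, 0.5])      # per-class weights

# Unreduced loss: ln = -w_yn * x_(n, yn)
l = -w[y] * x[torch.arange(3), y]
assert torch.allclose(l, nn.NLLLoss(weight=w, reduction='none')(x, y))

# 'mean' divides by the sum of the selected weights, not by N
mean_loss = l.sum() / w[y].sum()
assert torch.allclose(mean_loss, nn.NLLLoss(weight=w, reduction='mean')(x, y))
```

Note the detail the formula encodes: with class weights, the 'mean' reduction is a weighted mean over the batch, normalized by the sum of the per-sample weights rather than by the batch size.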


How to use PyTorch NLLLOSS?

When the loss is used in a neural network, the required log probabilities are obtained by simply adding a LogSoftmax layer as the last layer of the network. If you want to avoid adding an extra layer, you are free to use CrossEntropyLoss instead, which combines LogSoftmax and NLLLoss in a single step.

The target expected by this loss is a class index in the range [0, N-1], where N is the number of classes. If the ignore_index value is specified, the loss also accepts a class index that lies outside this range; targets with that index are then ignored.
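
The equivalence of the two approaches can be sketched as follows (the tensors here are made-up sample data):

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # raw scores from the last linear layer
targets = torch.randint(0, 10, (4,))  # class indices in [0, N-1], here N = 10

# Option 1: LogSoftmax as the last layer, then NLLLoss
log_softmax = nn.LogSoftmax(dim=1)
nll = nn.NLLLoss()
loss1 = nll(log_softmax(logits), targets)

# Option 2: CrossEntropyLoss applied directly to raw logits (combines both steps)
ce = nn.CrossEntropyLoss()
loss2 = ce(logits, targets)

assert torch.allclose(loss1, loss2)   # the two formulations agree
```

CrossEntropyLoss is often preferred in practice because working in log space in one fused step is numerically more stable than exponentiating and taking logs separately.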

PyTorch NLLLOSS Parameters

The various parameters accepted by PyTorch NLLLOSS are described in detail here –

  • Size average – This is an optional Boolean value and is deprecated. By default, the loss is averaged over each loss element inside the batch; for some losses, each sample may contain multiple associated elements. When size_average is set to false, the losses are instead summed up inside each minibatch. When not specified, the value defaults to true, and it is ignored when the reduce parameter is set to false.
  • Weight – This is an optional parameter whose value should be in tensor format. It manually assigns a rescaling weight to each class. Whenever specified, the value should be a tensor of size C, where C is the number of classes. By default, every class is treated as having a weight of one.
  • Ignore index – This is also an optional parameter, having an integer value. The specified target value is completely ignored and does not contribute to the gradient of the input. When size_average is set to true, the average of the loss is taken over the non-ignored target values only.
  • Reduction – This is an optional string parameter used to specify the reduction applied to the output: none, mean, or sum. The value mean means that the weighted mean of the output is taken, none means that no reduction is applied, and sum means that the values of the output are summed up. The parameters reduce and size_average are deprecated, and if we specify either of those two parameters, it overrides reduction. When not specified, the value of this parameter is treated as mean.
  • Reduce – This is an optional Boolean value which is deprecated. The default value, when not specified, is true, which averages or sums the loss over the observations of each batch depending on the value of size_average. If this parameter is set to false, the loss is returned per element of the batch, and the size_average parameter is completely ignored.
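
A small sketch of how the reduction and ignore_index parameters behave in practice (the input tensors are arbitrary sample data):

```python
import torch
import torch.nn as nn

x = torch.log_softmax(torch.randn(5, 3), dim=1)  # 5 samples, 3 classes
y = torch.tensor([0, 1, 2, 1, 2])                # target class indices

# reduction='none' keeps one loss value per batch element
per_element = nn.NLLLoss(reduction='none')(x, y)
print(per_element.shape)  # torch.Size([5])

# reduction='sum' equals the sum of the per-element losses
assert torch.allclose(per_element.sum(), nn.NLLLoss(reduction='sum')(x, y))

# ignore_index drops the matching targets from the average entirely
masked_mean = per_element[y != 2].mean()
assert torch.allclose(masked_mean, nn.NLLLoss(ignore_index=2)(x, y))
```

The last assertion shows the point made above: with ignore_index set, the mean is taken over the non-ignored targets only, not over the full batch.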

Examples

Let us consider one example as shown in the below code –

import torch
import torch.nn as neuralNetwork

sampleEducbaModel = neuralNetwork.LogSoftmax(dim=1)
sampleObtainedLoss = neuralNetwork.NLLLoss()
# Specification of the size of sampleInput is 3 * 5
sampleInput = torch.randn(3, 5, requires_grad=True)
# each individual element should be a class index in the range 0 to C-1 (both inclusive)
sampleTarget = torch.tensor([1, 0, 4])
achievedOutput = sampleObtainedLoss(sampleEducbaModel(sampleInput), sampleTarget)
achievedOutput.backward()
print('Retrieved Result:', achievedOutput)

The execution of the above program prints the computed loss as a scalar tensor. Since sampleInput is generated randomly, the exact value varies from run to run.

Conclusion

PyTorch NLLLOSS is used for calculating the negative log-likelihood loss. It expects log probabilities as its input, so it should be used only with models that apply a log-softmax activation at the output layer (or with CrossEntropyLoss, which applies it internally).

Recommended Articles

This is a guide to PyTorch NLLLOSS. Here we discuss the Introduction, What is PyTorch NLLLOSS?, How to use PyTorch NLLLOSS?, Example, and code. You may also have a look at the following articles to learn more –

  1. What is PyTorch?
  2. PyTorch Conv2d
  3. Dataset Pytorch
  4. PyTorch Versions

© 2022 - EDUCBA. ALL RIGHTS RESERVED. THE CERTIFICATION NAMES ARE THE TRADEMARKS OF THEIR RESPECTIVE OWNERS.
