
PyTorch Pretrained Models

Introduction to PyTorch Pretrained Models

A model that has already been trained in PyTorch and can be reused to solve a similar kind of problem is called a pretrained model, and it gives developers a starting point to work from. The pretrained weights will not exactly match the requirements of the new task, but they save the time of building and training a model from scratch because there is already something to work upon. Only a few layers of the architecture need to be retrained instead of the whole model, which saves time.

Overview of PyTorch Pretrained Models

  • Transfer learning comes in two flavors: finetuning and feature extraction. In finetuning, as the name suggests, we take a pretrained model for our problem, initialize all of our parameters from it, and then update the entire network so that it fits our requirements. In other words, we finetune the whole pretrained model to our needs.
  • In feature extraction, on the other hand, we start from the pretrained model and change only the layers where rework is needed. Typically only the output layer is replaced, and the frozen pretrained model acts as a fixed feature extractor. In both cases the model must be initialized correctly so that it matches our problem; a minimal sketch of the difference is shown after this list.
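
The snippet below is a minimal sketch of how initialization differs between the two approaches, assuming torchvision is installed and a two-class problem; the layer name fc is specific to ResNet and would differ for other architectures.

Code:

import torch.nn as nn
import torchvision.models as models

# Finetuning: every parameter keeps requires_grad=True and is updated during training.
finetune_model = models.resnet18(pretrained=True)
finetune_model.fc = nn.Linear(finetune_model.fc.in_features, 2)

# Feature extraction: freeze the backbone, train only the new output layer.
feature_model = models.resnet18(pretrained=True)
for param in feature_model.parameters():
    param.requires_grad = False
feature_model.fc = nn.Linear(feature_model.fc.in_features, 2)  # new layer stays trainable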

Use PyTorch Pretrained Models

The example given below shows the use of a PyTorch pretrained model for transfer learning:


Code:

import copy
import time

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torch.optim import lr_scheduler
import matplotlib.pyplot as plot

# dataloaders, dataset_sizes, class_names, device and imshow are assumed to be
# defined by the usual dataset-preparation step (train/val DataLoaders, class list,
# a torch.device and an image-display helper).

def training_model(model, crit, optimizer, scheduler, number_epochs=20):
    since_model = time.time()
    best_model_weights = copy.deepcopy(model.state_dict())
    best_accuracy = 0.0
    for epoch in range(number_epochs):
        print('Epoch {}/{}'.format(epoch, number_epochs - 1))
        print('-' * 5)
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()
            else:
                model.eval()
            run_loss = 0.0
            run_corrects = 0
            for inputs, titles in dataloaders[phase]:
                inputs = inputs.to(device)
                titles = titles.to(device)
                optimizer.zero_grad()
                # Track gradients only during the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    out = model(inputs)
                    _, preds = torch.max(out, 1)
                    loss = crit(out, titles)
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                run_loss += loss.item() * inputs.size(0)
                run_corrects += torch.sum(preds == titles.data)
            if phase == 'train':
                scheduler.step()
            epoch_loss = run_loss / dataset_sizes[phase]
            epoch_accuracy = run_corrects.double() / dataset_sizes[phase]
            print('{} Loss function: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_accuracy))
            # Keep a copy of the weights from the best validation epoch
            if phase == 'val' and epoch_accuracy > best_accuracy:
                best_accuracy = epoch_accuracy
                best_model_weights = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since_model
    print('Training model complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Accuracy: {:.4f}'.format(best_accuracy))
    model.load_state_dict(best_model_weights)
    return model

def new_model(model, number_images=5):
    training = model.training
    model.eval()
    images_old = 0
    fig = plot.figure()
    with torch.no_grad():
        for k, (inputs, titles) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            titles = titles.to(device)
            out = model(inputs)
            _, predictions = torch.max(out, 1)
            for l in range(inputs.size()[0]):
                images_old += 1
                axes = plot.subplot(number_images // 2, 2, images_old)
                axes.axis('off')
                axes.set_title('predicted: {}'.format(class_names[predictions[l]]))
                imshow(inputs.cpu().data[l])
                if images_old == number_images:
                    model.train(mode=training)
                    return
    model.train(mode=training)

# Feature extraction: freeze every layer of the pretrained ResNet-18 and
# train only the newly added fully connected layer.
model_conversion = torchvision.models.resnet18(pretrained=True)
for parameters in model_conversion.parameters():
    parameters.requires_grad = False
number_features = model_conversion.fc.in_features
model_conversion.fc = nn.Linear(number_features, 5)
model_conversion = model_conversion.to(device)
crit = nn.CrossEntropyLoss()
optimizer_conversion = optim.SGD(model_conversion.fc.parameters(), lr=0.005, momentum=0.09)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conversion, step_size=5, gamma=0.5)
model_conversion = training_model(model_conversion, crit, optimizer_conversion,
                                  exp_lr_scheduler, number_epochs=20)
new_model(model_conversion)
plot.ioff()
plot.show()

How Can We Load PyTorch Pretrained Models?

The following example shows how to save and load a pretrained model based on torchvision:

Code:

import torch
import torchvision.models as models

# Load VGG-16 with ImageNet weights and save only the state_dict (recommended)
model_new = models.vgg16(pretrained=True)
torch.save(model_new.state_dict(), 'model_wts.pth')

# Recreate the architecture and load the saved weights back
model_new = models.vgg16()
model_new.load_state_dict(torch.load('model_wts.pth'))
model_new.eval()

# Alternatively, save and load the entire model object
torch.save(model_new, 'model_new.pth')
model_new = torch.load('model_new.pth')

The following code shows how to load pretrained models for the various image classification architectures available in torchvision:

Code:

import torchvision.models as models_set
resnet = models_set.resnet18(pretrained=True)
alexnet = models_set.alexnet(pretrained=True)
squeezenet = models_set.squeezenet1_0(pretrained=True)
vgg = models_set.vgg16(pretrained=True)
densenet = models_set.densenet161(pretrained=True)
inception = models_set.inception_v3(pretrained=True)
googlenet = models_set.googlenet(pretrained=True)
shufflenet = models_set.shufflenet_v2_x1_0(pretrained=True)
mobilenet_v2 = models_set.mobilenet_v2(pretrained=True)
mobilenet_large = models_set.mobilenet_v3_large(pretrained=True)
mobilenet_small = models_set.mobilenet_v3_small(pretrained=True)
resnext = models_set.resnext50_32x4d(pretrained=True)
wide_resnet = models_set.wide_resnet50_2(pretrained=True)
mnasnet = models_set.mnasnet1_0(pretrained=True)
efficientnet = models_set.efficientnet_b0(pretrained=True)   # b0 stands in for any EfficientNet variant
regnet_y = models_set.regnet_y_400mf(pretrained=True)        # one member of the RegNet-Y family

The pretrained argument is always a boolean; when it is True, a model initialized with weights pretrained on ImageNet is returned, otherwise the model is randomly initialized. We can also set the progress argument to a boolean value to show or hide a download progress bar while the weights are fetched.
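
A minimal illustration of the two arguments (using ResNet-18 as an example):

Code:

import torchvision.models as models

# Download ImageNet weights and display the download progress bar
resnet18 = models.resnet18(pretrained=True, progress=True)

# Randomly initialized network, no weights are downloaded
resnet18_random = models.resnet18(pretrained=False)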

Image Classification using Pretrained Models

Pretrained models are mostly neural networks trained on huge datasets, ImageNet being the most common one. They have helped advance various fields and are now widely used in computer vision research. Because these models are close to the state of the art, we do not need to construct models from scratch.

The first step is model inference, which itself has several steps: reading the input image and transforming it to match what the chosen model expects, then doing a forward pass in which the pretrained weights produce an output vector of class scores. Once we have this output vector, model inference is complete. As an example, we will load a pretrained model from torchvision. The first step is to install the torchvision module. Then we import the models subpackage so that we can list all the available models and architectures, as shown below. AlexNet appears in the output along with many others, and we will use AlexNet for our image classification.
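
A minimal way to list the available architectures, assuming torchvision is already installed:

Code:

from torchvision import models

# Prints the constructors available in torchvision.models, e.g. alexnet, resnet18, vgg16, ...
print(dir(models))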

We have to create an instance for the network.

alexnet = models.alexnet(pretrained=True)

We can print the model to inspect its structure; the output lists the layers of the network. Next, we have to transform the input image with the required mean and standard deviation, and these values must match the mean and standard deviation the pretrained model was trained with. The transforms module of torchvision can be used to build this preprocessing pipeline. The next step is to load the input image, transform it according to our requirements, turn it into a batch, and pass that batch through the network.
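
A sketch of the usual ImageNet-style preprocessing, assuming an input image file named dog.jpg (the file name is only an example):

Code:

from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # Standard ImageNet mean and standard deviation used by torchvision models
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open('dog.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)  # add a batch dimension: shape [1, 3, 224, 224]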

We have to put the model in evaluation mode before running inference. We can then compare different pretrained models using top-1 error, top-5 error, inference time on CPU and GPU, and model size. These metrics help us decide how much confidence to place in each pretrained model: a better model has lower top-1 error and lower inference time on GPU, and a small model size is also desirable.
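
A minimal inference sketch that follows on from the preprocessing above (alexnet and batch come from the earlier snippets, and the class indices would normally be mapped to names using an ImageNet label file, which is assumed here):

Code:

import torch

alexnet.eval()  # switch off dropout and batch-norm updates
with torch.no_grad():
    out = alexnet(batch)  # raw class scores, shape [1, 1000]

probabilities = torch.nn.functional.softmax(out[0], dim=0)
top5_prob, top5_idx = torch.topk(probabilities, 5)
# top5_idx holds the indices of the five most likely ImageNet classes
print(top5_idx, top5_prob)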

Conclusion

We should compare the models for accuracy and check their inference time; this comparison helps us choose which pretrained model to select. We can also use transfer learning to adapt any pretrained model to a custom dataset in PyTorch.

Recommended Articles

This is a guide to PyTorch Pretrained Models. Here we discuss the introduction, an overview, how to load PyTorch pretrained models, and image classification with them. You may also have a look at the following articles to learn more –

  1. PyTorch Versions
  2. torch.nn Module
  3. Tensorflow Basics
  4. Introduction to Tensorflow
