Updated April 5, 2023

## Introduction to PyTorch Pretrained Models

A pretrained model is a model that has already been trained in PyTorch on a large dataset to solve a similar kind of problem, giving developers a starting point to work from. Its weights will not exactly match the new problem's requirements, but reusing them saves the time of building a model from scratch: only a few layers of the architecture need to be retrained instead of the whole model.

### PyTorch Pretrained Models Overview

- Transfer learning comes in two forms: finetuning and feature extraction. In finetuning, as the name suggests, we take a pretrained model for our problem, initialize our network with all of its parameters, and then retrain the entire model so that it adapts completely to our requirements.
- In feature extraction, on the other hand, we start from the pretrained model and change only the layers where rework is needed. Typically only the output layer is replaced, and the frozen pretrained layers serve as a fixed feature extractor. In both cases, correctly initializing the model is essential so that it matches our problem.

### Use PyTorch Pretrained Models

The following example shows the use of a PyTorch pretrained model (ResNet-18) for feature extraction:

**Code:**

```
import copy
import time

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
import matplotlib.pyplot as plot

# Assumes dataloaders, dataset_sizes, class_names, device and imshow
# have been defined by the usual data-loading setup.

def training_model(model, crit, optimizer, scheduler, number_epochs=20):
    since_model = time.time()
    best_model_weights = copy.deepcopy(model.state_dict())
    best_accuracy = 0.0
    for epoch in range(number_epochs):
        print('Epoch {}/{}'.format(epoch, number_epochs - 1))
        print('-' * 5)
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()
            else:
                model.eval()
            run_loss = 0.0
            run_corrects = 0
            for inputs, titles in dataloaders[phase]:
                inputs = inputs.to(device)
                titles = titles.to(device)
                optimizer.zero_grad()
                # Track gradients only during the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    out = model(inputs)
                    _, preds = torch.max(out, 1)
                    loss = crit(out, titles)
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                run_loss += loss.item() * inputs.size(0)
                run_corrects += torch.sum(preds == titles.data)
            if phase == 'train':
                scheduler.step()
            epoch_loss = run_loss / dataset_sizes[phase]
            epoch_accuracy = run_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_accuracy))
            # Keep the weights from the best validation epoch
            if phase == 'val' and epoch_accuracy > best_accuracy:
                best_accuracy = epoch_accuracy
                best_model_weights = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since_model
    print('Training model complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Accuracy: {:4f}'.format(best_accuracy))
    model.load_state_dict(best_model_weights)
    return model

def visualize_model(model, number_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plot.figure()
    with torch.no_grad():
        for k, (inputs, titles) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            titles = titles.to(device)
            out = model(inputs)
            _, predictions = torch.max(out, 1)
            for l in range(inputs.size(0)):
                images_so_far += 1
                axes = plot.subplot(number_images // 2, 2, images_so_far)
                axes.axis('off')
                axes.set_title('predicted: {}'.format(class_names[predictions[l]]))
                imshow(inputs.cpu().data[l])
                if images_so_far == number_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)

# Feature extraction: freeze every pretrained layer and train only
# the new output layer
model_conversion = torchvision.models.resnet18(pretrained=True)
for parameters in model_conversion.parameters():
    parameters.requires_grad = False
number_features = model_conversion.fc.in_features
model_conversion.fc = nn.Linear(number_features, 5)
model_conversion = model_conversion.to(device)
crit = nn.CrossEntropyLoss()
optimizer_conversion = optim.SGD(model_conversion.fc.parameters(),
                                 lr=0.005, momentum=0.09)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conversion, step_size=5, gamma=0.5)
model_conversion = training_model(model_conversion, crit, optimizer_conversion,
                                  exp_lr_scheduler, number_epochs=20)
visualize_model(model_conversion)
plot.ioff()
plot.show()
```

### How can we Load PyTorch Pretrained Models?

The following example shows how to save and load a pretrained model with torchvision:

**Code:**

```
import torch
import torchvision.models as models

# Load VGG-16 with its ImageNet weights and save only the weights
model_new = models.vgg16(pretrained=True)
torch.save(model_new.state_dict(), 'model_wts.pth')

# Recreate the architecture and load the saved weights into it
model_new = models.vgg16()
model_new.load_state_dict(torch.load('model_wts.pth'))
model_new.eval()

# Alternatively, save and load the entire model object
torch.save(model_new, 'model_new.pth')
model_new = torch.load('model_new.pth')
```

The following code shows how to load the various pretrained image-classification models in torchvision:

**Code:**

```
import torchvision.models as models_set
resnet = models_set.resnet18(pretrained=True)
alexnet = models_set.alexnet(pretrained=True)
squeezenet = models_set.squeezenet1_0(pretrained=True)
vgg = models_set.vgg16(pretrained=True)
densenet = models_set.densenet161(pretrained=True)
inception = models_set.inception_v3(pretrained=True)
googlenet = models_set.googlenet(pretrained=True)
shufflenet = models_set.shufflenet_v2_x1_0(pretrained=True)
mobilenet_v2 = models_set.mobilenet_v2(pretrained=True)
mobilenet_large = models_set.mobilenet_v3_large(pretrained=True)
mobilenet_small = models_set.mobilenet_v3_small(pretrained=True)
resnext = models_set.resnext50_32x4d(pretrained=True)
wide_resnet = models_set.wide_resnet50_2(pretrained=True)
mnasnet = models_set.mnasnet1_0(pretrained=True)
efficientnet = models_set.efficientnet_b0(pretrained=True)
regnet = models_set.regnet_y_400mf(pretrained=True)
```

The `pretrained` argument is always a Boolean: when it is `True`, the function returns a model pretrained on ImageNet, and when it is `False`, it returns a randomly initialized model. We can also set the `progress` argument to a Boolean value to show or hide the download progress bar.

### Image Classification using Pretrained Models

Pretrained models are mostly neural networks trained on huge datasets, most commonly ImageNet. They have helped advance many fields and are now widely used in computer vision research. These models are considered state of the art, so we need not construct models from scratch.

The first step is model inference, which itself involves several steps: analyzing the input image and transforming it to match what the available models expect. We then do a forward pass, in which the pretrained weights are used to compute the output vector; this gives us the prediction and completes model inference. As an example, we will load a pretrained model from torchvision. The first step is to install the torchvision module. We then import the models from torchvision so that we can see all the available models and architectures. The output includes the AlexNet model among many others, and we will use AlexNet for our image classification.

We have to create an instance of the network:

`alexnet = models.alexnet(pretrained=True)`

We can print the model to inspect its structure, which contains a number of layers. Next, we transform the input image to the required mean and standard deviation; these values must be close to the mean and standard deviation the pretrained model was trained with. If needed, we can build this transformation with the torchvision transforms module. The next step is to load the input image and transform it to our requirements: we preprocess the image, and the resulting batch is passed through the network.

We then evaluate the model to see how it performs. To compare models, we look at top-1 error, top-5 error, inference time on CPU and GPU, and model size. These parameters help us judge a model's quality relative to other pretrained models: a better model has a lower top-1 error and lower inference time on GPU, and it is good for the model size to be on the low side as well.
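Top-1 and top-5 predictions can be read off the network's output vector. The sketch below uses random logits over 1000 ImageNet classes as a stand-in for a real forward pass:

```python
import torch
import torch.nn.functional as F

# Stand-in for out = model(batch): one image, 1000 ImageNet classes
out = torch.randn(1, 1000)

# Softmax turns raw logits into class probabilities
probs = F.softmax(out, dim=1)

# Top-1 prediction and the five most likely classes (top-5)
top1_prob, top1_idx = torch.max(probs, dim=1)
top5_prob, top5_idx = torch.topk(probs, k=5, dim=1)
print(top1_idx.item(), top5_idx[0].tolist())
```

A prediction counts as a top-5 hit if the true class appears anywhere in `top5_idx`, which is why top-5 error is always at most the top-1 error.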

### Conclusion

We should compare the models for accuracy and check their inference time; this comparison helps us choose which pretrained model to select. We can also use transfer learning to train any such model on a custom dataset in PyTorch.

### Recommended Articles

We hope that this EDUCBA information on “PyTorch Pretrained Models” was beneficial to you. You can view EDUCBA’s recommended articles for more information.