Introduction to PyTorch MNIST
The following article provides an outline for PyTorch MNIST. The Modified National Institute of Standards and Technology database, or MNIST, is a collection of handwritten digit images widely used to benchmark image processing systems. The database is helpful in many machine learning and deep learning cases, as the image data can be fetched directly without any preprocessing difficulties. It contains the digits 0 to 9 and serves as a standard baseline for testing image processing systems.
What is PyTorch MNIST?
- The database is a reliable source for all data scientists, as it is widely regarded as the introductory dataset of machine learning.
- If any new architecture or framework is built, data scientists can train the algorithm on the MNIST to check whether the framework is working fine.
- We have a training and test dataset in MNIST with 60000 and 10000 examples, respectively, in each dataset.
- All images in the dataset are the same size, with the digits size-normalized and centered.
PyTorch MNIST Model
We download the MNIST dataset and use it in a PyTorch model.
from torchvision import datasets
from torchvision.transforms import ToTensor

train_dataset = datasets.MNIST(
    root='datasets',
    train=True,
    transform=ToTensor(),
    download=True,
)
test_dataset = datasets.MNIST(
    root='datasets',
    train=False,
    transform=ToTensor(),
    download=True,
)
We can plot a grid of random training samples with their labels:

import torch
import matplotlib.pyplot as plot

figure = plot.figure(figsize=(15, 10))
columns, rows = 7, 7
for p in range(1, columns * rows + 1):
    sample_idx = torch.randint(len(train_dataset), size=(1,)).item()
    img, label = train_dataset[sample_idx]
    figure.add_subplot(rows, columns, p)
    plot.title('%i' % label)
    plot.axis('off')
    plot.imshow(img.squeeze(), cmap='gray')
plot.show()
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, 5, 1, 2),  # in_channels, out_channels, kernel, stride, padding
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 5, 1, 2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # 10 classes for digits 0-9

    def forward(self, a):
        a = self.conv1(a)
        a = self.conv2(a)
        a = a.view(a.size(0), -1)  # flatten the feature maps
        result = self.out(a)
        return result, a
import torch
import torch.nn as nn
import torch.optim as optim

# torch.autograd.Variable is deprecated; tensors are used directly
cnn = CNN()
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(cnn.parameters(), lr=0.01)
loaders = {'train': torch.utils.data.DataLoader(train_dataset, batch_size=100, shuffle=True)}
number_epochs = 8

def train(number_epochs, cnn, loaders):
    total_step = len(loaders['train'])
    for epoch in range(number_epochs):
        for p, (images, labels) in enumerate(loaders['train']):
            output = cnn(images)[0]  # the model returns (logits, features)
            loss = loss_func(output, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if (p + 1) % 100 == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                      .format(epoch + 1, number_epochs, p + 1, total_step, loss.item()))

train(number_epochs, cnn, loaders)
- We must install the CUDA-enabled PyTorch binaries when we have a system with an NVIDIA GPU; the prebuilt binaries ship with CUDA support, which makes getting started with PyTorch straightforward. A simple check such as `torch.cuda.is_available()` will confirm whether PyTorch can use the GPU. If we do not have an NVIDIA GPU, we should install the CPU-only build instead, for which documentation is provided.
- Now, to start with MNIST, we must install a Determined cluster. Multiple GPU servers can be used for on-premise deployments, and the cluster can be started with a single command. We then port the PyTorch model to the MNIST dataset to check that the architecture works well. A built-in training loop is present inside the module, which feeds batches of data into the forward pass and performs all the calculations.
- The next steps to perform are as follows: initialize the code, build the model, define the optimizer, and define the forward pass. The final step is to load the training dataset and validate the model.
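Outside of a cluster setup, the steps above can be sketched in plain PyTorch; the layer sizes, learning rate, and the random tensors standing in for an MNIST batch are illustrative assumptions, not a definitive implementation.

```python
import torch
import torch.nn as nn

# Step 1: initialize the code (seed for reproducibility)
torch.manual_seed(0)

# Step 2: build the model (a small illustrative classifier for 28x28 digits)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Step 3: define the optimizer and loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_func = nn.CrossEntropyLoss()

# Step 4: define the forward pass on one batch
# (random tensors stand in for a batch of MNIST images here)
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))
output = model(images)

# Step 5: one training step on the loaded batch
loss = loss_func(output, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(output.shape)  # torch.Size([32, 10])
```

With a real dataset, the random tensors would simply be replaced by batches drawn from a DataLoader.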
Using PyTorch on MNIST Dataset
- It is easy to use PyTorch with the MNIST dataset for all kinds of neural networks. The DataLoader module is needed, with which we can implement a neural network with input and hidden layers. Activation functions, a loss function, and an optimizer need to be set up so that we can implement the training loop. We can then train the model and test its accuracy.
- All the parameters for the model must be defined first, after importing the needed libraries. The next step is to load the MNIST dataset and DataLoader, where we can specify the batch size. Then, since we have hidden layers in the network, we use the ReLU activation function with the PyTorch neural network module. Finally, we define a feed-forward method that applies the layers in sequence.
- Softmax is not needed here, as PyTorch's cross-entropy loss applies it internally. After setting the loss and optimizer functions, a training loop must be created. The images are reshaped so that the input size matches the network and the loss is calculated easily. We can do the final testing now; gradients need not be computed here.
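The bullet points above can be sketched as a minimal feed-forward loop; the hidden size, learning rate, and the synthetic batches standing in for MNIST data are assumptions made for a self-contained example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Feed-forward network: one ReLU hidden layer, no softmax at the end,
# because nn.CrossEntropyLoss applies log-softmax internally.
input_size, hidden_size, num_classes = 28 * 28, 100, 10
model = nn.Sequential(
    nn.Linear(input_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, num_classes),
)
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Synthetic batches stand in for the MNIST DataLoader here.
batches = [(torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,)))
           for _ in range(5)]

for images, labels in batches:
    images = images.view(-1, input_size)  # reshape 1x28x28 images to flat vectors
    loss = loss_func(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Final testing: gradients need not be computed here.
with torch.no_grad():
    images, labels = batches[0]
    predictions = model(images.view(-1, input_size)).argmax(dim=1)
    accuracy = (predictions == labels).float().mean().item()
```

The same loop applies unchanged to real MNIST batches; only the data source differs.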
Example of PyTorch MNIST
Given below is the example mentioned:
The first step is to set up the environment by importing torch and torchvision.
import torch
import torchvision
import matplotlib.pyplot as plot

num_epochs = 5
train_size_batch = 32
test_size_batch = 5000
lr_rate = 0.05
momentum = 0.75
log_intervals = 5
seeds = 2

torch.manual_seed(seeds)              # make runs reproducible
torch.backends.cudnn.enabled = False  # disable nondeterministic cuDNN kernels
train_load = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('/filesaved/', train=True, download=True,
                               transform=torchvision.transforms.ToTensor()),
    batch_size=train_size_batch, shuffle=True)

test_load = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('/filesaved/', train=False, download=True,
                               transform=torchvision.transforms.ToTensor()),
    batch_size=test_size_batch, shuffle=True)
example_dataset = enumerate(test_load)
batch_idx, (example_data, example_results) = next(example_dataset)

figure = plot.figure()
for x in range(5):
    plot.subplot(1, 5, x + 1)
    plot.imshow(example_data[x][0], cmap='gray', interpolation='none')
    plot.title('Label: %i' % example_results[x])
plot.show()
import torch.nn as netnn
import torch.nn.functional as Fun
import torch.optim as optimnet

class Network(netnn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = netnn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = netnn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = netnn.Dropout2d()
        self.fc1 = netnn.Linear(320, 50)
        self.fc2 = netnn.Linear(50, 10)

    def forward(self, a):
        a = Fun.relu(Fun.max_pool2d(self.conv1(a), 2))
        a = Fun.relu(Fun.max_pool2d(self.conv2_drop(self.conv2(a)), 2))
        a = a.view(-1, 320)  # 20 channels * 4 * 4 feature maps
        a = Fun.relu(self.fc1(a))
        a = Fun.dropout(a, training=self.training)
        return self.fc2(a)  # raw logits; cross-entropy applies softmax internally
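It may not be obvious why the flattening step uses 320 features. Assuming the standard 5x5 kernels and 2x2 max-pooling of this network, the spatial sizes can be traced with a dummy input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fun

# Trace the spatial sizes of a 28x28 input:
# conv (kernel 5): 28 -> 24; max-pool 2: 24 -> 12
# conv (kernel 5): 12 -> 8;  max-pool 2: 8 -> 4
# 20 output channels * 4 * 4 = 320 features
conv1 = nn.Conv2d(1, 10, kernel_size=5)
conv2 = nn.Conv2d(10, 20, kernel_size=5)

a = torch.randn(1, 1, 28, 28)  # one dummy MNIST-sized image
a = Fun.max_pool2d(conv1(a), 2)
a = Fun.max_pool2d(conv2(a), 2)
print(a.shape)    # torch.Size([1, 20, 4, 4])
flat = a.view(-1, 320)
print(flat.shape)  # torch.Size([1, 320])
```

Printing intermediate shapes like this is a quick way to sanity-check the input size of the first fully connected layer.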
We can use MNIST in supervised learning, where classifiers can be trained. Such a labelled dataset helps data scientists in many aspects to explore different modes of training and gives a broad description of the data being used. A labelled dataset is preferred in these cases.
This is a guide to PyTorch MNIST. Here we discuss the introduction, the PyTorch MNIST model, prerequisites, and an example, respectively.