Updated April 4, 2023
Introduction to PyTorch Lightning
PyTorch Lightning is a free, open-source Python library that provides a high-level interface to the deep learning framework PyTorch. It organizes code so that different experiments can be created and restructured with various inputs. Furthermore, scalable deep learning models can be built easily with this library, and those models stay decoupled from the underlying hardware. It was initially released in May 2019 and can be used on multiple platforms. William Falcon is the original author, but many developers have contributed, so the credit cannot be given to one person alone.
What is PyTorch Lightning?
PyTorch Lightning is an AI research tool valued for its high performance: it abstracts away the deep learning boilerplate so that we keep control over the Python code we actually write. Lightning helps scale models, and the code can be extended to match our requirements without the boilerplate growing with it. The LightningModule gives the research code structure in every respect through a fixed set of hooks and other components.
Code in a LightningModule is organized into five sections: computations, the train loop, the validation loop, the test loop, and the optimizers. First, __init__ defines the computations, and forward defines the inference path, so we know where the code is headed from one end. training_step implements the body of the training loop, and validation_step the body of the validation loop. Similarly, test_step implements the test loop, and configure_optimizers declares the module's optimizers and schedulers.
Lightning Transformers provides an interface for training SOTA transformer models. We can use Lightning callbacks, accelerators, or loggers, which help achieve better performance while training on the data. Speed optimizations such as DeepSpeed ZeRO and FairScale Sharded Training can be used to reduce memory use and improve performance. We can swap models and add configurations for optimizers and schedulers using Hydra config composition. There is also the option of building from scratch with transformer task abstractions, which help in research and experimentation on the code.
These tasks make it possible to train models against transformer architectures and datasets, or to swap models via Hydra. The Lightning module offers callbacks, accelerators, scaling, and many other options that help manage the code according to requirements and customizations.
Here, a small project using Lightning Transformers is taken as the focus.
The first step is to install the module.
pip install lightning-transformers
Now we must take the code from the source.
git clone https://github.com/PyTorchLightning/lightning-transformers.git
cd lightning-transformers
pip install .
PyTorch Lightning – Model
We can design multi-layered neural networks using PyTorch Lightning.
import torch
from torch.nn import functional as Fun
from torch import nn
from pytorch_lightning import LightningModule

class LitMNIST(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer_1 = nn.Linear(28 * 28, 144)
        self.layer_2 = nn.Linear(144, 288)
        self.layer_3 = nn.Linear(288, 10)

    def forward(self, x):
        batch_size, channels, height, width = x.size()
        x = x.view(batch_size, -1)
        x = self.layer_1(x)
        x = Fun.relu(x)
        x = self.layer_2(x)
        x = Fun.relu(x)
        x = self.layer_3(x)
        x = Fun.log_softmax(x, dim=1)
        return x
A LightningModule can be used like a plain PyTorch nn.Module, with the necessary changes added on top. Here we are using the MNIST dataset.
class LitMNIST(LightningModule):
    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = Fun.nll_loss(logits, y)
        return loss
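A note on the loss: forward ends with log_softmax, so training_step uses nll_loss; the combination is equivalent to cross_entropy applied to raw logits, which can be checked directly (a quick sketch, not part of the article's model):

```python
import torch
from torch.nn import functional as Fun

logits = torch.randn(8, 10)           # raw scores for 8 samples, 10 classes
targets = torch.randint(0, 10, (8,))  # integer class labels

# nll_loss over log-probabilities == cross_entropy over raw logits
a = Fun.nll_loss(Fun.log_softmax(logits, dim=1), targets)
b = Fun.cross_entropy(logits, targets)
assert torch.allclose(a, b)
```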
PyTorch Lightning – Data
Lightning modules need a DataLoader to operate. The following code prepares the data using the MNIST dataset.
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
import os
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
mnist_train = DataLoader(mnist_train, batch_size=64)
DataLoaders can be passed to Lightning in different ways. For example, a DataLoader can be passed directly to the fit function.
from pytorch_lightning import Trainer

model = LitMNIST()
trainer = Trainer()
trainer.fit(model, mnist_train)
We can also define the DataLoaders inside the Lightning module itself, which keeps the data processing close to the research model.
import pytorch_lightning as pl

class LitMNIST(pl.LightningModule):
    def train_dataloader(self):
        transform = transforms.Compose([transforms.ToTensor(),
                                        transforms.Normalize((0.1307,), (0.3081,))])
        mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
        return DataLoader(mnist_train, batch_size=64)

    def val_dataloader(self):
        transforms = ...
        mnist_val = ...
        return DataLoader(mnist_val, batch_size=64)

    def test_dataloader(self):
        transforms = ...
        mnist_test = ...
        return DataLoader(mnist_test, batch_size=64)
Alternatively, a LightningDataModule groups the dataset preparation and the DataLoaders together, making the data pipeline self-contained.
from typing import Optional
from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()
        self.train_dims = None
        self.vocab_size = 0

    def prepare_data(self):
        download_dataset()
        tokenize()
        build_vocab()

    def setup(self, stage: Optional[str] = None):
        vocab = load_vocab()
        self.vocab_size = len(vocab)
        self.train, self.val, self.test = load_datasets()
        self.train_dims = self.train.next_batch.size()

    def train_dataloader(self):
        transforms = ...
        return DataLoader(self.train, batch_size=64)

    def val_dataloader(self):
        transforms = ...
        return DataLoader(self.val, batch_size=64)

    def test_dataloader(self):
        transforms = ...
        return DataLoader(self.test, batch_size=64)
Properties of the dataset, such as the number of classes, can be fetched directly from the data module.
mnist_dm = MNISTDatamodule()
model = LitModel(num_classes=mnist_dm.num_classes)
trainer.fit(model, mnist_dm)

imagenet_dm = ImagenetDatamodule()
model = LitModel(num_classes=imagenet_dm.num_classes)
trainer.fit(model, imagenet_dm)
PyTorch Lightning examples
Initially, we must install PyTorch Lightning and define the data so that PyTorch is aware of the dataset used in the code. Then we add the training details, scheduler, and optimizer to the model. Finally, we can load the data using the following code.
import torch
import pytorch_lightning as pl
from torch import nn
from torch.optim import SGD
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

class Data(pl.LightningDataModule):
    def prepare_data(self):
        transform = transforms.Compose([transforms.ToTensor()])
        self.train_data = datasets.MNIST('', train=True, download=True, transform=transform)
        self.test_data = datasets.MNIST('', train=False, download=True, transform=transform)

    def train_dataloader(self):
        return DataLoader(self.train_data, batch_size=32, shuffle=True)

    def val_dataloader(self):
        # validation data is not shuffled
        return DataLoader(self.test_data, batch_size=32)

class model(pl.LightningModule):
    def __init__(self):
        super(model, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 256)
        self.fc2 = nn.Linear(256, 128)
        self.out = nn.Linear(128, 10)
        self.lr = 0.01
        self.loss = nn.CrossEntropyLoss()

    def forward(self, x):
        # flatten the image before the fully connected layers
        x = x.view(x.size(0), -1)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.out(x)

    def configure_optimizers(self):
        return SGD(self.parameters(), lr=self.lr)

    def training_step(self, train_batch, batch_idx):
        x, y = train_batch
        logits = self.forward(x)
        loss = self.loss(logits, y)
        return loss
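configure_optimizers here returns plain SGD with lr = 0.01; each optimizer step updates every parameter as w ← w − lr·∇w. The following self-contained sketch (toy tensors, not the article's MNIST model) verifies that update rule:

```python
import torch
from torch import nn
from torch.optim import SGD

layer = nn.Linear(2, 1)
opt = SGD(layer.parameters(), lr=0.01)

x = torch.randn(4, 2)
loss = layer(x).pow(2).mean()       # an arbitrary scalar loss
before = layer.weight.detach().clone()

opt.zero_grad()
loss.backward()
opt.step()

# plain SGD moves each weight by exactly -lr * its gradient
assert torch.allclose(layer.weight.detach(),
                      before - 0.01 * layer.weight.grad)
```

Lightning performs these zero_grad / backward / step calls automatically around each training_step, so they never appear in the LightningModule itself.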
The Lightning module is easy to use because readability improves: the engineering code is abstracted away, and only familiar Python modules remain. Moreover, code changes are easy to track, so reproducibility is straightforward in PyTorch Lightning. Lightning also runs code on GPUs, CPUs, and clusters without any additional management.
We hope that this EDUCBA information on “PyTorch Lightning” was beneficial to you. You can view EDUCBA’s recommended articles for more information.