How to use a DataLoader in PyTorch?
Working with large datasets makes it impractical to load everything into memory at once: the data may exceed the memory available on the system and cause an out-of-memory error, and loading an entire heavy dataset up front slows the program down. PyTorch addresses this with DataLoader, which loads data in batches (automatic batching) and can parallelize loading across worker processes, which speeds up training and keeps memory usage low.
The DataLoader constructor resides in the torch.utils.data module. It takes many parameters, but the only mandatory argument is the dataset to load from; all the others are optional.
DataLoader(dataset, batch_size=32, shuffle=True, sampler=None)
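As a minimal sketch of the call above: any object with `__len__` and `__getitem__` can serve as the dataset, so a plain Python list is enough for a quick demonstration (the variable names here are illustrative).

```python
from torch.utils.data import DataLoader

# Any sequence-like object can act as a dataset; a plain list works here.
data = list(range(10))

# Only `dataset` is required; batch_size, shuffle, etc. are optional.
loader = DataLoader(data, batch_size=4, shuffle=False)

for batch in loader:
    # The default collate function stacks each batch into a tensor.
    print(batch)
```

With 10 samples and a batch size of 4, the loader yields two full batches and one final batch of 2.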
DataLoaders on Custom Datasets:
To use a DataLoader with a custom dataset, we subclass torch.utils.data.Dataset and override the following two methods:
- The `__len__()` method: returns the size of the dataset.
- The `__getitem__()` method: returns the sample at the given index from the dataset.
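A minimal sketch of such a subclass, using a hypothetical dataset that generates (n, n²) pairs on the fly:

```python
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Hypothetical dataset yielding (n, n**2) pairs for illustration."""

    def __init__(self, n):
        self.n = n

    def __len__(self):
        # Total number of samples in the dataset
        return self.n

    def __getitem__(self, idx):
        # Return the sample at the given index
        return idx, idx ** 2

loader = DataLoader(SquaresDataset(8), batch_size=4, shuffle=False)
for xs, ys in loader:
    # Each field of the (idx, idx**2) tuple is batched into its own tensor
    print(xs, ys)
```

The default collate function batches the tuple elements separately, so each iteration yields one tensor of indices and one tensor of squares.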
DataLoaders on Built-in Datasets:
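Among the datasets PyTorch ships with, a simple built-in option is torch.utils.data.TensorDataset, which wraps one or more tensors sharing a first dimension; a minimal sketch (the tensors here are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# TensorDataset wraps tensors that share their first (sample) dimension.
features = torch.arange(12, dtype=torch.float32).reshape(6, 2)
labels = torch.tensor([0, 1, 0, 1, 0, 1])
dataset = TensorDataset(features, labels)

loader = DataLoader(dataset, batch_size=3)
for x, y in loader:
    # Each batch is a slice along the first dimension of every wrapped tensor
    print(x.shape, y.shape)
```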