External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead, they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted sub-files are combined into a single larger file.
One example of external sorting is the external merge sort algorithm, which sorts chunks that each fit in RAM, then merges the sorted chunks together. We first divide the file into runs such that each run is small enough to fit into main memory. Then we sort each run in main memory using the merge sort algorithm. Finally, we merge the resulting runs into successively larger runs until the entire file is sorted.
When do we use external sorting?
1. When the unsorted data is too large to be sorted in the computer's internal memory.
2. External sorting uses secondary storage devices, such as hard disks or tape drives.
3. When the data set is large, divide-and-conquer sorts such as merge sort and quick sort are commonly used.
4. Quick sort: best average-case runtime.
5. Merge sort: best worst-case runtime.
6. To perform sort-merge join operations on data.
7. To process ORDER BY queries.
8. To detect duplicate elements.
9. When we need to take large input from the user.
Examples of external sorting algorithms include:
- Merge sort
- Tape sort
- Polyphase sort
- External radix
- External merge
Prerequisites for the algorithm/code:
- MergeSort: used to sort the individual runs (a run is a part of the file that is small enough to fit in main memory).
- Merge K Sorted Arrays: used to merge the sorted runs.
Below are the parameters used in the C++ implementation:
- input_file: name of the input file (input.txt)
- output_file: name of the output file (output.txt)
- run_size: size of a run (small enough to fit in RAM)
- num_ways: number of runs to be merged
The idea is simple: all the elements cannot be sorted at once because the data set is too large to handle in memory. Instead, the data is divided into chunks, each chunk is sorted using merge sort, and the sorted chunks are written out to files. After the individual chunks are sorted, the whole data set is sorted by applying the idea of merging k sorted arrays.
- Read input_file such that at most 'run_size' elements are read at a time. Do the following for every run read into an array:
- Sort the run using MergeSort.
- Store the sorted array in a file; say file 'i' for the ith run.
- Merge the sorted files using the approach discussed in Merge K Sorted Arrays.
Following is a C++ implementation of the above steps.
- Time Complexity: O(n * log n).
Time taken for merge sort is O(runs * run_size * log run_size), which is equal to O(n log run_size). Merging the sorted arrays takes O(n * log runs). Therefore, the overall time complexity is O(n * log run_size + n * log runs). Since log run_size + log runs = log (run_size * runs) = log n, the resulting time complexity is O(n * log n).
- Auxiliary Space: O(run_size).
run_size is the space needed to hold one run in memory.
Note: This code won't work on an online compiler, as it requires file-creation permissions. When run on a local machine, it produces a sample input file "input.txt" with 10000 random numbers, sorts the numbers, and writes the sorted numbers to "output.txt". It also generates files named 1, 2, … to store the sorted runs.
This article is contributed by Aditya Goel. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.