External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted does not fit into the main memory of a computing device (usually RAM) and must instead reside in slower external memory (usually a hard drive).
External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in the main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted sub-files are combined into a single larger file.
The external merge sort algorithm sorts chunks that each fit in RAM, then merges the sorted chunks together. First, divide the file into runs small enough to fit into main memory. Next, sort each run in main memory using the merge sort algorithm. Finally, merge the resulting runs into successively larger runs until the whole file is sorted.
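The two phases above can be illustrated in miniature. This is a toy sketch in which in-memory lists stand in for the temporary run files, and the function name `external_sort_demo` is chosen here for illustration:

```python
import heapq

def external_sort_demo(data, run_size):
    """Toy illustration of external merge sort:
    split data into runs, sort each run, then k-way merge the sorted runs.
    Plain lists stand in for the temporary files a real implementation uses."""
    # Sorting phase: each run is small enough to fit in "memory".
    runs = [sorted(data[i:i + run_size]) for i in range(0, len(data), run_size)]
    # Merge phase: combine all sorted runs into one sorted sequence.
    return list(heapq.merge(*runs))

print(external_sort_demo([9, 4, 7, 1, 8, 2, 6, 3, 5], 3))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

`heapq.merge` performs the k-way merge with a min-heap, which is the same idea a file-based implementation uses, only over file streams instead of lists.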
When do we use external sorting?
- When the unsorted data is too large to be sorted in the computer's internal memory.
- When the data must reside on secondary storage devices, such as tape drives or disk arrays.
- Internal algorithms are used for the in-memory runs: quick sort has the best average-case runtime, while merge sort has the best worst-case runtime.
- To perform a sort-merge join operation on data.
- To answer an ORDER BY query whose result does not fit in memory.
- To detect duplicate elements in a large dataset.
- When a program must take very large input from the user.
Common external sorting algorithms include:
- Merge sort
- Tape sort
- Polyphase sort
- External radix sort
- External merge sort
Prerequisites: MergeSort, Merge K Sorted Arrays

The implementation uses the following parameters:
- input_file: name of the input file, input.txt
- output_file: name of the output file, output.txt
- run_size: size of a run (small enough to fit in RAM)
- num_ways: number of runs to be merged
To solve the problem follow the below idea:
The idea is straightforward: since the data is too large to be sorted all at once, it is divided into chunks that fit in memory, each chunk is sorted using merge sort, and the sorted chunks are written to temporary files. After the individual chunks are sorted, the whole dataset is sorted by merging the k sorted runs, using the idea of merging k sorted arrays.
Follow the below steps to solve the problem:
- Read input_file so that at most run_size elements are read at a time. For every run read into the array:
- Sort the run using MergeSort.
- Store the sorted run in a file, say file 'i' for the ith run.
- Merge the sorted files using the approach discussed in merge k sorted arrays.
Below is the implementation of the above approach.
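The article's original implementation is not reproduced in this excerpt, so here is a minimal Python sketch of the approach under some assumptions: the input file holds one integer per line, runs are written to temporary files rather than files named 1, 2, .., and the function name `external_merge_sort` is chosen for illustration:

```python
import heapq
import os
import tempfile

def external_merge_sort(input_file, output_file, run_size):
    """Sort a file of one-integer-per-line records that may not fit in RAM."""
    run_files = []
    # Sorting phase: read at most run_size numbers at a time, sort them,
    # and write each sorted run to its own temporary file.
    with open(input_file) as f:
        while True:
            run = [int(line) for _, line in zip(range(run_size), f)]
            if not run:
                break
            run.sort()
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            tmp.write("\n".join(map(str, run)) + "\n")
            tmp.close()
            run_files.append(tmp.name)
    # Merge phase: k-way merge the sorted run files with a min-heap.
    handles = [open(name) for name in run_files]
    with open(output_file, "w") as out:
        for value in heapq.merge(*(map(int, h) for h in handles)):
            out.write(f"{value}\n")
    for h in handles:
        h.close()
    for name in run_files:
        os.remove(name)
```

A real implementation would stream the merge output in blocks as well; the sketch keeps only `run_size` elements plus one buffered line per run in memory at any time.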
Time Complexity: O(N * log N)
- Sorting all the runs takes O(runs * run_size * log run_size), which equals O(N * log run_size).
- Merging the sorted runs takes O(N * log runs).
- Therefore, the overall time complexity is O(N * log run_size + N * log runs).
- Since log run_size + log runs = log (run_size * runs) = log N, the overall time complexity is O(N * log N).
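The identity in the last step can be sanity-checked numerically. The concrete values below (powers of two, so the logarithms are exact in floating point) are chosen only for illustration:

```python
import math

# Check that N*log(run_size) + N*log(runs) = N*log(N) when runs = N / run_size.
N, run_size = 2**14, 2**7           # 16384 elements in runs of 128
runs = N // run_size                # 128 sorted runs
sort_cost = N * math.log2(run_size)   # sorting phase: O(N log run_size)
merge_cost = N * math.log2(runs)      # merge phase:   O(N log runs)
total = sort_cost + merge_cost
assert total == N * math.log2(N)    # log run_size + log runs = log N
```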
Auxiliary Space: O(run_size), the space needed to hold one run in memory.
Note: This code won't work on an online compiler as it requires file creation permissions. When run on a local machine, it produces a sample input file "input.txt" with 10000 random numbers, sorts the numbers, and puts the sorted numbers in a file "output.txt". It also generates files with names 1, 2, .. to store the sorted runs.
This article is contributed by Aditya Goel. Please write comments if you find anything incorrect or if you want to share more information about the topic discussed above.