PySpark – Split dataframe into equal number of rows
When working with a huge dataset, it is often better to split it into equal chunks and then process each chunk individually. This is possible when the operation on the dataframe is independent of the rows. Each chunk, i.e. each equally split dataframe, can then be processed in parallel, making more efficient use of the available resources. In this article, we will discuss how to split PySpark dataframes into an equal number of rows.
Creating Dataframe for demonstration:
In the above code block, we have defined the schema structure for the dataframe and provided sample data. Our dataframe consists of two string-type columns and 12 records.
Example 1: Split dataframe using ‘DataFrame.limit()’
We will make use of the limit() method to create ‘n’ equal dataframes.

Syntax: DataFrame.limit(num), which limits the result count to the number specified.
Example 2: Split the dataframe, perform the operation and concatenate the result
We will now split the dataframe into ‘n’ equal parts, perform an operation on each part individually, and then concatenate the results into a `result_df`. This extends the previous example to show how a dataframe operation can be applied separately to each chunk, with the individual chunks then appended to produce a new dataframe whose length equals that of the original.