PySpark partitionBy() method
PySpark partitionBy() is used to partition a DataFrame by column values while writing it to disk/file system. When you write a DataFrame to disk using partitionBy(), PySpark splits the records based on the partition column and stores each partition's data in its own sub-directory.
PySpark Partition is a way to split a large dataset into smaller datasets based on one or more partition keys. You can also create a partition on multiple columns using partitionBy(), just pass columns you want to partition as an argument to this method.
Syntax: partitionBy(*cols)
Let's create a DataFrame by reading a CSV file. You can find the dataset here: Cricket_data_set_odi.csv
Create dataframe for demonstration:
Python3
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('sparkdf').getOrCreate()

df = spark.read.option("header", True).csv("Cricket_data_set_odi.csv")
df.printSchema()
Output:
PySpark partitionBy() with One column:
From the above DataFrame, we will use Team as the partition key for the examples below:
Python3
df.write.option("header", True) \
    .partitionBy("Team") \
    .mode("overwrite") \
    .csv("Team")
cd Team
ls
Output:
PySpark partitionBy() with Multiple Columns:
You can also create partitions on multiple columns using PySpark partitionBy(). Just pass columns you want to partition as arguments to this method.
From the above DataFrame, we are using Team and Speciality as partition keys for the examples below.
Python3
df.write.option("header", True) \
    .partitionBy("Team", "Speciality") \
    .mode("overwrite") \
    .csv("Team-Speciality")
cd Team-Speciality
ls
cd Team=Ind
ls
Output:
Control Number of Records per Partition File:
Use the option maxRecordsPerFile if you want to cap the number of records per output file. This is especially helpful when your data is skewed (some partitions have very few records while others have a very large number).
Python3
df.write.option("header", True) \
    .option("maxRecordsPerFile", 2) \
    .partitionBy("Team") \
    .mode("overwrite") \
    .csv("Team")
cd Team
ls
Output:
Last Updated :
30 Jun, 2021