How to name aggregate columns in a PySpark DataFrame?
In this article, we are going to see how to name aggregate columns in the Pyspark dataframe.
We can do this by calling alias() after groupBy(). groupBy() groups the rows of a DataFrame by the values in one or more columns so that aggregate functions can be applied to each group; alias() renames the new column produced by the aggregation.
Syntax: dataframe.groupBy("column_name1").agg(aggregate_function("column_name2").alias("new_column_name"))
- dataframe is the input dataframe
- aggregate_function is the function applied to the grouped data, such as sum(), avg(), or count()
- new_column_name is the name of the new aggregate column
- alias is the method used to set the new column name
Creating Dataframe for demonstration:
Example 1: Python program to group the salary by sector and name the result Employee_salary using sum aggregation. The sum() function is available in the pyspark.sql.functions package, so we need to import it.
Example 2: Python program to group the salary by sector and name the result Average_Employee_salary using average aggregation.
Example 3: Group the salary by sector and name the result Total-People using count aggregation.