How to name aggregate columns in PySpark DataFrame ?

  • Last Updated : 17 Jun, 2021

In this article, we are going to see how to name aggregate columns in a PySpark DataFrame.

We can do this by using alias() after groupBy(). groupBy() groups the rows that share a value in the given column so that an aggregate function can be applied to each group, and alias() changes the name of the new column produced by the aggregation.

Syntax: dataframe.groupBy("column_name1").agg(aggregate_function("column_name2").alias("new_column_name"))

Where,

  • dataframe is the input dataframe
  • aggregate_function is the function applied to each group, such as sum(), avg(), or count()
  • new_column_name is the name of the new aggregate column
  • alias() is the method used to set the new column name

Creating Dataframe for demonstration:



Python3




# importing module
import pyspark
  
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
  
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
  
# list of employee data with 10 row values
data =[["1","sravan","IT",45000],
       ["2","ojaswi","IT",30000],
       ["3","bobby","business",45000],
       ["4","rohith","IT",45000],
       ["5","gnanesh","business",120000],
       ["6","siva nagulu","sales",23000],
       ["7","bhanu","sales",34000],
       ["8","sireesha","business",456798],
       ["9","ravi","IT",230000],
       ["10","devi","business",100000],
       ]
  
# specify column names
columns=['ID','NAME','sector','salary']
  
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data,columns)
  
# display dataframe
dataframe.show()

Output:

Example 1: Python program to group the salary among different sectors and name the result Employee_salary using sum aggregation. The sum() function is available in the pyspark.sql.functions package, so we need to import it.

Python3




# importing sum function
from pyspark.sql.functions import sum
  
# group the salary among different sectors
# and name  as Employee_salary by sum aggregation
dataframe.groupBy("sector").agg(
    sum("salary").alias("Employee_salary")).show()

Output:

Example 2: Python program to group the salary among different sectors and name the result Average_Employee_salary using average aggregation.



Syntax: avg("column_name")

Python3




# importing avg function
from pyspark.sql.functions import avg
  
# group the salary among different sectors
# and name it Average_Employee_salary
# by average aggregation
dataframe.groupBy("sector").agg(
    avg("salary").alias("Average_Employee_salary")).show()

Output:

Example 3: Python program to group the salary among different sectors and name the result Total-People using count aggregation.

Python3




# importing count function
from pyspark.sql.functions import count
  
# group the salary among different
# sectors and name it Total-People
# by count aggregation
dataframe.groupBy("sector").agg(
    count("salary").alias("Total-People")).show()

Output:
