Select specific column of PySpark dataframe with its position
In this article, we will discuss how to select a specific column by its position from a PySpark dataframe in Python. For this, we will index into the dataframe.columns list inside the dataframe.select() method.
Syntax:
dataframe.select(dataframe.columns[column_number]).show()
where,
- dataframe is the dataframe name
- dataframe.columns is a Python list of column names; indexing it with a column number returns the name of the column at that position
- show() is used to display the selected column
Let’s create a sample dataframe.
Python3
import pyspark
from pyspark.sql import SparkSession

# create a SparkSession
spark = SparkSession.builder.appName('sparkdf').getOrCreate()

data = [["1", "sravan", "vignan"], ["2", "ojaswi", "vvit"],
        ["3", "rohith", "vvit"], ["4", "sridevi", "vignan"],
        ["1", "sravan", "vignan"], ["5", "gnanesh", "iit"]]

columns = ['student ID', 'student NAME', 'college']

dataframe = spark.createDataFrame(data, columns)

print("Actual data in dataframe")
dataframe.show()
Output:
Selecting a column by column number
Python3
dataframe.select(dataframe.columns[1]).show()
Output:
We can also select multiple columns with the same function using the slice operator (:). A slice columns[start:end] selects the columns from position start up to, but not including, position end.
Syntax: dataframe.select(dataframe.columns[column_start:column_end]).show()
Python3
dataframe.select(dataframe.columns[1:3]).show()
Output:
Last Updated :
08 Oct, 2021