How to create a PySpark dataframe with schema?
In this article, we will discuss how to create a dataframe with a schema using PySpark. In simple words, the schema is the structure of a dataset or dataframe.
| Method | Description |
|---|---|
| SparkSession | The entry point to Spark SQL. |
| SparkSession.builder | Gives access to the Builder API used to configure the session. |
| SparkSession.master(local) | Sets the Spark master URL so the session runs locally. |
| SparkSession.appName() | Sets the name of the application. |
| SparkSession.getOrCreate() | Returns the existing SparkSession if there is one; otherwise creates a new one. |
To create a dataframe with a schema, we use:

Syntax: spark.createDataFrame(data, schema)

- data – the list of values from which the dataframe is created.
- schema – the structure of the dataset, or a list of column names.

where spark is the SparkSession object.
- In the code below, we create a new SparkSession object named 'spark'.
- Then we create the data values and store them in a variable named 'data'.
- Next, we define the schema for the dataframe and store it in a variable named 'schm'.
- We then create the dataframe with the createDataFrame() function, passing it the data and the schema.
- Finally, we call show() to display the dataframe.
In the code below, we create the dataframe by passing the data and the schema directly to the createDataFrame() function.