Create a SQL table from Pandas dataframe using SQLAlchemy

  • Last Updated : 26 Jan, 2022

In this article, we will discuss how to create a SQL table from Pandas dataframe using SQLAlchemy.

As a first step, establish a connection with your existing database using the create_engine() function of SQLAlchemy.

Syntax:

from sqlalchemy import create_engine

engine = create_engine("dialect+driver://username:password@host:port/database")

Explanation:

  • dialect – Name of the DBMS (e.g. postgresql, mysql, sqlite)
  • driver – Name of the DB API that moves information between SQLAlchemy and the database
  • username, password – Database user credentials
  • host:port – Host name and port number of the database server
  • database – Name of the database

Example:

Python3

# Create a PostgreSQL engine; the credentials and
# database name here are placeholders.
engine = create_engine(
    "postgresql+psycopg2://user:password@localhost:5432/mydatabase")

The above example creates a Dialect object specific to PostgreSQL, and a Pool object which establishes a DBAPI connection at localhost:5432 when a connection request is received.
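As an alternative to formatting the URL string by hand, newer SQLAlchemy versions (1.4+) also provide a URL.create() helper. The sketch below shows the same PostgreSQL URL built programmatically; the credentials, host, and database name are placeholders, not values from this article.

```python
# Build the same kind of connection URL programmatically.
# All credentials and names below are placeholders.
from sqlalchemy.engine import URL

url = URL.create(
    drivername="postgresql+psycopg2",  # dialect+driver
    username="user",
    password="p@ssword",  # special characters are escaped when rendered
    host="localhost",
    port=5432,
    database="mydatabase",
)
# The URL object can be passed directly to create_engine(url).
```

Passing a URL object instead of a hand-built string avoids quoting problems when a password contains characters like @ or /.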

SQLAlchemy includes many Dialect implementations for the most common databases like Oracle, MS SQL, PostgreSQL, SQLite, MySQL, and so on. To write a dataframe into any of these databases, pandas provides the DataFrame.to_sql() method, which accepts a SQLAlchemy engine.

Syntax: pandas.DataFrame.to_sql(name, con, schema, if_exists, index, chunksize, dtype)

Explanation: 

  • name – Name of the SQL table the dataframe is written to
  • con – The SQLAlchemy engine (or connection) for the target database
  • if_exists – By default (‘fail’), pandas raises an error if a table with that name already exists. Use ‘replace’ to drop the old table and write the dataframe in its place, or ‘append’ to add the rows to the existing table.
  • index – (bool) Whether to write the dataframe index as a column that identifies each row uniquely.
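The effect of if_exists can be seen with a small throwaway dataframe against an in-memory SQLite database; the table and column names below are made up for illustration.

```python
import pandas as pd
from sqlalchemy import create_engine

# An in-memory SQLite database keeps the example self-contained.
engine = create_engine("sqlite+pysqlite:///:memory:")

df = pd.DataFrame({"id": [1, 2], "city": ["Pune", "Delhi"]})

# The first write creates the table (index=False skips the index column).
df.to_sql("cities", engine, index=False)

# 'append' adds the rows to the existing table (4 rows total) ...
df.to_sql("cities", engine, if_exists="append", index=False)

# ... while 'replace' drops the table and recreates it (back to 2 rows).
df.to_sql("cities", engine, if_exists="replace", index=False)

print(pd.read_sql("SELECT COUNT(*) AS n FROM cities", engine))
```

Calling to_sql() a second time without if_exists would raise a ValueError, since the default is ‘fail’.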

For this example, we can use an inbuilt, in-memory-only SQLite database, which is one of the easiest ways to test things; the procedure is the same for all the other databases supported by SQLAlchemy. The code below assumes a sample dataset saved locally as superstore.csv.

Let us first import the necessary packages. Next, establish a connection with an in-memory-only SQLite database, made accessible to Python through the pysqlite driver. Finally, load the dataframe and push it to the SQLite database using the to_sql() function, as shown below.

Python3
# import the necessary packages
import pandas
from sqlalchemy import create_engine

# Create the engine to connect to the in-memory
# SQLite database
engine = create_engine("sqlite+pysqlite:///:memory:")

# Read data from CSV which will be
# loaded as a dataframe object
data = pandas.read_csv('superstore.csv')

# print a sample of the dataframe
print(data.head())

# Write the data into a table in the SQLite database
data.to_sql('loan_data', engine)

Output:

(output screenshot omitted)

In order to check whether the dataframe has been uploaded as a table, we can query the table using SQLAlchemy as shown below.

Python3
from sqlalchemy import text
  
# establish the connection with the engine object
with engine.connect() as conn:
    
    # let's select the column credit_history
    # from the loan data table
    result = conn.execute(text("SELECT Credit_History FROM loan_data"))
      
    # print the result
    for row in result:
        print(row.Credit_History)

Output:

(the values of the Credit_History column are printed, one per row)
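When the query needs a value supplied from Python, text() also supports bound parameters. The sketch below builds its own tiny stand-in table so it runs without the superstore dataset; the table and column names are assumptions carried over from the example above.

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite+pysqlite:///:memory:")

# A tiny stand-in table, since the superstore dataset
# may not be available locally.
pd.DataFrame(
    {"Credit_History": [1, 0, 1], "Loan_ID": ["A", "B", "C"]}
).to_sql("loan_data", engine, index=False)

with engine.connect() as conn:
    # Bound parameters (:ch) keep Python values out of the SQL string,
    # avoiding manual quoting and SQL injection.
    result = conn.execute(
        text("SELECT Loan_ID FROM loan_data WHERE Credit_History = :ch"),
        {"ch": 1},
    )
    for row in result:
        print(row.Loan_ID)
```

Here the dictionary passed to conn.execute() supplies the value for the :ch placeholder at execution time.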

