
Deploying ML Models as API using FastAPI


Deployment is usually the last step in any data science project pipeline, and being able to integrate your ML/DL model into a web application is an important skill. There are several popular frameworks for this task, such as Flask and Django. Django is typically used for large-scale applications and takes a fair amount of time to set up, while Flask is usually the go-to choice for quickly deploying a model in a web app. Apart from these two, another framework has been gaining popularity, so much so that companies like Netflix and Uber are using it: FastAPI. So let's understand what makes FastAPI so popular and how you can use it to deploy ML models as an API.

FastAPI vs Flask:

  • FastAPI is much faster than Flask; in fact, it is one of the fastest Python web frameworks available.
  • Unlike Flask, FastAPI provides built-in data validation, so you can declare the exact data types of the data your API accepts (see the short sketch after this list).
  • It automatically generates interactive documentation for calling and testing your API (Swagger UI and ReDoc).
  • FastAPI comes with built-in support for asyncio, GraphQL and WebSockets.
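
To see the data validation point concretely, here is a minimal sketch, using a hypothetical /square/{number} endpoint that is not part of this tutorial, showing how FastAPI uses type hints to validate input automatically:

from fastapi import FastAPI

app = FastAPI()

# FastAPI reads the int type hint, converts the path parameter,
# and returns an automatic 422 error for non-integer values.
@app.get('/square/{number}')
def square(number: int):
    return {'number': number, 'square': number ** 2}

Calling /square/4 returns {"number": 4, "square": 16}, while /square/abc is rejected with a validation error, without any manual checks in our code.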

Installing FastAPI:

Installing FastAPI is the same as installing any other Python package, but along with FastAPI you also need to install uvicorn, which acts as the server. You can install both with the following command:

pip install fastapi uvicorn

Creating Basic API using FastAPI:

Before creating our ML model, let's start by creating a basic API that returns a simple message.

Python3

# Importing Necessary modules
from fastapi import FastAPI
import uvicorn
 
# Declaring our FastAPI instance
app = FastAPI()
 
# Defining path operation for root endpoint
@app.get('/')
def main():
    return {'message': 'Welcome to GeeksforGeeks!'}
 
# Defining path operation for /name endpoint
@app.get('/{name}')
def hello_name(name : str):
    # Defining a function that takes only a string as input
    # and outputs the following message.
    return {'message': f'Welcome to GeeksforGeeks!, {name}'}


Testing Our API:

The above code defines all the path operations in a file that we'll name basic-app.py. To run this file, open a terminal in the same directory and enter the following command:

uvicorn basic-app:app --reload

The above command follows this format:

  • basic-app refers to the name of the file (Python module) that contains our API.
  • app refers to the FastAPI instance we declared in that file.
  • --reload tells uvicorn to restart the server whenever the code changes.

After you run this command, go to http://127.0.0.1:8000/ and you'll see the response in your browser.

You see this message because you told FastAPI to return it as the response when the root path is called. One thing to note is that our message was a Python dictionary, but it was converted to JSON automatically. Along with this you also have another endpoint that displays a custom string in the message; to call it, go to http://127.0.0.1:8000/herumb and the corresponding message will be displayed in the browser.
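
Based on the return values defined in basic-app.py, the two endpoints produce JSON responses like the following:

# Response at http://127.0.0.1:8000/
{"message": "Welcome to GeeksforGeeks!"}

# Response at http://127.0.0.1:8000/herumb
{"message": "Welcome to GeeksforGeeks!, herumb"}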

Interactive API docs:

To get the above results we had to manually call each endpoint, but FastAPI also comes with interactive API docs, which you can access by adding /docs to your path. To open the docs for our API, go to http://127.0.0.1:8000/docs. There you'll find a page where you can test each endpoint of your API by checking the output it returns for the corresponding inputs, if any.

Deploying our ML Model:

Building Our Model:

For this tutorial, we are going to use GaussianNB as our model and the Iris dataset to train it on. To build and train the model we use the following code:

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

# Loading Iris Dataset
iris = load_iris()

# Getting features and targets from the dataset
X = iris.data
Y = iris.target

# Fitting our Model on the dataset
clf = GaussianNB()
clf.fit(X,Y)
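
Before wiring the model into the API, it can be helpful to sanity-check it. The snippet below is an optional sketch, not part of the original tutorial, that uses scikit-learn's cross_val_score to get a rough accuracy estimate:

from sklearn.model_selection import cross_val_score

# 5-fold cross-validation accuracy of GaussianNB on the Iris data
scores = cross_val_score(GaussianNB(), X, Y, cv=5)
print(f'Mean CV accuracy: {scores.mean():.3f}')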

Now that we have our model ready, we need to define the format of the data we'll provide to the model to make predictions. This step is important because our model works on numerical data, and we don't want to feed it data of any other type; to ensure this, we need to validate that the data we receive follows that norm.

The Request Body:

The data sent from the client side to the API is called a request body. The data sent from the API back to the client is called a response body.

To define our request body we'll use BaseModel from the pydantic module and describe the format of the data we'll send to the API. To do this, we create a class that inherits from BaseModel and define the features as attributes of that class, along with their type hints. Pydantic enforces these type hints at runtime and raises an error when the data is invalid. So let's create our request_body class:

from pydantic import BaseModel

class request_body(BaseModel):
    sepal_length : float
    sepal_width : float
    petal_length : float
    petal_width : float
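
As a quick illustration, not part of the original tutorial, instantiating request_body with invalid data raises a pydantic ValidationError, which is what protects the model from receiving non-numeric input:

from pydantic import ValidationError

try:
    # petal_width is not a number, so validation fails
    request_body(sepal_length=5.1, sepal_width=3.5,
                 petal_length=1.4, petal_width='not a number')
except ValidationError as e:
    print(e)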

The Endpoint:

Now that we have a request body, all that's left to do is add an endpoint that predicts the class and returns it as the response:

@app.post('/predict')
def predict(data : request_body):
    test_data = [[
            data.sepal_length, 
            data.sepal_width, 
            data.petal_length, 
            data.petal_width
    ]]
    class_idx = clf.predict(test_data)[0]
    return { 'class' : iris.target_names[class_idx]}

And there we have our ML model deployed as an API. Now all that’s left to do is test it out.

Testing our API:

To test our API we’ll be using Swagger UI now to access that you’ll just need to add /docs at the end of your path. So go to http://127.0.0.1:8000/docs. And you should see the following output:

Now click on the Try it Out button and enter the data you want the prediction for:

After you’ve entered all the values click on Execute, after this you can see your output under the responses section:

And as you can see we got our class as the response. And with that we have successfully deployed our ML model as an API using FastAPI.
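
If you prefer testing the endpoint programmatically instead of through Swagger UI, here is a minimal sketch using the requests library, assuming the server is running locally on port 8000 as above:

import requests

# Sample measurements sent as the JSON request body
payload = {
    'sepal_length': 5.1,
    'sepal_width': 3.5,
    'petal_length': 1.4,
    'petal_width': 0.2
}

response = requests.post('http://127.0.0.1:8000/predict', json=payload)
print(response.json())   # e.g. {'class': 'setosa'}

The complete code for the app is given below.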

Python3

from fastapi import FastAPI
import uvicorn
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from pydantic import BaseModel
 
# Creating FastAPI instance
app = FastAPI()
 
# Creating class to define the request body
# and the type hints of each attribute
class request_body(BaseModel):
    sepal_length : float
    sepal_width : float
    petal_length : float
    petal_width : float
 
# Loading Iris Dataset
iris = load_iris()
 
# Getting our Features and Targets
X = iris.data
Y = iris.target
 
# Creating and Fitting our Model
clf = GaussianNB()
clf.fit(X,Y)
 
# Creating an Endpoint to receive the data
# to make prediction on.
@app.post('/predict')
def predict(data : request_body):
    # Making the data in a form suitable for prediction
    test_data = [[
            data.sepal_length,
            data.sepal_width,
            data.petal_length,
            data.petal_width
    ]]
     
    # Predicting the Class
    class_idx = clf.predict(test_data)[0]
     
    # Return the Result
    return { 'class' : iris.target_names[class_idx]}
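
To serve this complete app, save it in a file (for example main.py; the filename here is just an assumption) and start it with uvicorn main:app --reload as before. Alternatively, since uvicorn is already imported, you can add a small entry point so the file can be run directly with python:

# Optional entry point so the file can be run with `python main.py`
# instead of invoking uvicorn from the command line.
if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=8000)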




Last Updated : 16 Sep, 2021