
Google Cloud Platform – Serverless Containers


Developers love serverless: they want to focus on writing and deploying code and let the platform take care of the rest. Cloud Run lets you run any stateless container in a serverless environment. With Cloud Run, you can forget about infrastructure. It offers fast, request-aware autoscaling, so your service can scale down to zero and you only pay while it is handling requests.

To explain this concept, let's take an example. We are going to deploy a serverless microservice that transforms Word documents into PDFs. To perform this transformation we need an office suite, so we will simply add OpenOffice (in practice, its successor LibreOffice) inside our container and then run it in a serverless environment.

Let’s jump into it. From the console, go to Cloud Run and open the Deployment page.

Select or paste the URL of the container image and click Create.

That’s all we needed to create a serverless container. No infrastructure to provision in advance, no YAML file, and no servers.
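If you prefer the command line, the same deployment can be done with gcloud. A minimal sketch, where the service name pdf-service and the image path are placeholders:

gcloud run deploy pdf-service \
    --image gcr.io/my-project/pdf-service \
    --region us-central1 \
    --allow-unauthenticated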

Cloud Run has imported our image, made sure that it starts, and given us a stable and secure HTTPS endpoint.

What we just deployed is a scalable microservice that transforms a document into a PDF. Let's see it in action by uploading a document to the running microservice.
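For example, we can exercise the endpoint with curl. This is a sketch; the service URL is a placeholder, and the file field matches the 'file' form part that the code shown below expects:

curl -F file=@document.docx https://pdf-service-xxxxx-uc.a.run.app -o document.pdf

The service also accepts a GET request with a url query parameter pointing at a document to download and convert.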

OpenOffice is not exactly a modern piece of software. It is roughly a 15-year-old, 200-megabyte binary. And we just took that binary and deployed it as a serverless workload with Cloud Run, because Cloud Run supports Docker containers. That means you can run any programming language or any software in a serverless way.

Let’s look at the code. We have a small piece of Python code that listens for incoming HTTP requests and calls OpenOffice to convert our document.

Python3
# Python program to convert office
# documents into PDFs
  
import os
import shutil
import requests
import tempfile
  
from gevent.pywsgi import WSGIServer
from flask import Flask, after_this_request, render_template, request, send_file
from subprocess import call
  
UPLOAD_FOLDER = '/tmp'
ALLOWED_EXTENSIONS = set(['doc', 'docx', 'xls', 'xlsx'])
  
app = Flask(__name__)
  
  
# Convert to PDF using LibreOffice in headless mode
def convert_file(output_dir, input_file):
    call('libreoffice --headless --convert-to pdf --outdir %s %s ' %
         (output_dir, input_file), shell=True)
  
  
def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
  
  
@app.route('/', methods=['GET', 'POST'])
def api():
    work_dir = tempfile.TemporaryDirectory()
    file_name = 'document'
    input_file_path = os.path.join(work_dir.name, file_name)
    # LibreOffice writes the output file with the same name but a .pdf extension
    output_file_path = os.path.join(work_dir.name, file_name + '.pdf')
  
    if request.method == 'POST':
        # check if the post request has the file part
        if 'file' not in request.files:
            return 'No file provided'
        file = request.files['file']
        if file.filename == '':
            return 'No file provided'
        if not (file and allowed_file(file.filename)):
            return 'File type not allowed'
        file.save(input_file_path)
  
    if request.method == 'GET':
        url = request.args.get('url', type=str)
        if not url:
            return render_template('index.html')
        # Download from URL
        response = requests.get(url, stream=True)
        with open(input_file_path, 'wb') as file:
            shutil.copyfileobj(response.raw, file)
        del response
  
    convert_file(work_dir.name, input_file_path)
  
    @after_this_request
    def cleanup(response):
        work_dir.cleanup()
        return response
   
    return send_file(output_file_path, mimetype='application/pdf')
  
  
if __name__ == "__main__":
    http_server = WSGIServer(('', int(os.environ.get('PORT', 8080))), app)
    http_server.serve_forever()
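The server listens on the port given by the PORT environment variable, which Cloud Run injects at runtime, falling back to 8080. To try the service locally before containerizing it, you can run it directly (a sketch; it assumes the file is saved as to-pdf.py, per the Dockerfile below, and that LibreOffice is installed locally):

PORT=8080 python to-pdf.py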


We also have a very small Dockerfile. It starts by defining our base image.

FROM python:3-alpine
ENV APP_HOME /app
WORKDIR $APP_HOME
RUN apk add libreoffice \
    build-base \
    # Install fonts
    msttcorefonts-installer fontconfig && \
    update-ms-fonts && \
    fc-cache -f
RUN apk add --no-cache build-base libffi libffi-dev && pip install cffi
RUN pip install Flask requests gevent
COPY . $APP_HOME
# prevent libreoffice from querying ::1 (ipv6 ::1 is rejected until istio 1.1)
RUN mkdir -p /etc/cups && echo "ServerName 127.0.0.1" > /etc/cups/client.conf
CMD ["python", "to-pdf.py"]

In our case, it's the official Python Alpine image. We then install LibreOffice and specify our start command. Next, we packaged all of this into a container image using Cloud Build and deployed it to Cloud Run. On Cloud Run, our microservice can automatically scale to thousands of container instances in just a few seconds. We just took a legacy app and deployed it as a microservice without any change to the code.
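The build step itself is a single command. A sketch, using the same placeholder image path as above:

gcloud builds submit --tag gcr.io/my-project/pdf-service

Cloud Build packages the directory into an image using the Dockerfile and pushes it to Container Registry, where Cloud Run can deploy it.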

But sometimes you might want a little more control: bigger CPU sizes, access to GPUs, more memory, or maybe running on a Kubernetes Engine cluster. Cloud Run on GKE uses the exact same interface. We are going to deploy the exact same container image, this time to GKE.

And instead of a fully-managed region, we’re now picking our GKE cluster. We get the same Cloud Run developer experience as before. We get a stable and secure endpoint that automatically scales our microservice.
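From the command line, the only change is the target platform. A sketch using flags from the beta CLI of the time; the cluster name and zone are placeholders:

gcloud beta run deploy pdf-service \
    --image gcr.io/my-project/pdf-service \
    --platform gke \
    --cluster my-cluster \
    --cluster-location us-central1-a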

Behind the scenes, Cloud Run and Cloud Run on GKE are powered by Knative, an open-source project for running serverless workloads that launched last year. This means we can actually deploy the exact same microservice to any Kubernetes cluster running Knative. Let's take a look.

We exported the microservice definition into a file, service.yaml. Then, using the command below, we'll deploy it to a managed Knative offering on another cloud provider.

kubectl apply -f service.yaml
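For reference, a minimal Knative Service manifest looks roughly like this. This is a sketch: the name and image path are placeholders, the exact apiVersion depends on the Knative release, and the exported file will carry extra metadata.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: pdf-service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/pdf-service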

We’ll enter the below command to retrieve the URL endpoint. 

kubectl get ksvc

The same microservice is now running on another cloud provider. Back on Google Cloud, let's inspect the original service by entering the command below:

gcloud beta run services describe pdf-service

If you're familiar with Kubernetes, the API versions and fields in the output may look familiar.

In this case, we're not using Kubernetes. But since Cloud Run implements the Knative API, an extension of the Kubernetes API, the object it returns looks like a Kubernetes object. Knative lets services run portably between environments without vendor lock-in. Cloud Run gives you everything you love about serverless: there are no servers to manage, you get to stay in your code, and scale-up is fast. More importantly, it scales down to zero, so you pay nothing when no requests are being served. You can use any binary or language, because it builds on the flexibility of containers, and you get access to the Google Cloud ecosystem and APIs. And you get a consistent experience wherever you want it: in a fully-managed environment or on GKE.


