Developers love serverless. They prefer to focus on writing and deploying code and let the platform take care of the rest. Cloud Run lets you run any stateless container in a serverless environment. With Cloud Run, you can forget about infrastructure: it provides fast, automatic, request-aware scaling, so your service can scale down to zero and you pay only while it is handling requests.
To illustrate the concept, let's take an example. We are going to deploy a serverless microservice that transforms Word documents into PDFs. The transformation requires OpenOffice, so we will simply package OpenOffice inside our container and run it in a serverless environment.
Let’s jump into it. From the console, go to Cloud Run and open the Deployment page.
Select or paste the URL of the container image and click Create.
That’s all we needed to create a serverless container. No infrastructure to provision in advance, no YAML file, and no servers.
Cloud Run has imported our image, made sure it started, and generated a stable and secure HTTPS endpoint.
What we just deployed is a scalable microservice that transforms documents into PDFs. Let's see it in action by uploading a Word document to the running microservice.
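Uploading a document can also be scripted. Below is a minimal, hypothetical Python client; the service URL and filenames are placeholders, not the demo's actual values, and the content type assumes a `.docx` input:

```python
# Hypothetical client: POST a Word document to the converter and save the PDF.
# SERVICE_URL is a placeholder; use the HTTPS endpoint Cloud Run printed.
import urllib.request

SERVICE_URL = "https://pdf-service-xxxxx-uc.a.run.app"

DOCX_TYPE = (
    "application/vnd.openxmlformats-officedocument"
    ".wordprocessingml.document"
)

def build_request(doc_bytes, url=SERVICE_URL):
    """Build a POST request carrying the Word document as the body."""
    return urllib.request.Request(
        url,
        data=doc_bytes,
        headers={"Content-Type": DOCX_TYPE},
        method="POST",
    )

if __name__ == "__main__":
    with open("report.docx", "rb") as f:
        req = build_request(f.read())
    with urllib.request.urlopen(req) as resp, open("report.pdf", "wb") as out:
        out.write(resp.read())
```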
OpenOffice is not exactly a modern piece of software. It's roughly a 15-year-old binary weighing about 200 megabytes. Yet we just took that binary and deployed it as a serverless workload, because Cloud Run supports Docker containers. That means you can run any programming language, or any software, in a serverless way.
Let’s look at the code. We have a small piece of Python code that listens for incoming HTTP requests and calls OpenOffice to convert our document.
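The request flow can be sketched as follows. This is a stand-in, not the demo's actual source: the real code uses Flask, while this sketch uses only the Python standard library, and it assumes the `libreoffice` binary (which Alpine's package provides) is on the PATH. Function and file names are illustrative.

```python
# Minimal sketch of the conversion service: receive a document over HTTP,
# shell out to LibreOffice to convert it, and return the resulting PDF.
import os
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

def conversion_command(doc_path, out_dir):
    """Command line asking LibreOffice to convert a document to PDF."""
    return ["libreoffice", "--headless", "--convert-to", "pdf",
            "--outdir", out_dir, doc_path]

class ConvertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded document from the request body.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "input.docx")
            with open(src, "wb") as f:
                f.write(body)
            # LibreOffice writes input.pdf next to the chosen output dir.
            subprocess.run(conversion_command(src, tmp), check=True)
            with open(os.path.join(tmp, "input.pdf"), "rb") as f:
                pdf = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/pdf")
        self.end_headers()
        self.wfile.write(pdf)

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via $PORT.
    port = int(os.environ.get("PORT", 8080))
    HTTPServer(("0.0.0.0", port), ConvertHandler).serve_forever()
```

Listening on the port given by the `PORT` environment variable is the one contract Cloud Run imposes on a container.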
We also have a very small Dockerfile. It starts by defining our base image.
FROM python:3-alpine
ENV APP_HOME /app
WORKDIR $APP_HOME
# Install LibreOffice and fonts
RUN apk add libreoffice \
    build-base \
    msttcorefonts-installer fontconfig && \
    update-ms-fonts && \
    fc-cache -f
RUN apk add --no-cache build-base libffi libffi-dev && pip install cffi
RUN pip install Flask requests gevent
COPY . $APP_HOME
# Prevent LibreOffice from querying ::1 (IPv6 ::1 is rejected until Istio 1.1)
RUN mkdir -p /etc/cups && echo "ServerName 127.0.0.1" > /etc/cups/client.conf
CMD ["python", "to-pdf.py"]
In our case, it's the official Python Alpine image. Next, we install OpenOffice (via Alpine's LibreOffice package) and specify our start command. Then we package all of this into a container image using Cloud Build and deploy it to Cloud Run. On Cloud Run, our microservice can automatically scale to thousands of container instances in just a few seconds. We just took a legacy app and deployed it to a microservice environment without any change in code.
But sometimes you might want a little more control: bigger CPU sizes, access to GPUs, more memory, or running on a Kubernetes Engine cluster. Cloud Run on GKE uses the exact same interface, so we are going to deploy the exact same container image, this time to GKE.
Instead of a fully managed region, we now pick our GKE cluster. We get the same Cloud Run developer experience as before: a stable and secure endpoint that automatically scales our microservice.
Behind the scenes, Cloud Run and Cloud Run on GKE are powered by Knative, an open-source project for running serverless workloads that launched last year. This means we can deploy the exact same microservice to any Kubernetes cluster running Knative. Let's take a look.
We exported the microservice into a file, service.yaml. Then, using the command below, we'll deploy it to a managed Knative offering on another cloud provider.
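For reference, the exported manifest might look roughly like the following sketch of a Knative Service (the service name and image path are illustrative, not the demo's actual values):

```yaml
apiVersion: serving.knative.dev/v1   # Knative Serving API
kind: Service
metadata:
  name: pdf-service                  # illustrative service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/pdf-service   # illustrative image path
```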
kubectl apply -f service.yaml
We'll enter the command below to retrieve the endpoint URL.
kubectl get ksvc
The same microservice is now running on another cloud provider. Back on Google Cloud, let's inspect the running service by entering the command below:
gcloud beta run services describe pdf-service
If you're familiar with Kubernetes, these API versions and fields may look familiar.
In this case, we're not using Kubernetes. But since Cloud Run implements the Knative API, an extension of Kubernetes, this is an API object that looks like Kubernetes. Knative enables services to run portably between environments without vendor lock-in. Cloud Run gives you everything you love about serverless: there are no servers to manage, you get to stay in the code, and you get fast scale-up and, more importantly, scale-down to zero, so you pay nothing when no requests are being served. You can use any binary or language, thanks to the flexibility of containers, and you get access to the Google Cloud ecosystem and APIs. And you get a consistent experience wherever you want it: in a fully managed environment or on GKE.