Docker Compose tool to run multi-container applications


The goal of this article is to show how to run multi-container applications using a single command. Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a configuration file (a YAML file) to configure your Docker containers. Then, with a single command, you create and start all the services (containers) from your configuration. Let me explain this with an example.

This article expects you to be familiar with Docker and have some experience using it.

Let's say we have a simple application with two components: a Flask app and a Redis database. I will run the complete application both with and without the docker-compose tool, which will show you the main benefit of Compose.

Creating the project

Create the directory gfg_docker_compose which will hold our project

$ mkdir gfg_docker_compose

Move to that directory

$ cd gfg_docker_compose

Create the requirements.txt file

gfg_docker_compose/ $ touch requirements.txt

Copy the below contents to requirements.txt

requirements.txt




flask
redis

Create the file app.py which will have the code for our Flask app

gfg_docker_compose/ $ touch app.py

Copy the below code to app.py 
 

Python3




from flask import Flask, request, jsonify
from redis import Redis

# initializing a new flask app
app = Flask(__name__)

# initializing a new redis client
# The hostname is the same as the redis service name
# in the docker compose configuration
redis = Redis(host="redis", db=0, socket_timeout=5,
              encoding="utf-8", decode_responses=True)

# Our app has a single route allowing two methods: POST and GET.


@app.route('/', methods=['POST', 'GET'])
def animals():

    if request.method == 'POST':
        # Take the name of the animal from the JSON body
        name = request.json['name']
        # push the name to the end of the animals list in the redis db
        redis.rpush('animals', name)
        # return a success response
        return jsonify({'status': 'success'})

    if request.method == 'GET':
        # return the complete list of names from the animals list
        return jsonify(redis.lrange('animals', 0, -1))

Explanation:

The `/` route accepts two methods, GET and POST. Whenever a POST request is made with a name, that name is appended to the end of the animals list in Redis. For a GET request we return the complete list of names from the animals list.
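For example, once the application is running and mapped to port 5000 (as done later in this article), the route can be exercised with a short script like the one below. This is only a minimal sketch; it assumes the requests library is installed on the host and that the app is reachable at localhost:5000.

Python3

import requests

# assumes the 5000:5000 port mapping used later in this article
BASE_URL = "http://localhost:5000/"

# POST a couple of animal names; each one is appended to the animals list in redis
requests.post(BASE_URL, json={"name": "lion"})
requests.post(BASE_URL, json={"name": "tiger"})

# GET returns the complete list of names stored so far
print(requests.get(BASE_URL).json())  # e.g. ["lion", "tiger"]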

Create the Dockerfile

gfg_docker_compose/ $ touch Dockerfile

Copy the below code to the Dockerfile

Dockerfile




# pulling the base image
FROM python:3.7.0-alpine3.8
 
# Creating a folder and moving into it
WORKDIR /usr/src/app
 
# Copying the dependency list
COPY requirements.txt ./
 
# Installing the python dependencies
RUN pip install --no-cache-dir -r requirements.txt
 
# Copying the flask code into the container
COPY . .
 
ENV FLASK_APP=app.py
 
EXPOSE 5000
 
# Starting the server
CMD flask run --host=0.0.0.0

Explanation:

We start with the base image python:3.7.0-alpine3.8. We copy the requirements.txt file and install all our Flask app dependencies. Then we copy the app.py file into the container and finally run the Flask app on port 5000.
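The article drives these build and run steps with the docker CLI below; if you prefer doing the same from Python, the Docker SDK for Python (the docker package) can build and run the image as well. The snippet below is only a minimal sketch under that assumption (the package installed and the Docker daemon running); it mirrors the docker build and docker run commands shown later.

Python3

import docker

# Connect to the local Docker daemon (assumes it is running and accessible)
client = docker.from_env()

# Build the image from the Dockerfile in the current directory,
# equivalent to: docker build -t gfg/flask-app .
image, build_logs = client.images.build(path=".", tag="gfg/flask-app")

# Run the container and map port 5000 to the host,
# equivalent to: docker run -p 5000:5000 gfg/flask-app
container = client.containers.run("gfg/flask-app",
                                  detach=True,
                                  ports={"5000/tcp": 5000})
print(container.short_id)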

And now our Docker application is ready.

Without the docker-compose tool

Starting and using this application without the Compose tool is tedious for a multi-container application, because you need to remember the complete configuration and repeat it every time you run the application. Let's see how this normally works without the Compose tool.

Now you will have a project tree like this:

gfg_docker_compose
--- app.py
--- requirements.txt
--- Dockerfile

Now we will run and start our Redis server container (run this in its own terminal, since the container stays in the foreground)

gfg_docker_compose/ $  docker run --name=redis redis:4.0.11-alpine

redis server is started

Using that command we pull the redis:4.0.11-alpine image and run a Redis container. Now that Redis has started, you need to get its container IP address.

gfg_docker_compose/ $ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis

Extract redis container IP Address

This gives you an IP address, which you need to put into the host parameter in app.py.

Now the line in app.py looks like this:

redis = Redis(host="IPAddress", db=0, socket_timeout=5,
              charset="utf-8", decode_responses=True)

where IPAddress is the IP address you get from the redis container.

Build the flask application

gfg_docker_compose/ $  docker build -t gfg/flask-app .

Our gfg/flask-app image is successfully built

Wait for some time and the application image will be built

Now we will start our flask app container as well. 

Open a new terminal tab and run the below command

gfg_docker_compose/ $  docker run -p 5000:5000 gfg/flask-app

Our flask app is started

Using that command we run our Flask app container from the gfg/flask-app image we built earlier. The -p flag maps port 5000 from the container to the host.

Finally, when you open the Flask app in a browser, you should see something like this.

Our application is working

With the docker-compose tool

Using the docker-compose tool, the setup process for multi-container Docker applications becomes fairly easy. The simple idea behind it is that we write the complete container configuration in a YAML file called docker-compose.yml, and then with simple commands we can start and stop the application. This method also lets us share our Docker applications easily with other developers, by simply sharing the docker-compose file along with the project.

Create the docker-compose.yml file

gfg_docker_compose/ $  touch docker-compose.yml

Now the project tree looks like this:

gfg_docker_compose
--- app.py
--- requirements.txt
--- Dockerfile
--- docker-compose.yml

Now copy the below YAML code to docker-compose.yml file.

YAML




version: '3'
 
services:
  app:
    build: .
    image: gfg/flask-app
    environment:
      - FLASK_ENV=development
    ports:
      - 5000:5000
 
  redis:
    image: redis:4.0.11-alpine

Explanation:

  • version: states the Compose file format version; here we are using version 3.
  • services: holds all of our application service (container) configurations.
  • app: we have named our Flask app service app; feel free to give it any other name you want.
  • build: the relative path to the Dockerfile.
  • image: the name of the final Docker application image.
  • environment: the list of environment variables.
  • ports: the list of ports to be mapped from the container to the host machine.
  • redis: the name of our Redis service.
  • image: the name of the image to use for the redis service.

NOTE: The service names app and redis are also the hostnames of the services (containers) we run, because docker-compose automatically creates a network and adds our containers to it, so every container can reach the others by their service names within that network. This is why we keep the host parameter in app.py set to redis.
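As a quick check of this name resolution, you could run a few lines of Python inside the app container (for example with docker-compose exec app python). This is only a minimal sketch; it assumes the default Redis port 6379 and that both services are up.

Python3

from redis import Redis

# Inside the "app" container, the redis service is reachable by its
# compose service name "redis" on the default Redis port 6379
# (assumed; the port is not set explicitly in the compose file above).
r = Redis(host="redis", port=6379, socket_timeout=5)
print(r.ping())  # prints True when the redis service is reachable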

Start the application

gfg_docker_compose/ $  docker-compose up --build

A successful docker-compose up output looks like this

--build is used to explicitly tell Compose to build the images before starting the application.

You can see the application working as below

Our docker application is working

Stop the complete application

gfg_docker_compose/ $  docker-compose down

A successful docker-compose down looks like this

Using the docker-compose tool, we can make the setup process for a multi-container Docker application much faster and easier than the usual way.


Last Updated : 31 Oct, 2022