1. Introduction
Modern cloud-native apps are composed of multiple services that interact with each other to form an application. These services are called microservices. You can deploy each microservice individually, for example with one Dockerfile per service, but deploying and managing services that way quickly becomes difficult. This is where Docker Compose helps: it lets you declaratively describe everything you need to deploy your services in a single configuration file.
You can keep this configuration file in version control alongside your code.
2. Brief history of Docker Compose
Fig, a project created by a startup called Orchard, was introduced in 2014 as a way to manage multi-container Docker applications. Fig allowed users to define their multi-container environment in a simple YAML file, which made managing linked containers easier.
The idea behind Fig was to give developers a streamlined way to define environments in one file and spin up all containers simultaneously. Docker recognized the importance of Fig’s functionality for their growing ecosystem. In October 2014, Docker acquired Orchard and subsequently renamed Fig to Docker Compose.
Docker Compose was integrated into the official Docker ecosystem and provided developers with a standardized way to manage multi-container applications.
3. Installation
Docker Compose ships as a plugin for the Docker CLI and is included with Docker Desktop, so you normally don't need to install it separately. Try the following command:
> docker compose version
Docker Compose version v2.20.2-desktop.1
If you get the version, then you are good to go.
4. The Compose configuration file
Docker Compose uses a YAML file to define microservice deployments. The default name for this file is compose.yaml, and compose.yml is also accepted (the older docker-compose.yml name still works for backwards compatibility). If you want to use a custom configuration file, you can point Compose at it with the -f flag. Here is an example of a Docker Compose configuration file:
version: "3" services: # Node.js application app: build: . ports: - "3000:3000" depends_on: - mongo environment: MONGO_URL: mongodb://mongo:27017/mydatabase volumes: - .:/usr/src/app - /usr/src/app/node_modules networks: - app-network # MongoDB service mongo: image: mongo:latest ports: - "27017:27017" volumes: - mongo-data:/data/db networks: - app-network volumes: mongo-data: networks: app-network: driver: bridge # Define a named volume for MongoDB data persistence volumes: mongo-data: # Custom network definition networks: app-network: driver: bridge
Let’s walk through this Compose file section by section.
1. Version
This specifies the Compose file format version. Version 3 was widely used for Docker Compose configurations and provides compatibility with Docker Swarm, supporting features like custom networks, volumes, and service dependencies. Note that the current Compose Specification (used by the docker compose plugin) treats the top-level version field as informational only, so it can also be omitted.
2. Services
The services section defines each containerized service in your application (i.e., the Node.js app and MongoDB database). Each service runs in its own container.
build:
This tells Docker Compose to build the image for the Node.js application using the Dockerfile located in the current directory (.).
ports:
The 3000:3000 mapping exposes port 3000 of the Node.js container as port 3000 on your host machine. This makes the app accessible at http://localhost:3000.
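Once the stack is running, you can quickly verify this from the host; this assumes the app responds to a plain GET /:

> curl http://localhost:3000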
depends_on:
This tells Compose to start the mongo service (MongoDB) before the Node.js app. Note that a plain depends_on only waits for the mongo container to be started, not for MongoDB to be ready to accept connections, so the app may still need to retry its first connection, or you can add a healthcheck as sketched below.
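A minimal sketch of that stricter setup, if you want it; the healthcheck command and timings below are illustrative assumptions and are not part of the original file:

services:
  app:
    depends_on:
      mongo:
        condition: service_healthy   # wait until the healthcheck below passes
  mongo:
    image: mongo:latest
    healthcheck:
      # illustrative check: ask the server to answer a ping via mongosh
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 5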
networks:
The app service is attached to a custom network called app-network, enabling communication with other services (like MongoDB) within that network.
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
Line 1: - .:/usr/src/app
What it does: Mounts the entire current directory (.) from your host into /usr/src/app inside the container.
Problem: This bind mount shadows everything inside /usr/src/app, including the node_modules folder that contains the dependencies installed during the npm install step in the Dockerfile.
Line 2: - /usr/src/app/node_modules
What it does: Creates a Docker-managed anonymous volume specifically for the node_modules directory inside the container. This ensures that the contents of /usr/src/app/node_modules are not hidden by the host directory when you mount . over it. The node_modules directory remains managed by Docker and preserves the dependencies installed inside the container.
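For context, the Dockerfile referenced above isn't shown in this tutorial; a minimal, hypothetical version for this Node.js service might look like the sketch below (the base image and start command are assumptions, chosen to match the docker compose top output shown later):

# hypothetical Dockerfile for the app service (illustrative only)
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install          # dependencies end up in /usr/src/app/node_modules
COPY . .
EXPOSE 3000
CMD ["npm", "start"]     # assumes package.json defines "start": "node app.js"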
environment:
Environment variables are passed into the container, such as the MongoDB URL. MONGO_URL is set to mongodb://mongo:27017/mydatabase, where mongo refers to the MongoDB service (since both services are on the same custom network).
mongo service:
image:
This uses the official MongoDB image from Docker Hub (mongo:latest), so there is no need to build a custom image for MongoDB.
ports:
The 27017:27017 mapping exposes MongoDB’s default port 27017 to your host machine, allowing external access to MongoDB if needed.
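If you have the mongosh client installed on your host, you could use this published port to connect directly (the database name matches the MONGO_URL used above):

> mongosh "mongodb://localhost:27017/mydatabase"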
volumes:
A volume mongo-data:/data/db is used to persist MongoDB data. This means that even if the container is stopped or removed, the data stored in MongoDB will persist in the volume.
networks:
The mongo service is also connected to the app-network, allowing the Node.js application to communicate with MongoDB via the service name mongo.
Volumes:
The volumes section defines named volumes used by the services. In this case, mongo-data is used for MongoDB to store its database files. Named volumes persist data across container restarts, so you don’t lose data if you bring the containers down.
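You can confirm the volume with the Docker CLI. The my-node-app_ prefix below assumes the project directory is named my-node-app, matching the outputs later in this tutorial; your prefix may differ:

> docker volume ls
> docker volume inspect my-node-app_mongo-data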
Networks:
networks:
  app-network:
    driver: bridge
The networks section defines a custom network called app-network. Both the Node.js app and MongoDB services are connected to this network.
The network uses the bridge driver, which is the default networking driver for Docker when working with a single host. This allows containers on the same host to communicate with each other via container names (e.g., the app can reach MongoDB by using mongo as the hostname).
The bridge network is appropriate for a single-host setup (e.g., local development). If you were using a multi-host environment (such as Docker Swarm), you might opt for the overlay network driver, which enables communication across multiple Docker hosts.
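As a rough sketch only (this tutorial's example stays on a single host), the network definition for a Swarm deployment might instead look like this:

networks:
  app-network:
    driver: overlay   # multi-host networking; requires Swarm mode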
Communication and Networking:
The Node.js application can connect to MongoDB by using the hostname mongo (which is the service name in the Compose file). This is possible because both services are connected to the app-network.
Containers on the same network can communicate with each other by referencing their service names (mongo and app).
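You can see this service-name resolution in action once the stack is running. The check below assumes the app image provides getent (Debian-based Node images do):

> docker compose exec app getent hosts mongo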
Execution of the File:
Once the docker-compose.yml file is set up like this, you can bring everything up with Docker Compose:
> docker compose up --build
This command will:
- Build the Node.js application image (if necessary).
- Start both the app and mongo containers.
- Automatically handle the networking between them.
docker compose up is the most common way to bring up a Compose app. It builds or pulls all required images, creates all required networks and volumes, and starts all required containers.
Typically, you’ll use the --detach flag to run the app in the background, as demonstrated below. Without it, Compose runs in the foreground and streams all container output directly to the terminal window (you can append & to the command to regain control of the terminal while it keeps running).
Compose names the newly built image as a combination of the project name and the resource name specified in the Compose file. The project name defaults to the name of the directory containing the Compose file. All resources created by Compose follow this naming convention.
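If you want a different prefix, you can override the project name with the -p flag; myproject below is just an illustrative name:

> docker compose -p myproject up --detach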
5. Managing applications with Docker Compose
To bring down the application, use docker compose down.
> docker compose down
[+] Running 3/3
 ✔ Container my-node-app-app-1       Removed    0.8s
 ✔ Container my-node-app-mongo-1     Removed    0.8s
 ✔ Network my-node-app_app-network   Removed    0.2s
Volumes are not deleted by default. This is because volumes are intended to be long-term persistent data stores and their lifecycles are entirely decoupled from application lifecycles. Running docker volume ls will prove the volume is still present on the system. Any images that were built or pulled as part of the docker compose up operation will also remain on the system.
Adding the --rmi all flag to the docker compose down command will delete all images built or pulled when starting the app.
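For example (named volumes are still left in place unless you also pass --volumes):

> docker compose down --rmi all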
To bring the application up again in the background, use the --detach flag.
> docker compose up --detach
Use docker compose top to list the processes running inside of each service (container).
> docker compose top
my-node-app-app-1
UID    PID    PPID   C    STIME   TTY   TIME       CMD
root   5064   5043   2    18:18   ?     00:00:00   npm
root   5126   5064   0    18:18   ?     00:00:00   sh -c node app.js
root   5127   5126   3    18:18   ?     00:00:00   node app.js

my-node-app-mongo-1
UID    PID    PPID   C    STIME   TTY   TIME       CMD
999    4985   4964   10   18:18   ?     00:00:02   mongod --bind_ip_all
5.1 Stopping the application
To stop the application without removing resources, use docker compose stop.
> docker compose stop
[+] Stopping 2/2
 ✔ Container my-node-app-app-1     Stopped    0.3s
 ✔ Container my-node-app-mongo-1   Stopped    0.7s
You can remove a stopped Compose application using docker compose rm. This command removes the stopped service containers, but it leaves the networks, volumes, and images in place. It also won't affect the application source code in your project's build context directory (files like app.js, package.json, Dockerfile, and compose.yaml).
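For example, to remove the stopped containers without being prompted for confirmation:

> docker compose rm --force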
To stop and delete the app in a single step, including any volumes and images used to start it, run:
> docker compose down --volumes --rmi all
It’s important to note that Compose creates networks and volumes before launching services. This approach makes sense because networks and volumes are foundational infrastructure components used by services (containers).
6. Conclusion
In this tutorial, we explored Docker Compose, a powerful tool that simplifies managing multi-container Docker applications. By using a single Compose file, you can define, configure, and launch multiple services together. We learned how to set up and configure services, run them with simple commands, and manage networking and volumes.
Docker Compose not only makes application orchestration easier but also streamlines development workflows by allowing you to define the entire stack in code. Whether you’re building simple microservices or complex applications, Docker Compose saves time and ensures consistency across environments. With this foundation, you can now explore more advanced features, such as scaling services, integrating with CI/CD pipelines, or using Compose in production environments.