Bonus: docker-compose, a Container Orchestration Tool for Single-Host Environments #
Hello, I am Chrono.
We have reached this point in our course, and by now you should have a good understanding of Kubernetes.
As the operating system of the cloud-native era, Kubernetes originated from Docker but goes beyond Docker. It controls hundreds or even thousands of computing nodes through its master/node architecture, and uses YAML language to define various API objects to orchestrate and schedule containers, achieving management of modern applications.
However, have you ever felt that there is something missing between Docker and Kubernetes?
While Kubernetes is indeed a powerful container orchestration platform, that power comes with added complexity and cost. Setting aside its dozens of API objects with different purposes, merely getting a small Kubernetes cluster up and running takes considerable effort. Yet sometimes all we want is to quickly start a group of containers for simple development and testing, without paying the operational cost of components like the apiserver, scheduler, and etcd.
Clearly, in these simple task scenarios, Kubernetes can be a bit “heavy”. Even the “toy-like” minikube or kind have relatively high requirements for the computer and consume a significant amount of computational resources, which can be seen as “overkill”.
Is there a tool that is lightweight and easy to use like Docker, while also having container orchestration capabilities like Kubernetes?
Today, I will introduce docker-compose, which precisely meets these requirements. It is a lightweight container orchestration tool for single-host environments and fills the gap between Docker and Kubernetes.
What is docker-compose #
Let’s start with the birth of Docker.
After Docker popularized container technology, numerous extensions and enhancements emerged in the Docker ecosystem. Among them, there was a notable project called “Fig”.
Fig introduced the concept of “container orchestration” to Docker. It used YAML to define container startup parameters, the order of execution, and dependencies. This eliminated the need for users to deal with the lengthy Docker command-line interface and showcased the power of a “declarative” approach.
Docker soon realized the value of this small tool. In July 2014, it acquired Orchard, the company behind Fig, and the tool was subsequently integrated into the Docker ecosystem and renamed “docker-compose”.
From this brief history, we can see that although both docker-compose and Kubernetes are container orchestration technologies that use YAML, they have completely different backgrounds. Docker-compose follows Docker’s technical path, so it’s not surprising that there are differences in design concepts and usage methods.
The role of docker-compose itself is to manage and run multiple Docker containers. Obviously, it doesn’t have the same grand goals as Kubernetes. It is designed to make it easier for users to use Docker. Therefore, its learning difficulty is relatively low, and it is easy to get started. Many concepts directly correspond to Docker commands.
However, this can sometimes be confusing. After all, docker-compose and Kubernetes both belong to the container orchestration field, and inconsistent usage can lead to cognitive conflicts and confusion. Considering this, we should find a balance when learning docker-compose. It should be sufficient for our needs without delving too deep. Otherwise, it may have a negative impact on learning Kubernetes.
How to Use Docker Compose #
The installation of Docker Compose is straightforward. It provides various binary executable files on GitHub (https://github.com/docker/compose) that can be downloaded directly and support operating systems such as Windows, macOS, Linux, as well as hardware architectures like x86_64 and arm64.
Here is an example of the shell commands for installing it on Linux, using version 2.6.1 (the latest at the time of writing):
# intel x86_64
sudo curl -SL https://github.com/docker/compose/releases/download/v2.6.1/docker-compose-linux-x86_64 \
-o /usr/local/bin/docker-compose
# apple m1
sudo curl -SL https://github.com/docker/compose/releases/download/v2.6.1/docker-compose-linux-aarch64 \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
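Since the two download commands above differ only in the architecture suffix, a small sketch can build the right URL automatically. This assumes `uname -m` prints `x86_64` or `aarch64`, matching the names of the release assets on the GitHub page:

```shell
# Build the download URL from the machine's architecture.
# Release assets are named docker-compose-linux-<arch>.
VERSION=v2.6.1
ARCH=$(uname -m)   # x86_64 on Intel/AMD, aarch64 on 64-bit ARM
URL="https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-linux-${ARCH}"
echo "$URL"
# sudo curl -SL "$URL" -o /usr/local/bin/docker-compose
# sudo chmod +x /usr/local/bin/docker-compose
```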
Once the installation is complete, we can check the version number with `docker-compose version`, which works the same way as `docker version`:
docker-compose version
Next, we need to write a YAML file to manage Docker containers. Let’s use the private image repository mentioned in [Lesson 7] as an example.
The core concept of managing containers in Docker Compose is the “service”. Note that although it shares a similar name with the Kubernetes `Service`, it is a completely different entity. In Docker Compose, a “service” is a containerized application, typically a background service, and YAML is used to define the parameters of these containers and the relationships between them.
If we insist on making a comparison with Kubernetes, the API object that is most similar to “service” would be the container within a Pod, as they both manage container runtime. However, Docker Compose’s “service” also incorporates some features from Service and Deployment.
Below is the YAML file for the private image repository called Registry, with the key field being “services”. I have also listed the corresponding Docker commands:
docker run -d -p 5000:5000 registry
services:
  registry:
    image: registry
    container_name: registry
    restart: always
    ports:
      - 5000:5000
If we compare it to Kubernetes, we can see that it closely resembles Pod definitions. “services” can be considered as Pods, and the “service” within it is equivalent to “spec.containers”:
apiVersion: v1
kind: Pod
metadata:
  name: ngx-pod
spec:
  restartPolicy: Always
  containers:
  - image: nginx:alpine
    name: ngx
    ports:
    - containerPort: 80
For example, declaring the image with `image` and the ports with `ports` is easy to understand. The only difference lies in the usage: port mapping still follows Docker’s `host:container` syntax.
Since the field definitions of Docker Compose are thoroughly explained on their official website (https://docs.docker.com/compose/compose-file/), I won’t delve into further explanation here. You can refer to it on your own.
It is worth noting that in Docker Compose, each “service” has a unique name, which also serves as the container’s network identifier, somewhat like `Service` domain names in Kubernetes.
Great! Now we can start the application using the command `docker-compose up -d`, with the `-f` parameter to specify our YAML file (without `-f`, docker-compose looks for a file named `docker-compose.yml` in the current directory), much like `kubectl apply`:
docker-compose -f reg-compose.yml up -d
Because Docker Compose ultimately calls Docker under the hood, the containers it starts can also be seen with `docker ps`:
However, `docker-compose ps` shows more information:
docker-compose -f reg-compose.yml ps
Next, let’s change the tag of the Nginx image and upload it to the private repository for testing:
docker tag nginx:alpine 127.0.0.1:5000/nginx:v1
docker push 127.0.0.1:5000/nginx:v1
To confirm that the upload succeeded, we can examine the tag list with curl (`127.1` is shorthand for `127.0.0.1`):
curl 127.1:5000/v2/nginx/tags/list
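If the push succeeded, the registry’s v2 tags endpoint replies with a small JSON document. The sketch below shows how to check it in the shell; the response body is a hypothetical illustration of what this endpoint returns for the example above, not captured output:

```shell
# Hypothetical response from GET /v2/nginx/tags/list after the push above.
RESPONSE='{"name":"nginx","tags":["v1"]}'

# A plain grep is enough to confirm the tag is present.
echo "$RESPONSE" | grep -q '"v1"' && echo "tag v1 found"
```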
To stop the application, we use the command `docker-compose down`:
docker-compose -f reg-compose.yml down
With this example, we have successfully transformed the “imperative” Docker operations into “declarative” Docker Compose operations. The usage is very similar to Kubernetes, but without the high running costs associated with Kubernetes. In a single-machine environment, it can be considered the most suitable solution.
Building a WordPress Website Using docker-compose #
However, a private image repository Registry only contains a single container, which doesn’t demonstrate the advantages of using docker-compose for container orchestration. Let’s build a WordPress website using docker-compose to further explore its benefits.
The architecture diagram is the same as in Lesson 7:
The first step is to define the MariaDB database. The format for environment variables resembles a Kubernetes ConfigMap, but the field used here is `environment`, and the variables are defined inline without any additional objects:
services:
  mariadb:
    image: mariadb:10
    container_name: mariadb
    restart: always
    environment:
      MARIADB_DATABASE: db
      MARIADB_USER: wp
      MARIADB_PASSWORD: 123
      MARIADB_ROOT_PASSWORD: 123
Comparing this to the Docker command used to start MariaDB in Lesson 7, we can see that the docker-compose YAML file and the command line are very similar and can be almost directly used:
docker run -d --rm \
--env MARIADB_DATABASE=db \
--env MARIADB_USER=wp \
--env MARIADB_PASSWORD=123 \
--env MARIADB_ROOT_PASSWORD=123 \
mariadb:10
The second step is to define the WordPress website, which also uses `environment` to set environment variables:
services:
  ...
  wordpress:
    image: wordpress:5
    container_name: wordpress
    restart: always
    environment:
      WORDPRESS_DB_HOST: mariadb # Pay attention to this, it is the network identifier of the database
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: 123
      WORDPRESS_DB_NAME: db
    depends_on:
      - mariadb
Note that because docker-compose automatically uses each service’s name as the container’s network identifier, there is no need to specify an IP address when connecting to the database: in the `WORDPRESS_DB_HOST` field, just use the service name `mariadb`. This is a convenience that plain Docker commands lack, and it is similar to the Kubernetes domain-name mechanism.
Another important detail in the WordPress definition is the `depends_on` field, which sets dependencies between containers and determines the order in which they start. This is a very convenient feature when orchestrating applications composed of multiple containers.
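One caveat worth knowing: plain `depends_on` only controls start-up order; it does not wait for the database to actually be ready to accept connections. The Compose specification also supports a `healthcheck` combined with `condition: service_healthy` for this. The sketch below illustrates the idea (the `mysqladmin ping` command is an assumption about what is available inside the MariaDB image):

```yaml
services:
  mariadb:
    # ... image, environment, etc. as above ...
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 5
  wordpress:
    # ... image, environment, etc. as above ...
    depends_on:
      mariadb:
        condition: service_healthy   # wait until the healthcheck passes
```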
The third step is to define the Nginx reverse proxy. Unfortunately, docker-compose does not have concepts like ConfigMap or Secret. In order to load the configuration, an external file must be used, and it cannot be integrated directly into the YAML.
The Nginx configuration file is similar to the one used in Lesson 7. Likewise, in the `proxy_pass` directive there is no need to write an IP address; just use the name of the WordPress service:
server {
    listen 80;
    default_type text/html;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_pass http://wordpress; # Pay attention to this, it is the network identifier of the website
    }
}
Then we can define Nginx in the YAML file, using the `volumes` field to load the configuration file. The field name is the same as in Kubernetes, but the value follows Docker’s `host-path:container-path` bind-mount format:
services:
  ...
  nginx:
    image: nginx:alpine
    container_name: nginx
    hostname: nginx
    restart: always
    ports:
      - 80:80
    volumes:
      - ./wp.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - wordpress
At this point, all three services are defined. To start the website, use `docker-compose up -d`, remembering to specify the YAML file with the `-f` parameter:
docker-compose -f wp-compose.yml up -d
After starting, use `docker-compose ps` to check the status:
To verify that the network identifiers of these containers work correctly, use `docker-compose exec` to enter a container:
docker-compose -f wp-compose.yml exec -it nginx sh
From the screenshot, you can see that pinging both the `mariadb` and `wordpress` services works, so the network is functional. Note, however, that the subnet is “172.22.0.0/16”, different from Docker’s default “172.17.0.0/16”, because docker-compose creates a separate network for each application by default.
Lastly, open a browser and enter either “127.0.0.1” on your local machine or the IP address of your virtual machine (e.g., “http://192.168.10.208”). You should see the familiar WordPress interface:
Summary #
Alright, today we temporarily left Kubernetes and took a look back at the container orchestration tool docker-compose in the Docker world.
Compared to Kubernetes, docker-compose has its own limitations, such as being only suitable for single-machine use, having relatively simple orchestration functions, and lacking operational monitoring capabilities, etc. However, it also has advantages: it is lightweight, requires low hardware resources, and can run as long as Docker is available.
So, although Kubernetes has become the dominant player in the field of container orchestration, docker-compose still has its own space for survival. There are many projects on GitHub that provide docker-compose YAML files for quick prototyping or testing environments, and one typical example is CNCF Harbor.
For our daily work, docker-compose is also very useful. If an application is simple, with only a few containers, running it on Kubernetes feels like “using a sledgehammer to crack a nut,” while plain Docker commands or shell scripts are not very convenient either. This is where docker-compose comes in: it lets us abandon the “imperative” approach entirely and operate containers “declaratively”.
Let me briefly summarize today’s content:
- docker-compose originated from Fig and is a tool specifically designed for orchestrating Docker containers.
- docker-compose also uses YAML to describe containers, but its syntax and semantics are closer to Docker command line.
- The key concept in docker-compose YAML is “service,” which represents a containerized application.
- docker-compose commands are similar to Docker’s; the commonly used ones are `up`, `ps`, and `down`, which start, view, and stop applications.
In addition, docker-compose has many other useful features, such as volumes, custom networks, privileged processes, etc. If you are interested, you can learn more from the official documentation.
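For instance, two of those features, named volumes and custom networks, would fit the WordPress example naturally. Here is a sketch; the names `db-data` and `backend` are made up for illustration:

```yaml
services:
  mariadb:
    image: mariadb:10
    volumes:
      - db-data:/var/lib/mysql   # named volume: data survives container re-creation
    networks:
      - backend

volumes:
  db-data:     # declared at the top level, managed by docker-compose

networks:
  backend:     # a custom network instead of the default one
```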
Feel free to leave a comment and share your thoughts on learning. We will return to the main topic in the next class. See you in the next class.