06 Deployment with Docker in Jenkins: Key Points #
Basic Docker Operations #
In the previous practical chapter, we introduced how to deploy services into containers. Some readers may not be familiar with Docker, and most of the practical examples in the following articles are centered around containers. Therefore, it is necessary to learn the basic operations of Docker before introducing the practical content. Of course, if you are familiar with Docker operations, you can skip this section.
Docker has three core components: images, containers, and image repositories. Most Docker operations revolve around these three components. In the earlier “Basic Tools Installation” chapter, we already introduced enterprise-level image repositories, so this section focuses on image operations and container operations.
Basic Operations of Docker Images #
This section mainly covers the following topics:
- Get an image
- List images
- Remove an image
- Build an image
Getting Images #
The command to get an image from the Docker official registry is docker pull. The format of the command is:
docker pull [option] [Docker Registry address]/<repository name>:<tag>
You can check the specific parameter options by running docker pull --help. Now, let me explain the format of the image address and name.
The format of the Docker Registry address is usually:
<domain/IP>[:port]
The default address is the Docker Hub address.
As for the repository name, as shown in the syntax above, it is a two-part name:
<username>/<image name>
For Docker Hub, if a username is not provided, it defaults to “library,” which refers to the official images. For example:
$ docker pull ubuntu:14.04
14.04: Pulling from library/ubuntu
......
Digest: sha256:147913621d9cdea08853f6ba9116c2e27a3ceffecf3b492983ae97c3d643fbbe
Status: Downloaded newer image for ubuntu:14.04
In the above example, the image address is not specified, so it defaults to the “library” user on Docker Hub, and the image with tag 14.04 is pulled from the official library/ubuntu repository.
If you want to get an image from another image repository, such as Aliyun, you can use:
docker pull registry.cn-beijing.aliyuncs.com/daimler-jenkins/jenkins-slave-java
If you want to search for an official image, you can use the following command:
docker search <keyword>
This command will search for images based on the provided keyword. For example, if you want to search for “jenkins” images, it will list the available jenkins images, as shown below:
[root@glusterfs-160 ~]# docker search jenkins
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
jenkins Official Jenkins Docker image 4642 [OK]
jenkins/jenkins The leading open source automation server 1904
jenkinsci/blueocean https://jenkins.io/projects/blueocean 490
......
You can also search for the desired image by entering the keyword on the official hub.docker.com website. I won’t demonstrate it here, but feel free to try it out if you’re interested.
List Images #
To list the downloaded images, you can use the docker images command.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
redis latest 5f515359c7f8 5 days ago 183MB
nginx latest 05a60462f8ba 5 days ago 181MB
The above list includes the repository name, tag, image ID, creation time, and size. The image ID is a unique identifier for the image, and an image can have multiple tags.
If you look closely, you will notice that the size shown here differs from the image size shown on Docker Hub.
For example, the openjdk:latest image is 495 MB when pulled, but it is displayed as 244 MB on Docker Hub. This is because Docker Hub shows the compressed size: during download and upload, the image is transferred in compressed form, so the size displayed by Docker Hub reflects the network traffic we actually care about.
The docker images command displays the size of the image after it has been downloaded and expanded on the local machine. More precisely, it is the sum of the space occupied by the expanded layers, which is what matters when looking at local disk usage.
Another thing to note is that the total size of the images listed by the docker images command is not their actual disk consumption. Since Docker images use layered storage and layers can be inherited and reused, different images may share common layers if they are built from the same base image. Because Docker uses a Union FS, identical layers only need to be saved once, so the actual disk space occupied may be much smaller than the sum of the sizes in this list.
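To see how much disk space images actually consume after layer sharing, newer Docker versions (1.13+) provide the docker system df command; a sketch:

```shell
# Show actual disk usage of images, containers, and volumes;
# the SIZE column here accounts for shared layers only once,
# and RECLAIMABLE shows space not used by any container
docker system df
```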
Dangling Images
If the images listed by docker images have neither a repository name nor a tag, with both shown as <none>, like this:
<none> <none> 00285df0df87 5 days ago 342 MB
These images are called dangling images. Dangling images are often caused by the fact that when the images in the repository are updated and re-pulled, the tags of the old images are transferred to the new images. Another reason is that the build operation fails when building the image. Generally, dangling images have lost their value of existence and can be deleted at will.
You can use the following command to list all dangling images:
$ docker images -f dangling=true
And you can use the following command to delete them:
$ docker rmi $(docker images -q -f dangling=true)
Intermediate Layer Images
To speed up image construction and reuse resources, Docker uses intermediate layer images. So after using it for a certain period of time, you may see some intermediate layer images that the other images depend on.
Unlike the previous dangling images, many of these untagged images are intermediate layer images, which are images that other images depend on. These untagged images should not be deleted, otherwise, the upper-level images will fail due to the loss of dependencies. In fact, these images cannot be deleted by default. It is not necessary to delete these images because the same layers will only be stored once, and these images are the dependencies of other images. Therefore, listing them will not result in an additional copy being stored. In any case, you will still need them.
Once the images that depend on them are deleted, these dependent intermediate layer images will be deleted together.
By default, the docker images command only displays top-level images. If you want to display all images, including intermediate layer images, use the -a parameter.
$ docker images -a
Listing Partial Images
Without any arguments, the docker images command lists all top-level images. However, sometimes we only want to list certain images. The docker images command has several parameters to help with this.
- List images based on the repository name:
$ docker images ubuntu
- List a specific image, which means specifying the repository name and tag:
$ docker images ubuntu:16.04
- List images based on specified conditions. For example, list images that match on repository name and tag:
$ docker images --filter=reference='busy*:*libc'
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox uclibc e02e811dd08f 5 weeks ago 1.09 MB
busybox glibc 21c16b6787c6 5 weeks ago 4.19 MB
You can also use multiple filters:
$ docker images --filter=reference='busy*:uclibc' --filter=reference='busy*:glibc'
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox uclibc e02e811dd08f 5 weeks ago 1.09 MB
busybox glibc 21c16b6787c6 5 weeks ago 4.19 MB
The --filter option supports the following keywords:
- dangling (boolean, true or false): list dangling images.
- label (label=<key> or label=<key>=<value>): list images carrying the specified label.
- before (<image-name>[:<tag>], <image-id> or <image@digest>): list images created before the given image.
- since (<image-name>[:<tag>], <image-id> or <image@digest>): list images created after the given image.
- reference: list images whose reference matches the given pattern.
Display in a specific format
By default, docker images outputs a complete table, but sometimes we don’t need all that information. For example, when removing dangling images, we need docker images to list the IDs of all dangling images, and then pass them as arguments to the docker rmi command. In this case, we can use the -q option to list only the image IDs:
$ docker images -q
5f515359c7f8
If you want to organize the columns and get the desired parsing result, you can use the Go template syntax.
For example, the following command will directly list the image results and only include the image ID and repository name:
$ docker images --format "{{.ID}}:{{.Repository}}"
5f515359c7f8:redis
Or if you want to display it as a table with equally spaced columns, including a header row similar to the default format, but with custom-defined columns:
$ docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
IMAGE ID REPOSITORY TAG
5f515359c7f8 redis latest
The --format option supports the following placeholders:
- .ID: image ID
- .Repository: image repository name
- .Tag: image tag
- .Digest: image digest
- .CreatedSince: time elapsed since the image was created
- .CreatedAt: image creation time
- .Size: image size
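These placeholders can be combined freely. For instance, a sketch that lists each image as repo:tag together with its size:

```shell
# One line per image: repository:tag followed by its size
docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}"
```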
Delete Image #
To delete local images, you can use the docker image rm command in the following format:
$ docker image rm [options] <image1> [<image2>...]
Alternatively, you can use the following format:
$ docker rmi [OPTIONS] IMAGE [IMAGE...]
More help can be found by running docker rmi --help.
To find the images to delete, you can combine deletion with the docker images command: by using docker images -q, you can feed a batch of image IDs to docker rmi. For example, the command to delete dangling images is:
$ docker rmi $(docker images -q -f dangling=true)
For example, if you want to delete all images with the repository name “redis”:
$ docker rmi $(docker images -q redis)
By default, images used by running containers cannot be deleted. You need to stop the containers before performing the deletion, or use the -f parameter to force deletion.
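These steps can be combined; for example, to remove the redis image while containers created from it still exist (a sketch):

```shell
# Stop and remove every container created from the redis image...
docker stop $(docker ps -q --filter ancestor=redis)
docker rm $(docker ps -aq --filter ancestor=redis)
# ...then the image can be removed normally
docker rmi redis

# Alternatively, force removal in one step (untags the image even
# while containers still reference it)
docker rmi -f redis
```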
Build Image #
Customizing Images with Dockerfile #
To build an image, you need to specify building instructions, and Dockerfile is the text file that contains custom instructions and formatting for building images.
Dockerfile provides a series of unified resource configuration syntax, and users can customize the configuration and build their own images with these syntax commands.
Before learning how to build an image, let’s first understand some basic introductions and commands of Dockerfile.
This section mainly covers the following content:
- Docker build process
- Basic instructions in Dockerfile
- Customizing images using Dockerfile
The general process of building an image is as follows:
When the Docker Client receives user instructions, it parses the command line parameters and sends them to the Docker Server. Upon receiving the HTTP request, the Docker Server:
- Creates a temporary directory and extracts the file system specified by the context into that directory
- Reads and parses the Dockerfile
- Based on the parsed Dockerfile, it traverses all the instructions and distributes them to different modules for execution
- For each instruction, it creates a temporary container and executes the current instruction inside it; then, via commit, that container generates an image layer. The collection of all layers is the build result, and the image ID from the last commit becomes the final ID of the image.
Dockerfile Basic Instructions #
COPY
Copy files, format as follows:
COPY <source_path>... <target_path>
COPY ["<source_path1>",... "<target_path>"]
The COPY instruction has two formats, one similar to a command line, and one similar to a function call.
The COPY instruction copies files/directories from the <source_path> in the build context directory to the <target_path> in a new layer of the image.
For example:
COPY package.json /usr/src/app/
<source_path> can be multiple, and even include wildcards that follow the Go filepath.Match rules, such as:
COPY hom* /mydir/
COPY hom?.txt /mydir/
Note:
- <target_path> can be an absolute path within the container, as well as a relative path to the working directory (which can be specified using the WORKDIR instruction).
- <target_path> does not need to be created beforehand. If the directory does not exist, it will be created before copying the files.
In addition, it is important to note that when using the COPY instruction, various metadata of the source files will be preserved. This includes read, write, execute permissions, file modification times, and more. This feature is useful for customizing images, especially when building related files are managed using Git.
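If the default ownership is not what you want, newer Docker versions (17.09+) let COPY change it at copy time with the --chown flag; a sketch, assuming a Node base image:

```dockerfile
FROM node:alpine
WORKDIR /usr/src/app
# Copy the file and assign it to the node user and group in one step,
# instead of adding a separate RUN chown layer
COPY --chown=node:node package.json ./
```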
ADD Advanced File Copying
The ADD instruction has a format and nature similar to the COPY instruction, but it adds some additional functionality.
For example, the <source_path> can be a URL. In this case, the Docker engine will attempt to download the file at the specified URL and place it in the <target_path>. The permissions of the downloaded file will be automatically set to 600. If this is not the desired permission, an additional RUN layer is required to adjust the permissions.
Additionally, if the downloaded file is a compressed archive that needs to be unpacked, an extra RUN command is still required to decompress it. In such cases, it is more practical to use a RUN instruction with a tool like wget or curl to download the file, adjust permissions, decompress, and clean up unnecessary files in one step. For these reasons, the URL form of ADD is not very practical and is not recommended.
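The recommended RUN-based pattern could be sketched like this (the URL and paths are illustrative, and wget is assumed to be available in the base image):

```dockerfile
FROM debian:jessie
# Download, extract, fix permissions, and clean up in a single layer,
# so the archive itself never persists in the image
RUN wget -O /tmp/app.tar.gz "http://example.com/app.tar.gz" \
    && mkdir -p /opt/app \
    && tar -xzf /tmp/app.tar.gz -C /opt/app \
    && chmod -R 755 /opt/app \
    && rm /tmp/app.tar.gz
```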
If the <source_path> is a tar compressed file, the ADD instruction will automatically decompress it to the <target_path> if the compression format is gzip, bzip2, or xz.
In some cases, this automatic decompression feature is very useful, such as in the official ubuntu image:
FROM scratch
ADD ubuntu-xenial-core-cloudimg-amd64-root.tar.gz /...
However, in some cases, if we really want to copy a compressed file without decompressing it, the ADD command cannot be used. The most appropriate use case for ADD is in the mentioned cases that require automatic decompression.
Therefore, when choosing between the COPY and ADD instructions, this principle can be followed: use the COPY instruction for all file copies, and only use the ADD instruction when automatic decompression is required.
CMD Container Startup Command
Docker containers are essentially processes. Since they are processes, when starting a container, the program to run and its parameters need to be specified. The CMD instruction is used to specify the default command to start the main process of the container.
In terms of format, the exec format is generally recommended. This format is parsed as a JSON array, so it must use double quotes ("), not single quotes (').
When mentioning CMD, it is necessary to mention the issue of running applications in the foreground or background in the container.
Docker is not a virtual machine. Applications in containers should be executed in the foreground, unlike virtual machines or physical machines where background services are started using upstart/systemd. There is no concept of background services in containers.
CMD service nginx start will be interpreted as CMD [ "sh", "-c", "service nginx start" ], so the main process is actually sh. When the service nginx start command ends, sh also ends, and since sh is the main process, its exit naturally causes the container to exit.
The correct approach is to execute the nginx executable directly and require it to run in the foreground. For example:
CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT
The format of ENTRYPOINT is the same as the RUN instruction, divided into the exec format and the shell format. The purpose of ENTRYPOINT is the same as CMD, which is to specify the program and parameters to start the container.
When ENTRYPOINT is specified, the meaning of CMD changes. It is no longer the direct execution of its command, but rather passing the content of CMD as arguments to the ENTRYPOINT instruction. In other words, in practice, it becomes:
<ENTRYPOINT> "<CMD>"
In other words, it is equivalent to adding the new parameters to the command list that CMD executes inside the container (or in the Dockerfile), and then executing it.
Example
First, let’s take a look at the CMD command:
$ cat Dockerfile
FROM nginx
CMD ["echo","hello"]
$ docker build -t nginx:test .
$ docker run -it nginx:test
Running the container prints “hello”, after which it exits.
Explanation
- The rebuilt image added a CMD command (which will override the previously existing nginx daemon off command in the image), so it will print out the newly added command when run, and the container will automatically be closed after running (because the command has finished executing).
Running a container and passing a command:
docker run -it nginx:test world
Explanation
- The newly passed command “world” will override the echo hello command in CMD; but since no executable named world exists, an error will be thrown.
Modify Dockerfile, change CMD to ENTRYPOINT
$ cat dockerfile
FROM nginx
ENTRYPOINT ["echo","hello"]
$ docker build -t nginx:ent .
$ docker run -it nginx:ent
$ docker run -it nginx:ent world
From the execution result, we can see that “world” is passed as a parameter.
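For completeness, an ENTRYPOINT can itself be replaced at runtime with the --entrypoint flag; a sketch using the image built above:

```shell
# Ignore the echo entrypoint and run ls instead;
# "/etc/nginx" becomes the argument to ls
docker run -it --entrypoint /bin/ls nginx:ent /etc/nginx
```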
WORKDIR specifies the working directory
Format:
WORKDIR <working directory path>
The WORKDIR instruction is used to specify the working directory (or current directory). The current directory for subsequent layers is changed to the specified directory. If the directory does not exist, WORKDIR will create it for you.
In a shell, two consecutive commands run in the same process, so state changed by the first command (such as the current directory) directly affects the second. In a Dockerfile, however, two consecutive RUN instructions are executed in two separate containers, with completely different execution environments.
Therefore, if you need to change the working directory for subsequent layers, you should use the WORKDIR instruction.
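The pitfall can be sketched as follows — the cd in one RUN does not carry over to the next:

```dockerfile
FROM debian:jessie
# Each RUN starts in a fresh container, so this cd is forgotten immediately
RUN cd /tmp
# This prints /, not /tmp
RUN pwd
# WORKDIR is the way to persist the change for subsequent layers
WORKDIR /tmp
# Now this prints /tmp
RUN pwd
```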
Example:
FROM nginx
WORKDIR /home
RUN pwd
ENTRYPOINT ["pwd"]
During the build, the RUN pwd step prints /home, and running the container prints /home as well.
ENV sets environment variables
Format:
ENV <key> <value>
ENV <key1>=<value1> <key2>=<value2>...
This instruction sets environment variables that can be directly used by subsequent instructions, such as RUN, and by the running application.
ENV NODE_VERSION 7.2.0
RUN touch $NODE_VERSION-txt
CMD ["ls"]
The following instructions support environment variable expansion:
ADD, COPY, ENV, EXPOSE, LABEL, USER, WORKDIR, VOLUME, STOPSIGNAL, ONBUILD.
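For instance, expansion works in WORKDIR and COPY as well (a sketch; the variable name and paths are illustrative):

```dockerfile
FROM debian:jessie
ENV APP_HOME=/opt/myapp
# $APP_HOME is expanded by the builder in both instructions below
WORKDIR $APP_HOME
COPY app.conf $APP_HOME/
```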
ARG build arguments
Format:
ARG <parameter name>[=<default value>]
Build arguments and ENV both set environment variables. The difference is that environment variables set by ARG exist only in the build environment, not during container runtime. However, do not use ARG to store sensitive information such as passwords, because docker history can still show all the values.
The ARG instruction in the Dockerfile defines a parameter name and an optional default value. This default can be overridden in the docker build command using --build-arg <parameter name>=<value>.
In versions before 1.13, the parameter name passed with --build-arg had to be defined with ARG in the Dockerfile; if the corresponding parameter was not defined there, the build would exit with an error.
Starting from version 1.13, this strict requirement has been relaxed: instead of exiting with an error, a warning is displayed and the build continues. This helps when a CI system builds different Dockerfiles with the same build command, avoiding the need to adjust the command for each Dockerfile.
Here is an example:
$ cat Dockerfile
ARG full_name
ENV JAVA_APP_JAR $full_name
ENV AB_OFF true
ADD $JAVA_APP_JAR /deployments/
# Build
$ docker build -t image_name --build-arg full_name=full_name .
EXPOSE Declaring Ports
Format:
EXPOSE <port1> [<port2>...]
The EXPOSE instruction declares the ports on which a container will listen for connections at runtime. This is only a declaration and doesn’t automatically expose the ports when the application starts.
Writing this declaration in the Dockerfile has two benefits:
- It helps users of the image understand which ports the image’s service listens on, making it easier to configure port mappings.
- When using random port mapping at runtime (e.g., docker run -P), the EXPOSEd ports are automatically mapped to random host ports.
It’s important to distinguish EXPOSE from the -p hostPort:containerPort flag used at runtime. The -p flag publishes a container port to the host, allowing access from outside, whereas EXPOSE only declares which port(s) the container intends to use and does not map them on the host.
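The difference can be seen by running the same image both ways (a sketch, assuming the official nginx image, which EXPOSEs port 80):

```shell
# -P maps every EXPOSEd port to a random high host port
docker run -d -P --name web1 nginx
docker port web1          # e.g. 80/tcp -> 0.0.0.0:32768

# -p maps an explicit host port, regardless of EXPOSE
docker run -d -p 8080:80 --name web2 nginx
```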
VOLUME Defining Anonymous Volumes
Format:
VOLUME ["<path1>", "<path2>", ...]
VOLUME <path>
For example, VOLUME /data will automatically mount /data as an anonymous volume at runtime. Anything written to /data inside the container will not be stored in the container’s writable layer, helping keep the container stateless. This mount can be overridden at runtime. For example:
docker run -d -v /mydata:/data xxxx
In this command, the host directory /mydata is bind-mounted onto /data, replacing the anonymous volume defined in the Dockerfile.
Example:
$ cat Dockerfile
FROM nginx
WORKDIR /home
RUN pwd
VOLUME ["/data"]
$ docker build -t nginx:volume .
$ docker run -itd nginx:volume
By using docker inspect container_id, you can see where this volume is mapped on the host. Create a file in the volume, write some test content to it, then exit the container. You’ll find that the file still exists on the host.
USER Specifying the Current User
The USER instruction is similar to WORKDIR; they both modify the environment state and affect the subsequent layers. While WORKDIR changes the working directory, USER changes the identity of subsequent RUN, CMD, and ENTRYPOINT commands.
Like WORKDIR, USER only helps switch to the specified user. This user must already be created, otherwise the switch cannot be made.
RUN groupadd -r redis && useradd -r -g redis redis
USER redis
CMD [ "redis-server" ]
If a script is executed as root and wants to change identity during execution (e.g., run a service process as a different user), using su or sudo is not recommended as it requires complex configuration and often fails without a TTY.
# Create the redis user; gosu (assumed to be installed in the image) is used to switch users
RUN groupadd -r redis && useradd -r -g redis redis
# Start the service as the redis user; gosu execs the command,
# avoiding the TTY and signal problems of su/sudo
CMD [ "gosu", "redis", "redis-server" ]
That’s about it for the commonly used instructions in Dockerfile. Once you are familiar with these instructions, you can write Dockerfile files more quickly.
Using Dockerfile to Customize Images
Understanding Dockerfile and Building Images #
Once you have familiarized yourself with the common instructions in a Dockerfile, you can write your own Dockerfile and use the docker build command to build an image.
Let’s first take a look at the syntax for building an image:
$ docker build --help
Usage: docker build [OPTIONS] PATH | URL | -
The PATH or URL points to the context, which is the directory that contains the Dockerfile and any other resources needed to build the image.
To customize an image using a Dockerfile, you start from a base image and make modifications on top of it. For example, the following Dockerfile builds a Redis image on top of a Debian base image. The base image is specified with the FROM instruction, which is why FROM is a mandatory instruction and must be the first one in the Dockerfile.
Here’s a beginner’s example (for demonstration purposes only):
$ cat Dockerfile
FROM debian:jessie
RUN apt-get update
RUN apt-get install -y gcc libc6-dev make wget
RUN wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz"
RUN mkdir -p /usr/src/redis
RUN tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1
RUN make -C /usr/src/redis
RUN make -C /usr/src/redis install
Explanation:
- FROM specifies the base image and is a mandatory instruction.
- RUN executes command-line commands and is one of the most commonly used instructions.
Each instruction in a Dockerfile creates a layer, and RUN is no exception. Each RUN behaves the same way as building an image manually: it creates a new layer, executes the commands on top of it, and commits the changes of that layer to form a new image.
The above Dockerfile creates 7 layers unnecessarily. It includes many things that are not required at runtime, such as the build environment and updated software packages. As a result, it produces bloated images with many unnecessary layers, which increases deployment time and is prone to errors. This is a common mistake made by many Docker beginners. Union file systems, such as AUFS, have a maximum limit on the number of layers (42 or 127 depending on the version), so it is important to minimize the number of layers.
Therefore, the above Dockerfile can be modified as shown below to reduce the number of layers:
FROM debian:jessie
RUN apt-get update \
&& apt-get install -y gcc libc6-dev make wget \
&& wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz" \
&& mkdir -p /usr/src/redis \
&& tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 \
&& make -C /usr/src/redis \
&& make -C /usr/src/redis install
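The layer can be shrunk further by deleting the build tools and downloaded sources inside the same RUN, so they never persist in the committed layer; a sketch:

```dockerfile
FROM debian:jessie
# Build, then purge the compilers and sources in the same layer,
# so none of them remain in the final image
RUN apt-get update \
    && apt-get install -y gcc libc6-dev make wget \
    && wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz" \
    && mkdir -p /usr/src/redis \
    && tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 \
    && make -C /usr/src/redis \
    && make -C /usr/src/redis install \
    && rm -rf /var/lib/apt/lists/* \
    && rm redis.tar.gz \
    && rm -r /usr/src/redis \
    && apt-get purge -y --auto-remove gcc libc6-dev make wget
```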
Another thing to note is that the file does not have to be named “Dockerfile”. You can use the -f option to specify a different Dockerfile path, while the build context directory is still given as the last argument of docker build.
For example, if my Dockerfile is located in the /root/docker/ directory and there is an aa.json file in that directory, the Dockerfile would look like this:
FROM debian:jessie
RUN buildDeps='gcc libc6-dev make wget' \
&& mkdir -p /usr/src/redis
COPY ./aa.json /usr/src/redis
In this case, the build command would be:
docker build -t redis:v1 -f /root/docker/Dockerfile /root/docker/
If there are files or directories in the context that you do not want to include in the build, you can use a .dockerignore file, with a syntax similar to .gitignore, to exclude them from being sent to the Docker engine during the build.
echo ".git" > .dockerignore
Then, in the directory where the Dockerfile is located, you can build the image using the docker build
command mentioned above:
$ docker build -t redis:v3 .
Note: . represents the current directory (the directory where the Dockerfile is located), and it is also the context directory specified in the command.
That’s it for the information about working with images. Let’s now move on to working with containers.
Container Operations #
This section covers the following topics:
- Classification of Docker commands
- Starting a container
- Running a container in the background
- Viewing containers
- Entering a container
- Importing and exporting containers
- Deleting containers
Docker Command Categories #
Before diving into container operations, let’s take a look at some categories of Docker commands:
- Docker environment information: info, version
- Container lifecycle management: create, exec, kill, pause, restart, rm, run, start, stop, unpause
- Image repository commands: login, logout, pull, push, search
- Image management: build, images, import, load, rmi, save, tag, commit
- Container operations: attach, export, inspect, port, ps, rename, stats, top, wait, cp, diff, update
- Container resource management: volume, network
- System information and logging: events, history, logs (events prints a container’s real-time system events, history prints the history of a given image, and logs prints the running logs of the processes inside a container)
For more commands, please refer to the official documentation.
Starting a Container #
There are two ways to start a container: creating a new container based on an image and starting it, or restarting a container that is in a stopped state.
Command Syntax
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
To create and start a container, we mainly use the docker run command. For example, the following command outputs “Hello world” and then the container stops.
$ docker run ubuntu:14.04 /bin/echo 'Hello world'
Hello world
Note: After the container starts, it only outputs “Hello world” and then exits.
The following command starts a bash terminal, allowing user interaction.
$ sudo docker run -t -i -m 2G --cpu-shares 1536 ubuntu:14.04 /bin/bash
root@af8bae53bdd3:/#
Where:
- -t allocates a pseudo-TTY and attaches it to the container’s standard input.
- -i keeps the container’s standard input open.
- --cpu-shares (shorthand -c) sets the relative CPU share weight for the container.
- -m limits the container’s memory, using B, K, M, or G as units.
- -v mounts a volume; multiple -v parameters can be used to mount several volumes at once.
- -p publishes a container port to a host port, in the format hostPort:containerPort.
Please note that if the -d (detached) parameter is not used, the container will stop running once you exit the container’s bash terminal.
Starting and Stopping a Container
You can use the docker start/stop/restart commands to directly start, stop, or restart an existing container. The core of a container is the application being executed, along with the resources that application needs to run; beyond that, there is nothing else. You can use the ps or top command in the pseudo-TTY to view process information.
$ docker start/stop/restart <container_Id/containerName>
Running Containers in the Background #
Most of the time, you may want Docker to run in the background instead of directly outputting the results of the executed command on the current host. This can be achieved by adding the -d parameter.
Note:
Whether the container keeps running for a long time depends on the command specified with docker run, not on the -d parameter. When started with -d, docker run prints the container’s unique ID, and the container can also be viewed with the docker ps command.
To retrieve the output information of a container, use the following command.
docker logs -f [container ID or container NAMES]
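Putting this together, a container that produces continuous output can be started in the background and followed with docker logs (a sketch; the container name is illustrative):

```shell
# Start a container that prints a line every second, detached (-d)
docker run -d --name hello-loop ubuntu:14.04 \
    /bin/sh -c "while true; do echo hello world; sleep 1; done"

# Stream its output; Ctrl+C stops following, the container keeps running
docker logs -f hello-loop
```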
Stopping a Background-running Container
You can use docker container stop to terminate a running container. When the application specified in the container ends, the container terminates automatically as well. If a container runs only a terminal, it terminates as soon as the user exits the terminal with the exit command or Ctrl+d.
Terminated containers can be listed using the docker container ls -a or docker ps -a command.
For example:
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba267838cc1b ubuntu:14.04 "/bin/bash" 30 minutes ago Exited (0) About a minute ago trusting_newton
98e5efa7d997 training/webapp:latest "python app.py" About an hour ago Exited (0) 34 minutes ago backstabbing_pike
Containers in the terminated state can be restarted using the docker start command. The docker restart command terminates a running container and then restarts it.
Viewing Containers #
When a container is up and running, how can we view it? We can use the docker ps command, which lists running containers.
First, let’s take a look at the parameters for the ps command:
$ docker ps -h
Flag shorthand -h has been deprecated, please use --help
Usage: docker ps [OPTIONS]
List containers
Options:
-a, --all Show all containers (default shows just running)
-f, --filter filter Filter output based on conditions provided
--format string Pretty-print containers using a Go template
-n, --last int Show n last created containers (includes all states) (default -1)
-l, --latest Show the latest created container (includes all states)
--no-trunc Don't truncate output
-q, --quiet Only display container IDs
-s, --size Display total file sizes
The -a
option is used to show all containers, regardless of their status (running or exited).
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
336dded76f59 gitlab/gitlab-ee "/assets/wrapper" 2 months ago Up 2 months (healthy) 0.0.0.0:80->80/tcp, 0.0.0.0:2222->22/tcp, 0.0.0.0:2443->443/tcp gitlab
854e0ae79353 goharbor/harbor-jobservice:v1.9.1 "/harbor/harbor_jobs…" 4 months ago Exited (128) 4 months ago harbor-jobservice
.....
The -f
option is used to filter containers based on certain conditions. For example, we can filter containers by name:
$ docker ps --filter "name=nostalgic"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
715ebfcee040 busybox "top" 3 seconds ago Up 1 second i_am_nostalgic
9b6247364a03 busybox "top" 7 minutes ago Up 7 minutes nostalgic_stallman
We can also filter containers based on the exit code:
$ docker ps -a --filter 'exited=0'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea09c3c82f6e registry:latest /srv/run.sh 2 weeks ago Exited (0) 2 weeks ago 127.0.0.1:5000->5000/tcp desperate_leakey
$ docker ps -a --filter 'exited=137'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b3e1c0ed5bfe ubuntu:latest "sleep 1000" 12 seconds ago Exited (137) 5 seconds ago grave_kowalevski
We can also filter containers based on their status. Available status options include created
, restarting
, running
, removing
, paused
, exited
, and dead
.
$ docker ps --filter status=running
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
715ebfcee040 busybox "top" 16 minutes ago Up 16 minutes i_am_nostalgic
$ docker ps --filter status=paused
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
673394ef1d4c busybox "top" About an hour ago Up About an hour (Paused)
The --format
option formats the output using a Go template, and it can be combined with -f
to retrieve specific information about the filtered containers. For example, to retrieve only the container ID and name:
$ docker ps -a --filter 'exited=0' --format "table {{.ID}}\t{{.Names}}"
CONTAINER ID NAMES
0f15dba71e5c runner-Zm2_mVvw-project-2-concurrent-0-cache-c33bcaa1fd2c77edfc3893b41966cea8
5e3cff545d1f runner-Zm2_mVvw-project-2-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70
The supported keywords for the --format
parameter are as follows:
- .ID : Container ID
- .Image : Image ID
- .Command : Command
- .CreatedAt : Creation time
- .RunningFor : Running time
- .Ports : Exposed ports
- .Status : Container status
- .Size : Disk usage
- .Names : Container name
- .Label : Special container labels
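For instance, a sketch combining several of these keywords (the rows shown are hypothetical; output will vary with your containers):

```shell
# Show name, status and disk usage for all containers as a table:
$ docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Size}}"
NAMES               STATUS                 SIZE
gitlab              Up 2 months (healthy)  2.5MB (virtual 1.87GB)
harbor-jobservice   Exited (128) 4 months ago   0B (virtual 141MB)
```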
That’s all about viewing containers.
Entering a Container #
When using the -d parameter, the container will start in the background. Sometimes it is necessary to enter the container for operations, and there are many methods available, including using the docker attach/exec
command or the nsenter
tool.
exec command
$ sudo docker exec -it 775c7c9ee1e1 /bin/bash
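A couple of common variations (the container ID here is hypothetical):

```shell
# Run a single command inside the container without attaching a shell:
$ docker exec 775c7c9ee1e1 ls /tmp

# Open an interactive shell inside the container as a specific user:
$ docker exec -it -u root 775c7c9ee1e1 /bin/bash
```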
Alternatively, download .bashrc_docker
and add its contents to .bashrc
:
$ cat .bashrc_docker
alias docker-pid="sudo docker inspect --format '{{.State.Pid}}'"
alias docker-ip="sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}'"
#the implementation refs from https://github.com/jpetazzo/nsenter/blob/master/docker-enter
function docker-enter() {
#if [ -e $(dirname "$0")/nsenter ]; then
#Change for centos bash running
if [ -e "$(dirname "$0")/nsenter" ]; then
# with boot2docker, nsenter is not in the PATH but it is in the same folder
NSENTER=$(dirname "$0")/nsenter
else
# if nsenter is already installed and on the PATH, use that one
NSENTER=$(which nsenter)
#NSENTER=nsenter
fi
[ -z "$NSENTER" ] && echo "WARN Cannot find nsenter" && return
if [ -z "$1" ]; then
echo "Usage: `basename "$0"` CONTAINER [COMMAND [ARG]...]"
echo ""
echo "Enters the Docker CONTAINER and executes the specified COMMAND."
echo "If COMMAND is not specified, runs an interactive shell in CONTAINER."
else
PID=$(sudo docker inspect --format "{{.State.Pid}}" "$1")
if [ -z "$PID" ]; then
echo "WARN Cannot find the given container"
return
fi
shift
OPTS="--target $PID --mount --uts --ipc --net --pid"
if [ -z "$1" ]; then
# No command given.
# Use su to clear all host environment variables except for TERM,
# initialize the environment variables HOME, SHELL, USER, LOGNAME, PATH,
# and start a login shell.
#sudo $NSENTER "$OPTS" su - root
sudo $NSENTER --target $PID --mount --uts --ipc --net --pid su - root
else
# Use env to clear all host environment variables.
sudo $NSENTER --target $PID --mount --uts --ipc --net --pid env -i "$@"
fi
fi
}
# Place the above content in the .bashrc file
$ source ~/.bashrc
This file defines many convenient commands for using Docker, such as docker-pid
which can obtain the PID of a specific container and docker-enter
which can enter the container or execute commands directly within the container.
$ docker-enter c0c00b21f8f8
root@c0c00b21f8f8:~# ls
root@c0c00b21f8f8:~# pwd
/root
root@c0c00b21f8f8:~# exit
logout
$ docker-pid c0c00b21f8f8
11975
Deleting Containers #
You can use the docker rm
command to delete a container in a terminated state. For example,
$ sudo docker rm trusting_newton   # trusting_newton is the container name
trusting_newton
If you want to delete a running container, you can add the -f
parameter. Docker will send a SIGKILL signal to the container.
Cleaning up All Containers in a Terminated State #
You can use the docker ps -a
command to view all created containers, including those in a terminated state. If there are too many containers to delete one by one, you can use docker rm $(docker ps -a -q)
to clean up all of them.
Note: This command will actually attempt to delete all containers, including those that are still running. However, as mentioned above, docker rm
does not delete running containers by default.
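If you prefer to be explicit about what gets removed, you can combine docker rm with a status filter, as sketched below; this only matches containers whose state is exited, so running containers are never touched:

```shell
# -q prints only container IDs; the filter restricts the list to
# containers in the exited state.
$ docker rm $(docker ps -aq --filter status=exited)
```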
Deleting All Terminated Containers #
$ docker container prune # Delete all stopped containers
Add -f to skip the confirmation prompt; add --filter "until=24h" to delete only containers created more than 24 hours ago.
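For example, the two options can be combined:

```shell
# Remove all stopped containers without the confirmation prompt,
# but only those created more than 24 hours ago:
$ docker container prune -f --filter "until=24h"
```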
That concludes the introduction to docker containers.