Open these slides: https://tampa.bretfisher.com/
Get a server: I provisioned one for each of you. Ask me for the IPs.
Access your server over SSH: ssh docker@w.x.y.z
or WebSSH (http://w.x.y.z:8080)
Let me know if you can't get in, we have multiple backup options!
Note
This is hands-on. You'll want to do most of these commands with me.
These slides are take-home.
Hello! I'm Bret Fisher (@bretfisher), a fan of 🐳 🏖 🥃 👾 ✈️ 🐶
I'm a DevOps Consultant+Trainer (300k students), OSS maintainer, and Docker Captain.
👉 Watch: My weekly cloud native DevOps live show with guests. Join us on Thursdays!
👉 Listen: That show turns into a podcast called "DevOps and Docker Talk."
👉 Read: You can get my weekly updates in my Newsletter.
👉 Chat: Join 12k DevOps pros in my Discord server: devops.fan
logistics-bret.md
We recommend that you open these slides in your browser:
This is a public URL, you're welcome to share it with others!
Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
Type a slide number + ENTER to go to that slide
The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
The sources of these slides are available in a public GitHub repository:
These slides are written in Markdown
You are welcome to share, re-use, re-mix these slides
Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.
Slides will remain online so you can review them later if needed
(let's say we'll keep them online at least 1 year, how about that?)
You can download the slides using that URL:
https://tampa.bretfisher.com/slides.zip
(then open the file docker.yml.html)
You can also generate a PDF of the slides
(by printing them to a file; but be patient with your browser!)
This slide has a little magnifying glass in the top left corner
This magnifying glass indicates slides that provide extra details
Feel free to skip them if:
you are in a hurry
you are new to this and want to avoid cognitive overload
you want only the most essential information
You can review these slides another time if you want, they'll be waiting for you ☺
(auto-generated TOC)
Docker Inc makes many tools to build, deploy, and run containers.
They invented the modern way to run "containers". (Previous approaches included jails, chroot, zones, etc.)
Their original 2013 ideas are now an industry standard called OCI.
Those standards are now used by hundreds of tools in the cloud native ecosystem.
The three innovations are:
I wrote a big article about this with lots of details. Bookmark it for later!
Docker Inc was quickly formed after Docker the tool/project was created in 2013.
Previously, Docker Inc. focused on Dev and Ops tooling (2013-2019).
In 2019 they sold two-thirds of the company, and their Enterprise-focused software, to Mirantis.
Now they are (finally) successful focusing on Dev tooling.
Docker Subscription includes:
We're only using Docker open source today!
Docker Hub and Docker Desktop are totally free while learning and for personal use.
What is `docker` the tool?
"Installing Docker" really means "installing the Docker Engine and CLI".
The Docker Engine is a daemon (a service running in the background).
This daemon manages containers, the same way that a hypervisor manages VMs.
We interact with the Docker Engine by using the Docker CLI.
The Docker CLI and the Docker Engine communicate through an API.
There are many other programs and client libraries which use that API.
This is a common misconception.
Docker controls many containers, but only on a single server.
Kubernetes (K8s) was invented to control Docker across many servers.
Kubernetes doesn't run containers itself; it only controls a runtime.
For years, Docker (`dockerd`) was the most popular container runtime.
Then Docker Inc. created `containerd` as a lightweight runtime for servers.
Today `dockerd` and `containerd` make up most of the runtime market. Others include CRI-O and Podman (Red Hat).
`dockerd` or `podman` = best for humans working locally. `containerd` or `cri-o` = best for K8s.
Docker Swarm "mode" is still a thing
And might be having a renaissance
(I have a course on that too!)
Todays_Agenda.md
Our training environment
(automatically generated title slide)
If you are attending a tutorial or workshop:
If you are doing or re-doing this course on your own, you can:
install Docker locally (as explained in the chapter "Installing Docker")
install Docker on e.g. a cloud VM
use https://www.play-with-docker.com/ to instantly get a training environment
containers/Training_Environment.md
Once logged in, make sure that you can run a basic Docker command:
$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun 6 23:02:46 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun 6 23:00:51 2022
  OS/Arch:          linux/amd64
  Experimental:     false
...
If this doesn't work, raise your hand so that an instructor can assist you!
:EN:Container concepts :FR:Premier contact avec les conteneurs
:EN:- What's a container engine? :FR:- Qu'est-ce qu'un container engine ?
Our first containers
(automatically generated title slide)
At the end of this lesson, you will have:
Seen Docker in action.
Started your first containers.
containers/First_Containers.md
In your Docker environment, just run the following command:
$ docker run busybox echo hello world
hello world
(If your Docker install is brand new, you will also see a few extra lines, corresponding to the download of the `busybox` image.)
containers/First_Containers.md
We used one of the smallest, simplest images available: `busybox`.
`busybox` is typically used in embedded systems (phones, routers...)
We ran a single process and echo'ed `hello world`.
containers/First_Containers.md
Let's run a more exciting container:
$ docker run -it ubuntu
root@04c0bb0a6c07:/#
This is a brand new container.
It runs a bare-bones, no-frills `ubuntu` system.
`-it` is shorthand for `-i -t`.
`-i` tells Docker to connect us to the container's stdin.
`-t` tells Docker that we want a pseudo-terminal.
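To see what `-t` actually changes, here is a plain-shell sketch (no Docker required): programs detect whether their stdin is a terminal, and behave differently when it isn't.

```shell
# [ -t 0 ] is true when file descriptor 0 (stdin) is a terminal.
# Redirecting stdin from /dev/null simulates running without a terminal:
sh -c 'if [ -t 0 ]; then echo "stdin is a terminal"; else echo "stdin is not a terminal"; fi' < /dev/null
# prints: stdin is not a terminal
```

This is why some programs refuse to run interactively, or change their output, when Docker doesn't allocate a pseudo-terminal.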
containers/First_Containers.md
Try to run `figlet` in our container.
root@04c0bb0a6c07:/# figlet hello
bash: figlet: command not found
Alright, we need to install it.
containers/First_Containers.md
We want `figlet`, so let's install it:
root@04c0bb0a6c07:/# apt-get update
...
Fetched 1514 kB in 14s (103 kB/s)
Reading package lists... Done
root@04c0bb0a6c07:/# apt-get install figlet
Reading package lists... Done
...
One minute later, `figlet` is installed!
containers/First_Containers.md
The `figlet` program takes a message as parameter.
root@04c0bb0a6c07:/# figlet hello
 _          _ _
| |__   ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | |  __/ | | (_) |
|_| |_|\___|_|_|\___/
Beautiful! 😍
containers/First_Containers.md
Let's check how many packages are installed there.
root@04c0bb0a6c07:/# dpkg -l | wc -l
97
`dpkg -l` lists the packages installed in our container.
`wc -l` counts them.
How many packages do we have on our host?
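The pipe itself is plain UNIX: `wc -l` just counts the lines it receives on stdin, whatever produced them. A minimal sketch:

```shell
# Three lines in, the count comes out (dpkg -l works the same way,
# printing one line per package plus a short header):
printf 'pkg-a\npkg-b\npkg-c\n' | wc -l
# prints: 3
```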
containers/First_Containers.md
Exit the container by logging out of the shell, like you would usually do.
(E.g. with `^D` or `exit`)
root@04c0bb0a6c07:/# exit
Now, try to:
run `dpkg -l | wc -l`. How many packages are installed?
run `figlet`. Does that work?
containers/First_Containers.md
We ran an `ubuntu` container on a Linux/Windows/macOS host.
They have different, independent packages.
Installing something on the host doesn't expose it to the container.
And vice-versa.
Even if both the host and the container have the same Linux distro!
We can run any container on any host.
(One exception: Windows containers can only run on Windows hosts; at least for now.)
containers/First_Containers.md
Our container is now in a stopped state.
It still exists on disk, but all compute resources have been freed up.
We will see later how to get back to that container.
containers/First_Containers.md
What if we start a new container, and try to run `figlet` again?
$ docker run -it ubuntu
root@b13c164401fb:/# figlet
bash: figlet: command not found
We started a brand new container.
The basic Ubuntu image was used, and `figlet` is not here.
containers/First_Containers.md
Can we reuse that container that we took time to customize?
We can, but that's not the default workflow with Docker.
What's the default workflow, then?
Always start with a fresh container.
If we need something installed in our container, build a custom image.
That seems complicated!
We'll see that it's actually pretty easy!
And what's the point?
This puts a strong emphasis on automation and repeatability. Let's see why ...
containers/First_Containers.md
In the "pets vs. cattle" metaphor, there are two kinds of servers.
Pets:
have distinctive names and unique configurations
when they have an outage, we do everything we can to fix them
Cattle:
have generic names (e.g. with numbers) and generic configuration
configuration is enforced by configuration management, golden images ...
when they have an outage, we can replace them immediately with a new server
What's the connection with Docker and containers?
containers/First_Containers.md
When we use local VMs (with e.g. VirtualBox or VMware), our workflow looks like this:
create VM from base template (Ubuntu, CentOS...)
install packages, set up environment
work on project
when done, shut down VM
next time we need to work on project, restart VM as we left it
if we need to tweak the environment, we do it live
Over time, the VM configuration evolves, diverges.
We don't have a clean, reliable, deterministic way to provision that environment.
containers/First_Containers.md
With Docker, the workflow looks like this:
create container image with our dev environment
run container with that image
work on project
when done, shut down container
next time we need to work on project, start a new container
if we need to tweak the environment, we create a new image
We have a clear definition of our environment, and can share it reliably with others.
Let's see in the next chapters how to bake a custom image with `figlet`!
:EN:- Running our first container :FR:- Lancer nos premiers conteneurs
Background containers
(automatically generated title slide)
Our first containers were interactive.
We will now see how to:
containers/Background_Containers.md
We will run a small custom container.
This container just displays the time every second.
$ docker run jpetazzo/clock
Fri Feb 20 00:28:53 UTC 2015
Fri Feb 20 00:28:54 UTC 2015
Fri Feb 20 00:28:55 UTC 2015
...
This container will run forever; to stop it, press `^C`.
Docker automatically downloaded the image `jpetazzo/clock`.
This image was created by the user `jpetazzo`.
containers/Background_Containers.md
When `^C` doesn't work...
Sometimes, `^C` won't be enough to stop a container.
Why? And how can we stop the container in that case?
containers/Background_Containers.md
When we press `^C`, `SIGINT` gets sent to the container, which means:
`SIGINT` gets sent to PID 1 (default case)
`SIGINT` gets sent to foreground processes when running with `-ti`
But there is a special case for PID 1: it ignores all signals!
except `SIGKILL` and `SIGSTOP`
except signals handled explicitly
TL,DR: there are many circumstances when `^C` won't stop the container.
containers/Background_Containers.md
PID 1 has some extra responsibilities:
it starts (directly or indirectly) every other process
when a process exits, its children are "reparented" under PID 1
When PID 1 exits, everything stops:
on a "regular" machine, it causes a kernel panic
in a container, it kills all the processes
We don't want PID 1 to stop accidentally
That's why it has these extra protections
containers/Background_Containers.md
Start another terminal and forget about them (for now!)
We'll shortly learn about `docker kill`.
containers/Background_Containers.md
Containers can be started in the background, with the `-d` flag (daemon mode):
$ docker run -d jpetazzo/clock
47d677dcfba4277c6cc68fcaa51f932b544cab1a187c853b7d0caf4e8debe5ad
containers/Background_Containers.md
How can we check that our container is still running?
With `docker ps`, which, just like the UNIX `ps` command, lists running processes.
$ docker ps
CONTAINER ID  IMAGE           ...  CREATED        STATUS       ...
47d677dcfba4  jpetazzo/clock  ...  2 minutes ago  Up 2 minutes ...
Docker tells us:
the (truncated) ID of our container
the image used to start the container
that it has been running (`Up`) for a couple of minutes
containers/Background_Containers.md
Let's start two more containers.
$ docker run -d jpetazzo/clock
57ad9bdfc06bb4407c47220cf59ce21585dce9a1298d7a67488359aeaea8ae2a
$ docker run -d jpetazzo/clock
068cc994ffd0190bbe025ba74e4c0771a5d8f14734af772ddee8dc1aaf20567d
Check that `docker ps` correctly reports all 3 containers.
containers/Background_Containers.md
When many containers are already running, it can be useful to see only the last container that was started.
This can be achieved with the `-l` ("Last") flag:
$ docker ps -l
CONTAINER ID  IMAGE           ...  CREATED        STATUS       ...
068cc994ffd0  jpetazzo/clock  ...  2 minutes ago  Up 2 minutes ...
containers/Background_Containers.md
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
If we want to list only the IDs of our containers (without the other columns or the header line), we can use the `-q` ("Quiet", "Quick") flag:
$ docker ps -q
068cc994ffd0
57ad9bdfc06b
47d677dcfba4
containers/Background_Containers.md
We can combine `-l` and `-q` to see only the ID of the last container started:
$ docker ps -lq
068cc994ffd0
At first glance, it looks like this would be particularly useful in scripts.
However, if we want to start a container and get its ID in a reliable way, it is better to use `docker run -d`, which we will cover in a bit.
(Using `docker ps -lq` is prone to race conditions: what happens if someone else, or another program or script, starts another container just before we run `docker ps -lq`?)
containers/Background_Containers.md
We told you that Docker was logging the container output.
Let's see that now.
$ docker logs 068
Fri Feb 20 00:39:52 UTC 2015
Fri Feb 20 00:39:53 UTC 2015
...
The `logs` command will output the entire logs of the container.
containers/Background_Containers.md
To avoid being spammed with eleventy pages of output, we can use the `--tail` option:
$ docker logs --tail 3 068
Fri Feb 20 00:55:35 UTC 2015
Fri Feb 20 00:55:36 UTC 2015
Fri Feb 20 00:55:37 UTC 2015
containers/Background_Containers.md
Just like with the standard UNIX command `tail -f`, we can follow the logs of our container:
$ docker logs --tail 1 --follow 068
Fri Feb 20 00:57:12 UTC 2015
Fri Feb 20 00:57:13 UTC 2015
^C
Press `^C` to exit.
containers/Background_Containers.md
There are two ways we can terminate our detached container:
with the `docker kill` command
with the `docker stop` command
The first one stops the container immediately, by using the `KILL` signal.
The second one is more graceful. It sends a `TERM` signal, and after 10 seconds, if the container has not stopped, it sends `KILL`.
Reminder: the `KILL` signal cannot be intercepted, and will forcibly terminate the container.
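The TERM-then-KILL dance can be sketched in plain shell (no Docker needed). Here a subshell plays the role of a well-behaved container process that traps `TERM` and exits cleanly, which is why `docker stop` normally doesn't need the full 10 seconds:

```shell
#!/bin/sh
# A process that installs a TERM handler, like a graceful container entrypoint:
sh -c 'trap "exit 0" TERM; while true; do sleep 1; done' &
pid=$!
sleep 1                # give the subshell time to install its trap
kill -TERM "$pid"      # this is what `docker stop` sends first
wait "$pid"
echo "exit status after TERM: $?"
# prints: exit status after TERM: 0
```

A process that ignores `TERM` would still be running after this, and that is exactly when `docker stop` falls back to `KILL`.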
containers/Background_Containers.md
Let's stop one of those containers:
$ docker stop 47d6
47d6
This will take 10 seconds:
containers/Background_Containers.md
Let's be less patient with the two other containers:
$ docker kill 068 57ad
068
57ad
The `stop` and `kill` commands can take multiple container IDs.
Those containers will be terminated immediately (without the 10-second delay).
Let's check that our containers don't show up anymore:
$ docker ps
containers/Background_Containers.md
We can also see stopped containers, with the `-a` (`--all`) option.
$ docker ps -a
CONTAINER ID  IMAGE           ...  CREATED      STATUS
068cc994ffd0  jpetazzo/clock  ...  21 min. ago  Exited (137) 3 min. ago
57ad9bdfc06b  jpetazzo/clock  ...  21 min. ago  Exited (137) 3 min. ago
47d677dcfba4  jpetazzo/clock  ...  23 min. ago  Exited (137) 3 min. ago
5c1dfd4d81f1  jpetazzo/clock  ...  40 min. ago  Exited (0) 40 min. ago
b13c164401fb  ubuntu          ...  55 min. ago  Exited (130) 53 min. ago
:EN:- Foreground and background containers :FR:- Exécution interactive ou en arrière-plan
Understanding Docker images
(automatically generated title slide)
In this section, we will explain:
What is an image.
What is a layer.
The various image namespaces.
How to search and download images.
Image tags and when to use them.
Image = files + metadata
These files form the root filesystem of our container.
The metadata can indicate a number of things, e.g.:
Images are made of layers, conceptually stacked on top of each other.
Each layer can add, change, and remove files and/or metadata.
Images can share layers to optimize disk usage, transfer times, and memory use.
Each of the following items will correspond to one layer:
(Note: app config is generally added by orchestration facilities.)
An image is a read-only filesystem.
A container is an encapsulated set of processes,
running in a read-write copy of that filesystem.
To optimize container boot time, copy-on-write is used instead of regular copy.
`docker run` starts a container from a given image.
If an image is read-only, how do we change it?
We don't.
We create a new container from that image.
Then we make changes to that container.
When we are satisfied with those changes, we transform them into a new layer.
A new image is created by stacking the new layer on top of the old image.
The only way to create an image is by "freezing" a container.
The only way to create a container is by instantiating an image.
Help! How do we get the first image, then?
There is a special empty image called `scratch`.
The `docker import` command loads a tarball into Docker.
Note: you will probably never have to do this yourself.
`docker commit`
`docker build` (used 99% of the time)
We will explain both methods in a moment.
There are three namespaces:
Official images
e.g. `ubuntu`, `busybox`...
User (and organizations) images
e.g. bretfisher/clock
Self-hosted images
e.g. registry.example.com:5000/my-private/image
Let's explain each of them.
The root namespace is for official images.
They are gated by Docker Inc.
They are generally authored and maintained by third parties.
Those images include:
Small, "swiss-army-knife" images like busybox.
Distro images to be used as bases for your builds, like ubuntu, fedora...
Ready-to-use components and services, like redis, postgresql...
Over 150 at this point!
The user namespace holds images for Docker Hub users and organizations.
For example:
`bretfisher/clock`
The Docker Hub user is: `bretfisher`
The image name is: `clock`
This namespace holds images which are not hosted on Docker Hub, but on third party registries.
They contain the hostname (or IP address), and optionally the port, of the registry server.
For example:
localhost:5000/wordpress
`localhost:5000` is the host and port of the registry
`wordpress` is the name of the image
Other examples:
quay.io/coreos/etcd
gcr.io/google-containers/hugo
Images can be stored:
You can use the Docker client to download (pull) or upload (push) images.
To be more accurate: you can use the Docker client to tell a Docker Engine to push and pull images to and from a registry.
Let's look at what images are on our host now.
$ docker images
REPOSITORY       TAG     IMAGE ID      CREATED        SIZE
fedora           latest  ddd5c9c1d0f2  3 days ago     204.7 MB
centos           latest  d0e7f81ca65c  3 days ago     196.6 MB
ubuntu           latest  07c86167cdc4  4 days ago     188 MB
redis            latest  4f5f397d4b7c  5 days ago     177.6 MB
postgres         latest  afe2b5e1859b  5 days ago     264.5 MB
alpine           latest  70c557e50ed6  5 days ago     4.798 MB
debian           latest  f50f9524513f  6 days ago     125.1 MB
busybox          latest  3240943c9ea3  2 weeks ago    1.114 MB
training/namer   latest  902673acc741  9 months ago   289.3 MB
jpetazzo/clock   latest  12068b93616f  12 months ago  2.433 MB
There are two ways to download images.
Explicitly, with `docker pull`.
Implicitly, when executing `docker run` and the image is not found locally.
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
As seen previously, images are made up of layers.
Docker has downloaded all the necessary layers.
In this example, `:jessie` indicates which exact version of Debian we would like.
It is a version tag.
Images can have tags.
Tags define image versions or variants.
`docker pull ubuntu` will refer to `ubuntu:latest`.
The `:latest` tag is generally updated often.
Don't specify tags:
Do specify tags:
This is similar to what we would do with `pip install`, `npm install`, etc.
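For example, pinning the base image in a Dockerfile (the `22.04` tag here is just an illustration, not a version used elsewhere in this workshop):

```dockerfile
# Unpinned: "latest" moves over time, so two builds may get different bases
FROM ubuntu

# Pinned: every build of this Dockerfile starts from the same base
FROM ubuntu:22.04
```

(The two `FROM` lines are shown side by side for comparison; a real Dockerfile would use one or the other.)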
An image can support multiple architectures
More precisely, a specific tag in a given repository can have either:
a single manifest referencing an image for a single architecture
a manifest list (or fat manifest) referencing multiple images
In a manifest list, each image is identified by a combination of:
`os` (linux, windows)
`architecture` (amd64, arm, arm64...)
optional fields like `variant` (for arm and arm64) and `os.version` (for windows)
The Docker Engine will pull "native" images when available
(images matching its own os/architecture/variant)
We can ask for a specific image platform with --platform
The Docker Engine can run non-native images thanks to QEMU+binfmt
(automatically on Docker Desktop; with a bit of setup on Linux)
We've learned how to:
:EN:Building images :EN:- Containers, images, and layers :EN:- Image addresses and tags :EN:- Finding and transferring images
:FR:Construire des images :FR:- La différence entre un conteneur et une image :FR:- La notion de layer partagé entre images
Building Docker images with a Dockerfile
(automatically generated title slide)
We will build a container image automatically, with a `Dockerfile`.
At the end of this lesson, you will be able to:
Write a `Dockerfile`.
Build an image from a `Dockerfile`.
containers/Building_Images_With_Dockerfiles.md
`Dockerfile` overview
A `Dockerfile` is a build recipe for a Docker image.
It contains a series of instructions telling Docker how an image is constructed.
The `docker build` command builds an image from a `Dockerfile`.
containers/Building_Images_With_Dockerfiles.md
Writing our first `Dockerfile`
Our Dockerfile must be in a new, empty directory.
Create a directory to hold our `Dockerfile`:
$ mkdir myimage
Create a `Dockerfile` inside this directory:
$ cd myimage
$ vim Dockerfile
Of course, you can use any other editor of your choice.
containers/Building_Images_With_Dockerfiles.md
FROM ubuntu
RUN apt-get update
RUN apt-get install figlet
`FROM` indicates the base image for our build.
Each `RUN` line will be executed by Docker during the build.
Our `RUN` commands must be non-interactive.
(No input can be provided to Docker during the build.)
In many cases, we will add the `-y` flag to `apt-get`.
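With `-y` added, the recipe from before builds without ever waiting for input (a sketch of the same Dockerfile):

```dockerfile
FROM ubuntu
RUN apt-get update
# -y answers "yes" automatically; the build has no stdin to prompt on
RUN apt-get install -y figlet
```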
containers/Building_Images_With_Dockerfiles.md
Save our file, then execute:
$ docker build -t figlet .
`-t` indicates the tag to apply to the image.
`.` indicates the location of the build context.
We will talk more about the build context later.
To keep things simple for now: this is the directory where our Dockerfile is located.
containers/Building_Images_With_Dockerfiles.md
It depends on whether we're using BuildKit or not!
If there are lots of blue lines and the first line looks like this:
[+] Building 1.8s (4/6)
... then we're using BuildKit.
If the output is mostly black-and-white and the first line looks like this:
Sending build context to Docker daemon 2.048kB
... then we're using the "classic" or "old-style" builder.
containers/Building_Images_With_Dockerfiles.md
Classic builder:
copies the whole "build context" to the Docker Engine
linear (processes lines one after the other)
requires a full Docker Engine
BuildKit:
only transfers parts of the "build context" when needed
will parallelize operations (when possible)
can run in non-privileged containers (e.g. on Kubernetes)
containers/Building_Images_With_Dockerfiles.md
The output of `docker build` looks like this:
docker build -t figlet .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu
 ---> f975c5035748
Step 2/3 : RUN apt-get update
 ---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
 ---> eb8d9b561b37
Step 3/3 : RUN apt-get install figlet
 ---> Running in c29230d70f9b
(...output of the RUN command...)
Removing intermediate container c29230d70f9b
 ---> 0dfd7a253f21
Successfully built 0dfd7a253f21
Successfully tagged figlet:latest
(The output of the `RUN` commands has been omitted.)
containers/Building_Images_With_Dockerfiles.md
Sending build context to Docker daemon 2.048 kB
The build context is the `.` directory given to `docker build`.
It is sent (as an archive) by the Docker client to the Docker daemon.
This makes it possible to use a remote machine to build using local files.
Be careful (or patient) if that directory is big and your link is slow.
You can speed up the process with a `.dockerignore` file.
It tells Docker to ignore specific files in the directory.
Only ignore files that you won't need in the build context!
containers/Building_Images_With_Dockerfiles.md
Step 2/3 : RUN apt-get update
 ---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
 ---> eb8d9b561b37
A container (`e01b294dbffd`) is created from the base image.
The `RUN` command is executed in this container.
The container is committed into an image (`eb8d9b561b37`).
The build container (`e01b294dbffd`) is removed.
The output of this step will be the base image for the next one.
containers/Building_Images_With_Dockerfiles.md
[+] Building 7.9s (7/7) FINISHED
 => [internal] load build definition from Dockerfile                      0.0s
 => => transferring dockerfile: 98B                                       0.0s
 => [internal] load .dockerignore                                         0.0s
 => => transferring context: 2B                                           0.0s
 => [internal] load metadata for docker.io/library/ubuntu:latest          1.2s
 => [1/3] FROM docker.io/library/ubuntu@sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386  3.2s
 => => resolve docker.io/library/ubuntu@sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386  0.0s
 => => sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386da88eb681d93 1.20kB / 1.20kB  0.0s
 => => sha256:1de4c5e2d8954bf5fa9855f8b4c9d3c3b97d1d380efe19f60f3e4107a66f5cae 943B / 943B  0.0s
 => => sha256:6a98cbe39225dadebcaa04e21dbe5900ad604739b07a9fa351dd10a6ebad4c1b 3.31kB / 3.31kB  0.0s
 => => sha256:80bc30679ac1fd798f3241208c14accd6a364cb8a6224d1127dfb1577d10554f 27.14MB / 27.14MB  2.3s
 => => sha256:9bf18fab4cfbf479fa9f8409ad47e2702c63241304c2cdd4c33f2a1633c5f85e 850B / 850B  0.5s
 => => sha256:5979309c983a2adeff352538937475cf961d49c34194fa2aab142effe19ed9c1 189B / 189B  0.4s
 => => extracting sha256:80bc30679ac1fd798f3241208c14accd6a364cb8a6224d1127dfb1577d10554f  0.7s
 => => extracting sha256:9bf18fab4cfbf479fa9f8409ad47e2702c63241304c2cdd4c33f2a1633c5f85e  0.0s
 => => extracting sha256:5979309c983a2adeff352538937475cf961d49c34194fa2aab142effe19ed9c1  0.0s
 => [2/3] RUN apt-get update                                              2.5s
 => [3/3] RUN apt-get install figlet                                      0.9s
 => exporting to image                                                    0.1s
 => => exporting layers                                                   0.1s
 => => writing image sha256:3b8aee7b444ab775975dfba691a72d8ac24af2756e0a024e056e3858d5a23f7c  0.0s
 => => naming to docker.io/library/figlet                                 0.0s
containers/Building_Images_With_Dockerfiles.md
BuildKit transfers the Dockerfile and the build context (these are the first two `[internal]` stages).
Then it executes the steps defined in the Dockerfile (`[1/3]`, `[2/3]`, `[3/3]`).
Finally, it exports the result of the build (image definition + collection of layers).
containers/Building_Images_With_Dockerfiles.md
When running BuildKit in e.g. a CI pipeline, its output will be different.
We can see the same output format by using `--progress=plain`.
containers/Building_Images_With_Dockerfiles.md
If you run the same build again, it will be instantaneous. Why?
After each build step, Docker takes a snapshot of the resulting image.
Before executing a step, Docker checks if it has already built the same sequence.
Docker uses the exact strings defined in your Dockerfile, so:
`RUN apt-get install figlet cowsay` is different from `RUN apt-get install cowsay figlet`
`RUN apt-get update` is not re-executed when the mirrors are updated
You can force a rebuild with `docker build --no-cache ...`.
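One common consequence (a general Docker practice, not something specific to these slides): chaining `update` and `install` in a single `RUN` keeps them in one cache entry, so changing the package list always re-runs `apt-get update` too:

```dockerfile
FROM ubuntu
# One layer: editing the package list below also invalidates the update step
RUN apt-get update && apt-get install -y figlet
```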
containers/Building_Images_With_Dockerfiles.md
The resulting image is not different from the one produced manually.
$ docker run -ti figlet
root@91f3c974c9a1:/# figlet hello
 _          _ _
| |__   ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | |  __/ | | (_) |
|_| |_|\___|_|_|\___/
Yay! 🎉
containers/Building_Images_With_Dockerfiles.md
The `history` command lists all the layers composing an image.
For each layer, it shows its creation time, size, and creation command.
When an image was built with a Dockerfile, each layer corresponds to a line of the Dockerfile.
$ docker history figlet
IMAGE         CREATED            CREATED BY                     SIZE
f9e8f1642759  About an hour ago  /bin/sh -c apt-get install fi  1.627 MB
7257c37726a1  About an hour ago  /bin/sh -c apt-get update      21.58 MB
07c86167cdc4  4 days ago         /bin/sh -c #(nop) CMD ["/bin   0 B
<missing>     4 days ago         /bin/sh -c sed -i 's/^#\s*\(   1.895 kB
<missing>     4 days ago         /bin/sh -c echo '#!/bin/sh'    194.5 kB
<missing>     4 days ago         /bin/sh -c #(nop) ADD file:b   187.8 MB
containers/Building_Images_With_Dockerfiles.md
Why `sh -c`?

On UNIX, to start a new program, we need two system calls:

- `fork()`, to create a new child process;
- `execve()`, to replace the new child process with the program to run.

Conceptually, `execve()` works like this:

`execve(program, [list, of, arguments])`

When we run a command, e.g. `ls -l /tmp`, something needs to parse the command (i.e. split the program and its arguments into a list).

The shell is usually doing that.

(It also takes care of expanding environment variables and special things like `~`.)
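We can watch the shell do this parsing outside of Docker, too. A quick local sketch (plain `sh`, no Docker needed):

```shell
# sh -c takes ONE string; the shell splits it into a program + arguments:
sh -c 'echo one two three'
# prints: one two three

# The same words, already parsed into an argument list, run identically:
echo one two three
```

Either way, `echo` ends up receiving the argument list `[one, two, three]`; the only difference is who did the splitting.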
containers/Building_Images_With_Dockerfiles.md
Why `sh -c`? (continued)

When we do `RUN ls -l /tmp`, the Docker builder needs to parse the command.

Instead of implementing its own parser, it outsources the job to the shell.

That's why we see `sh -c ls -l /tmp` in that case.

But we can also do the parsing job ourselves. This means passing `RUN` a list of arguments.

This is called the exec syntax.
containers/Building_Images_With_Dockerfiles.md
Dockerfile commands that execute something can have two forms:

- plain string, or shell syntax: `RUN apt-get install figlet`
- JSON list, or exec syntax: `RUN ["apt-get", "install", "figlet"]`
We are going to change our Dockerfile to see how it affects the resulting image.
containers/Building_Images_With_Dockerfiles.md
Let's change our Dockerfile as follows!
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
```
Then build the new Dockerfile.
$ docker build -t figlet .
containers/Building_Images_With_Dockerfiles.md
Compare the new history:
```
$ docker history figlet
IMAGE         CREATED             CREATED BY                       SIZE
27954bb5faaf  10 seconds ago      apt-get install figlet           1.627 MB
7257c37726a1  About an hour ago   /bin/sh -c apt-get update        21.58 MB
07c86167cdc4  4 days ago          /bin/sh -c #(nop) CMD ["/bin     0 B
<missing>     4 days ago          /bin/sh -c sed -i 's/^#\s*\(     1.895 kB
<missing>     4 days ago          /bin/sh -c echo '#!/bin/sh'      194.5 kB
<missing>     4 days ago          /bin/sh -c #(nop) ADD file:b     187.8 MB
```
Exec syntax specifies an exact command to execute.

Shell syntax specifies a command to be wrapped within `/bin/sh -c "..."`.
containers/Building_Images_With_Dockerfiles.md
Shell syntax:

- creates an extra process (`/bin/sh -c ...`) to parse the string
- requires `/bin/sh` to exist in the container

Exec syntax:

- does not create an extra shell process
- does not require `/bin/sh` to exist in the container

containers/Building_Images_With_Dockerfiles.md
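One practical consequence of the two syntaxes: environment variables are only expanded when a shell is involved. A quick local sketch (no Docker required; `env` is used here simply to start a program without going through a shell):

```shell
# Shell syntax equivalent: /bin/sh parses the string, so $HOME is expanded:
sh -c 'echo $HOME'

# Exec syntax equivalent: the argument list is passed as-is, no expansion;
# this prints the literal string $HOME:
env echo '$HOME'
```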
CMD
and ENTRYPOINT
(automatically generated title slide)
In this lesson, we will learn about two important Dockerfile commands: `CMD` and `ENTRYPOINT`.
These commands allow us to set the default command to run in a container.
containers/Cmd_And_Entrypoint.md
When people run our container, we want to greet them with a nice hello message, and using a custom font.
For that, we will execute:
`figlet -f script hello`

- `-f script` tells figlet to use a fancy font.
- `hello` is the message that we want it to display.
containers/Cmd_And_Entrypoint.md
Adding `CMD` to our Dockerfile. Our new Dockerfile will look like this:
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
CMD figlet -f script hello
```
`CMD` defines a default command to run when none is given.

It can appear at any point in the file.

Each `CMD` will replace and override the previous one.

As a result, while you can have multiple `CMD` lines, only the last one takes effect, so having more than one is useless.
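For example, in this hypothetical Dockerfile, only the last `CMD` matters:

```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
# This CMD is overridden by the one below, and never runs:
CMD figlet -f standard hello
# Only this last CMD takes effect:
CMD figlet -f script hello
```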
containers/Cmd_And_Entrypoint.md
Let's build it:
```
$ docker build -t figlet .
...
Successfully built 042dff3b4a8d
Successfully tagged figlet:latest
```
And run it:
```
$ docker run figlet
 _          _    _        
| |        | |  | |       
| |     _  | |  | |  __   
|/ \   |/  |/   |/  /  \_ 
|   |_/|__/|__/|__/\__/   
```
containers/Cmd_And_Entrypoint.md
Overriding `CMD`

If we want to get a shell into our container (instead of running `figlet`), we just have to specify a different program to run:

```
$ docker run -it figlet bash
root@7ac86a641116:/#
```
We specified `bash`. It replaced the value of `CMD`.
containers/Cmd_And_Entrypoint.md
ENTRYPOINT
We want to be able to specify a different message on the command line,
while retaining figlet
and some default parameters.
In other words, we would like to be able to do this:
```
$ docker run figlet salut
 _ | | , __, | | _|_ / \_/ | |/ | | | \/ \_/|_/|__/ \_/|_/|_/
```
We will use the `ENTRYPOINT` verb in the Dockerfile.
containers/Cmd_And_Entrypoint.md
Adding `ENTRYPOINT` to our Dockerfile. Our new Dockerfile will look like this:
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
ENTRYPOINT ["figlet", "-f", "script"]
```
`ENTRYPOINT` defines a base command (and its parameters) for the container.

The command line arguments are appended to those parameters.

Like `CMD`, `ENTRYPOINT` can appear anywhere, and replaces the previous value.

Why did we use JSON syntax for our `ENTRYPOINT`?
containers/Cmd_And_Entrypoint.md
When `CMD` or `ENTRYPOINT` use string syntax, they get wrapped in `sh -c`.

To avoid this wrapping, we can use JSON syntax.

What if we used `ENTRYPOINT` with string syntax?

```
$ docker run figlet salut
```

This would run the following command in the `figlet` image:

```
sh -c "figlet -f script" salut
```
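We can check locally why the extra argument gets lost: anything after the `sh -c` string becomes a positional parameter (`$0`, `$1`, ...) of the inline script, not part of the command (plain `sh`, no Docker needed):

```shell
# "salut" becomes $0 of the inline script, and is silently ignored:
sh -c 'echo hello' salut
# prints: hello

# The inline script could access it explicitly, though:
sh -c 'echo "$0"' salut
# prints: salut
```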
containers/Cmd_And_Entrypoint.md
Let's build it:
```
$ docker build -t figlet .
...
Successfully built 36f588918d73
Successfully tagged figlet:latest
```
And run it:
```
$ docker run figlet salut
 _ | | , __, | | _|_ / \_/ | |/ | | | \/ \_/|_/|__/ \_/|_/|_/
```
containers/Cmd_And_Entrypoint.md
Using `CMD` and `ENTRYPOINT` together

What if we want to define a default message for our container?

Then we will use `ENTRYPOINT` and `CMD` together.

- `ENTRYPOINT` will define the base command for our container.
- `CMD` will define the default parameter(s) for this command.
- They both have to use JSON syntax.
containers/Cmd_And_Entrypoint.md
Using `CMD` and `ENTRYPOINT` together. Our new Dockerfile will look like this:
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
ENTRYPOINT ["figlet", "-f", "script"]
CMD ["hello world"]
```
`ENTRYPOINT` defines a base command (and its parameters) for the container.

If we don't specify extra command-line arguments when starting the container, the value of `CMD` is appended.

Otherwise, our extra command-line arguments are used instead of `CMD`.
containers/Cmd_And_Entrypoint.md
Let's build it:
```
$ docker build -t myfiglet .
...
Successfully built 6e0b6a048a07
Successfully tagged myfiglet:latest
```
Run it without parameters:
```
$ docker run myfiglet
 _ _ _ _ | | | | | | | | | | | _ | | | | __ __ ,_ | | __| |/ \ |/ |/ |/ / \_ | | |_/ \_/ | |/ / | | |_/|__/|__/|__/\__/ \/ \/ \__/ |_/|__/\_/|_/
```
containers/Cmd_And_Entrypoint.md
Now let's pass extra arguments to the image.

```
$ docker run myfiglet hola mundo
 _ _ | | | | | | | __ | | __, _ _ _ _ _ __| __ |/ \ / \_|/ / | / |/ |/ | | | / |/ | / | / \_| |_/\__/ |__/\_/|_/ | | |_/ \_/|_/ | |_/\_/|_/\__/
```

We overrode `CMD` but still used `ENTRYPOINT`.
containers/Cmd_And_Entrypoint.md
Overriding `ENTRYPOINT`

What if we want to run a shell in our container?

We cannot just do `docker run myfiglet bash` because that would just tell figlet to display the word "bash."

We use the `--entrypoint` parameter:

```
$ docker run -it --entrypoint bash myfiglet
root@6027e44e2955:/#
```
containers/Cmd_And_Entrypoint.md
`CMD` and `ENTRYPOINT` recap

- `docker run myimage` executes `ENTRYPOINT` + `CMD`
- `docker run myimage args` executes `ENTRYPOINT` + `args` (overriding `CMD`)
- `docker run --entrypoint prog myimage` executes `prog` (overriding both)
| Command | ENTRYPOINT | CMD | Result |
|---|---|---|---|
| `docker run figlet` | none | none | Use values from base image (`bash`) |
| `docker run figlet hola` | none | none | Error (executable `hola` not found) |
| `docker run figlet` | `figlet -f script` | none | `figlet -f script` |
| `docker run figlet hola` | `figlet -f script` | none | `figlet -f script hola` |
| `docker run figlet` | none | `figlet -f script` | `figlet -f script` |
| `docker run figlet hola` | none | `figlet -f script` | Error (executable `hola` not found) |
| `docker run figlet` | `figlet -f script` | `hello` | `figlet -f script hello` |
| `docker run figlet hola` | `figlet -f script` | `hello` | `figlet -f script hola` |
containers/Cmd_And_Entrypoint.md
`ENTRYPOINT` vs `CMD`

- `ENTRYPOINT` is great for "containerized binaries". Example: `docker run consul --help` (pretend that the `docker run` part isn't there!)
- `CMD` is great for images with multiple binaries. Example: `docker run busybox ifconfig` (it makes sense to indicate which program we want to run!)
Copying files during the build
(automatically generated title slide)
So far, we have installed things in our container images by downloading packages.
We can also copy files from the build context to the container that we are building.
Remember: the build context is the directory containing the Dockerfile.
In this chapter, we will learn a new Dockerfile keyword: `COPY`.
containers/Copying_Files_During_Build.md
We want to build a container that compiles a basic "Hello world" program in C.
Here is the program, hello.c
:
```c
#include <stdio.h>

int main() {
    puts("Hello, world!");
    return 0;
}
```
Let's create a new directory, and put this file in there.
Then we will write the Dockerfile.
containers/Copying_Files_During_Build.md
On Debian and Ubuntu, the package `build-essential` will get us a compiler.

When installing it, don't forget to specify the `-y` flag, otherwise the build will fail (since the build cannot be interactive).

Then we will use `COPY` to place the source file into the container.
```dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
CMD /hello
```
Create this Dockerfile.
containers/Copying_Files_During_Build.md
- Create `hello.c` and `Dockerfile` in the same directory.
- Run `docker build -t hello .` in this directory.
- Run `docker run hello`; you should see `Hello, world!`.
Success!
containers/Copying_Files_During_Build.md
`COPY` and the build cache

- Run the build again.
- Now, modify `hello.c` and run the build again.

Docker can cache steps involving `COPY`. Those steps will not be executed again if the files haven't been changed.
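This is why instruction order matters for build speed. A common (hypothetical) sketch: keep slow, rarely-changing steps above the `COPY`, so that editing source code doesn't invalidate them:

```dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
# Everything above stays cached when hello.c changes;
# the build restarts from this COPY step only:
COPY hello.c /
RUN make hello
CMD /hello
```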
containers/Copying_Files_During_Build.md
- We can `COPY` whole directories recursively.
- It is possible to do e.g. `COPY . .` (but it might require some extra precautions to avoid copying too much).
- In older Dockerfiles, you might see the `ADD` command; consider it deprecated (it is similar to `COPY` but can automatically extract archives).
- If we really wanted to compile C code in a container, we would:
  - place it in a different directory, with the `WORKDIR` instruction
  - even better, use the `gcc` official image
containers/Copying_Files_During_Build.md
`.dockerignore`

- We can create a file named `.dockerignore` (at the top level of the build context).
- It can contain file names and globs to ignore.
- They won't be sent to the builder (and won't end up in the resulting image).
- See the documentation for the little details (exceptions can be made with `!`, multiple directory levels with `**`...).
Exercise — writing Dockerfiles
(automatically generated title slide)
Let's write Dockerfiles for an existing application!
Check out the code repository
Read all the instructions
Write Dockerfiles
Build and test them individually
containers/Exercise_Dockerfile_Basic.md
Clone the repository available at:
https://github.com/jpetazzo/wordsmith
It should look like this:
```
├── LICENSE
├── README
├── db/
│   └── words.sql
├── web/
│   ├── dispatcher.go
│   └── static/
└── words/
    ├── pom.xml
    └── src/
```
containers/Exercise_Dockerfile_Basic.md
The repository contains instructions in English and French.
For now, we only care about the first part (about writing Dockerfiles).
Place each Dockerfile in its own directory, like this:
```
├── LICENSE
├── README
├── db/
│   ├── Dockerfile
│   └── words.sql
├── web/
│   ├── Dockerfile
│   ├── dispatcher.go
│   └── static/
└── words/
    ├── Dockerfile
    ├── pom.xml
    └── src/
```
containers/Exercise_Dockerfile_Basic.md
Build and run each Dockerfile individually.
For db
, we should be able to see some messages confirming that the data set
was loaded successfully (some INSERT
lines in the container output).
For web
and words
, we should be able to see some message looking like
"server started successfully".
That's all we care about for now!
Bonus question: make sure that each container stops correctly when hitting Ctrl-C.
Place the following Compose file at the root of the repository:
```yaml
version: "3"

services:
  db:
    build: db
  words:
    build: words
  web:
    build: web
    ports:
      - 8888:80
```
Test the whole app by bringing up the stack and connecting to port 8888.
Docker Networking, Docker Compose, & Workflows for Local Dev & Test

Remember to sign up for the Udemy courses (info on front page)
Do the example Dockerfile!
Dig into Bonus Sections
Container networking basics
(automatically generated title slide)
We will now run network services (accepting requests) in containers.
At the end of this section, you will be able to:
Run a network service in a container.
Connect to that network service.
Find a container's IP address.
containers/Container_Networking_Basics.md
We need something small, simple, easy to configure
(or, even better, that doesn't require any configuration at all)
Let's use the official NGINX image (named nginx
)
It runs a static web server listening on port 80
It serves a default "Welcome to nginx!" page
containers/Container_Networking_Basics.md
```
$ docker run -d -P nginx
66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e
```
- Docker will automatically pull the `nginx` image from the Docker Hub.
- `-d` / `--detach` tells Docker to run it in the background.
- `-P` / `--publish-all` tells Docker to publish all ports (publish = make them reachable from other computers).
...OK, how do we connect to our web server now?
containers/Container_Networking_Basics.md
First, we need to find the port number used by Docker
(the NGINX container listens on port 80, but this port will be mapped)
We can use `docker ps`:

```
$ docker ps
CONTAINER ID  IMAGE  ...  PORTS                  ...
e40ffb406c9e  nginx  ...  0.0.0.0:12345->80/tcp  ...
```
This means:
port 12345 on the Docker host is mapped to port 80 in the container
Now we need to connect to the Docker host!
containers/Container_Networking_Basics.md
When running Docker on your Linux workstation:
use localhost
, or any IP address of your machine
When running Docker on a remote Linux server:
use any IP address of the remote machine
When running Docker Desktop on Mac or Windows:
use localhost
In other scenarios (docker-machine
, local VM...):
use the IP address of the Docker VM
containers/Container_Networking_Basics.md
Point your browser to the IP address of your Docker host, on the port
shown by docker ps
for container port 80.
containers/Container_Networking_Basics.md
You can also use curl
directly from the Docker host.
Make sure to use the right port number if it is different from the example below:
```
$ curl localhost:12345
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```
containers/Container_Networking_Basics.md
There is metadata in the image telling "this image has something on port 80".
We can see that metadata with docker inspect
:
```
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```
This metadata was set in the Dockerfile, with the EXPOSE
keyword.
We can see that with docker history
:
```
$ docker history nginx
IMAGE         CREATED      CREATED BY
7f70b30f2cc6  11 days ago  /bin/sh -c #(nop) CMD ["nginx" "-g" "…
<missing>     11 days ago  /bin/sh -c #(nop) STOPSIGNAL [SIGTERM]
<missing>     11 days ago  /bin/sh -c #(nop) EXPOSE 80/tcp
```
containers/Container_Networking_Basics.md
Our Docker host has only one port 80, so only one container at a time can bind it: if multiple containers want port 80, only one can get it.
By default, containers do not get "their" port number, but a random one
(not "random" as "crypto random", but as "it depends on various factors")
We'll see later how to force a port number (including port 80!)
containers/Container_Networking_Basics.md
Hey, my network-fu is strong, and I have questions...
Can I publish one container on 127.0.0.2:80, and another on 127.0.0.3:80?
My machine has multiple (public) IP addresses, let's say A.A.A.A and B.B.B.B.
Can I have one container on A.A.A.A:80 and another on B.B.B.B:80?
I have a whole IPv4 subnet, can I allocate it to my containers?

What about IPv6?
You can do all these things when running Docker directly on Linux.
(On other platforms, generally not, but there are some exceptions.)
containers/Container_Networking_Basics.md
Parsing the output of `docker ps` would be painful.

There is a command to help us:

```
$ docker port <containerID> 80
0.0.0.0:12345
```
containers/Container_Networking_Basics.md
If you want to set port numbers yourself, no problem:
```
$ docker run -d -p 80:80 nginx
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8080:80 -p 8888:80 nginx
```
Note: the convention is `port-on-host:port-on-container`.
containers/Container_Networking_Basics.md
There are many ways to integrate containers in your network.
Start the container, letting Docker allocate a public port for it.
Then retrieve that port number and feed it to your configuration.
Pick a fixed port number in advance, when you generate your configuration.
Then start your container by setting the port numbers manually.
Use an orchestrator like Kubernetes or Swarm.
The orchestrator will provide its own networking facilities.
Orchestrators typically provide mechanisms to enable direct container-to-container communication across hosts, and publishing/load balancing for inbound traffic.
containers/Container_Networking_Basics.md
We can use the `docker inspect` command to find the IP address of the container.

```
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>
172.17.0.3
```
`docker inspect` is an advanced command that can retrieve a ton of information about our containers.
Here, we provide it with a format string to extract exactly the private IP address of the container.
containers/Container_Networking_Basics.md
Let's try to ping our container from another container.
```
docker run alpine ping <ipaddress>
PING 172.17.0.X (172.17.0.X): 56 data bytes
64 bytes from 172.17.0.X: seq=0 ttl=64 time=0.106 ms
64 bytes from 172.17.0.X: seq=1 ttl=64 time=0.250 ms
64 bytes from 172.17.0.X: seq=2 ttl=64 time=0.188 ms
```
When running on Linux, we can even ping that IP address directly!
(And connect to a container's ports even if they aren't published.)
containers/Container_Networking_Basics.md
Should I use `-p` or `-P`?

- When running a stack of containers, we will often use Compose.
- Compose will take care of exposing containers (through a `ports:` section in the `docker-compose.yml` file).
- It is, however, fairly common to use `docker run -P` for a quick test.
- Or `docker run -p ...` when an image doesn't `EXPOSE` a port correctly.
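For reference, the `-p 8000:80` flag above maps to Compose like this (a minimal sketch; the service name `web` is arbitrary):

```yaml
services:
  web:
    image: nginx
    ports:
      - "8000:80"   # same host:container convention as docker run -p
```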
containers/Container_Networking_Basics.md
We've learned how to:
Expose a network port.
Connect to an application running in a container.
Find a container's IP address.
The Container Network Model
(automatically generated title slide)
We will learn about the CNM (Container Network Model).
At the end of this lesson, you will be able to:
Create a private network for a group of containers.
Use container naming to connect services together.
Dynamically connect and disconnect containers to networks.
Set the IP address of a container.
We will also explain the principle of overlay networks and network plugins.
containers/Container_Network_Model.md
Docker has "networks".
We can manage them with the docker network
commands; for instance:
```
$ docker network ls
NETWORK ID    NAME       DRIVER
6bde79dfcf70  bridge     bridge
8d9c78725538  none       null
eb0eeab782f4  host       host
4c1ff84d6d3f  blog-dev   overlay
228a4355d548  blog-prod  overlay
```
New networks can be created (with `docker network create`).

(Note: networks `none` and `host` are special; let's set them aside for now.)
containers/Container_Network_Model.md
Conceptually, a Docker "network" is a virtual switch
(we can also think about it like a VLAN, or a WiFi SSID, for instance)
By default, containers are connected to a single network
(but they can be connected to zero, or many networks, even dynamically)
Each network has its own subnet (IP address range)
A network can be local (to a single Docker Engine) or global (span multiple hosts)
Containers can have network aliases providing DNS-based service discovery
(and each network has its own "domain", "zone", or "scope")
containers/Container_Network_Model.md
- A container can be given a network alias (e.g. with `docker run --net some-network --net-alias db ...`).
- The containers running in the same network can resolve that network alias (i.e. if they do a DNS lookup on `db`, it will give the container's address).
- We can have a different `db` container in each network (this avoids naming conflicts between different stacks).
- When we name a container, the name is automatically added as a network alias (i.e. `docker run --name xyz ...` is like `docker run --net-alias xyz ...`).
containers/Container_Network_Model.md
Networks are isolated
By default, containers in network A cannot reach those in network B
A container connected to both networks A and B can act as a router or proxy
Published ports are always reachable through the Docker host address (`docker run -P ...` makes a container port available to everyone).
containers/Container_Network_Model.md
We typically create one network per "stack" or app that we deploy
- More complex apps or stacks might require multiple networks (e.g. `frontend`, `backend`, ...).
- Networks allow us to deploy multiple copies of the same stack (e.g. `prod`, `dev`, `pr-442`, ...).
If we use Docker Compose, this is managed automatically for us
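For instance, Compose can declare those per-stack networks declaratively (a hypothetical sketch; the `frontend`/`backend` names mirror the example above, and `myapi` is a placeholder image):

```yaml
services:
  web:
    image: nginx
    networks: [frontend]
  api:
    image: myapi          # placeholder image name
    networks: [frontend, backend]
  db:
    image: redis
    networks: [backend]

networks:
  frontend:
  backend:
```

Here `api` is attached to both networks, so it can talk to `web` and `db`, while `web` and `db` cannot reach each other.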
containers/Container_Network_Model.md
CNM is the model used by Docker
Kubernetes uses a different model, architected around CNI (CNI is a kind of API between a container engine and CNI plugins).
Docker model:
Kubernetes model:
containers/Container_Network_Model.md
Let's create a network called `dev`.

```
$ docker network create dev
4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba
```

The network is now visible with the `network ls` command:

```
$ docker network ls
NETWORK ID    NAME    DRIVER
6bde79dfcf70  bridge  bridge
8d9c78725538  none    null
eb0eeab782f4  host    host
4c1ff84d6d3f  dev     bridge
```
containers/Container_Network_Model.md
We will create a named container on this network.

It will be reachable with its name, `es`.

```
$ docker run -d --name es --net dev elasticsearch:2
8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863
```
containers/Container_Network_Model.md
Now, create another container on this network.

```
$ docker run -ti --net dev alpine sh
root@0ecccdfa45ef:/#
```

From this new container, we can resolve and ping the other one, using its assigned name:

```
/ # ping es
PING es (172.18.0.2) 56(84) bytes of data.
64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms
64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms
64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms
^C
--- es ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms
root@0ecccdfa45ef:/#
```
containers/Container_Network_Model.md
Since Docker Engine 1.10, name resolution is implemented by a dynamic resolver.

Archeological note: when CNM was introduced (in Docker Engine 1.9, November 2015), name resolution was implemented with `/etc/hosts`, and it was updated each time containers were added/removed. This could cause interesting race conditions, since `/etc/hosts` was a bind-mount (and couldn't be updated atomically).

```
[root@0ecccdfa45ef /]# cat /etc/hosts
172.18.0.3	0ecccdfa45ef
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.2	es
172.18.0.2	es.dev
```
containers/Container_Network_Model.md
Service discovery with containers
(automatically generated title slide)
Let's try to run an application that requires two containers.
The first container is a web server.
The other one is a redis data store.
We will place them both on the dev
network created before.
containers/Container_Network_Model.md
The application is provided by the container image `jpetazzo/trainingwheels`.

We don't know much about it, so we will try to run it and see what happens!

Start the container, exposing all its ports:

```
$ docker run --net dev -d -P jpetazzo/trainingwheels
```

Check the port that has been allocated to it:

```
$ docker ps -l
```
containers/Container_Network_Model.md
The application tries to resolve a host named `redis`.

Note: we're not using a FQDN or an IP address here; just `redis`.
containers/Container_Network_Model.md
- We need to start a Redis container.
- That container must be on the same network as the web server.
- It must have the right network alias (`redis`) so the application can find it.

Start the container:

```
$ docker run --net dev --net-alias redis -d redis
```
containers/Container_Network_Model.md
Now, when the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container.
- Container names are unique (there can be only one `--name redis`).
- Network aliases are not unique.
- We can have the same network alias in different networks:

```
docker run --net dev --net-alias redis ...
docker run --net prod --net-alias redis ...
```
We can even have multiple containers with the same alias in the same network
(in that case, we get multiple DNS entries, aka "DNS round robin")
containers/Container_Network_Model.md
Let's try to ping our `es` container from another container, when that other container is not on the `dev` network.

```
$ docker run --rm alpine ping es
ping: bad address 'es'
```
Names can be resolved only when containers are on the same network.
Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify).
containers/Container_Network_Model.md
We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`!
We will use network aliases.
A container can have multiple network aliases.
Network aliases are local to a given network (only exist in this network).
Multiple containers can have the same network alias (even on the same network).
Since Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias.
containers/Container_Network_Model.md
Create the `prod` network.

```
$ docker network create prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```

We can now create multiple containers with the `es` alias on the new `prod` network.

```
$ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2
38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771
$ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2
1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d
```
containers/Container_Network_Model.md
Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image.

```
$ docker run --net prod --rm alpine nslookup es
Name:      es
Address 1: 172.23.0.3 prod-es-2.prod
Address 2: 172.23.0.2 prod-es-1.prod
```

(You can ignore the `can't resolve '(null)'` errors.)
containers/Container_Network_Model.md
Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint.
Try the following command a few times:

```
$ docker run --rm --net dev centos curl -s es:9200
{
  "name" : "Tarot",
...
}
```

Then try it a few times by replacing `--net dev` with `--net prod`:

```
$ docker run --rm --net prod centos curl -s es:9200
{
  "name" : "The Symbiote",
...
}
```
containers/Container_Network_Model.md
Docker will not create network names and aliases on the default `bridge` network.

Therefore, if you want to use those features, you have to create a custom network first.

Network aliases are not unique on a given network; i.e., multiple containers can have the same alias on the same network.
In that scenario, the Docker DNS server will return multiple records.
(i.e. you will get DNS round robin out of the box.)
Enabling Swarm Mode gives access to clustering and load balancing with IPVS.
Creation of networks and network aliases is generally automated with tools like Compose.
containers/Container_Network_Model.md
Don't rely exclusively on round robin DNS to achieve load balancing.

Many factors can affect DNS resolution (client-side caching, clients that only resolve a name once, TTL handling, ...).

It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints.
containers/Container_Network_Model.md
When creating a network, extra options can be provided.
- `--internal` disables outbound traffic (the network won't have a default gateway).
- `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed).
- `--subnet` (in CIDR notation) indicates the subnet to use.
- `--ip-range` (in CIDR notation) indicates the subnet to allocate from.
- `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers).
containers/Container_Network_Model.md
It is possible to set a container's address with `--ip`. A full example would look like this:

```
$ docker network create --subnet 10.66.0.0/16 pubnet
42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135
$ docker run --net pubnet --ip 10.66.66.66 -d nginx
b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
```
Note: don't hard code container IP addresses in your code!
I repeat: don't hard code container IP addresses in your code!
containers/Container_Network_Model.md
A network is managed by a driver.
The built-in drivers include:

- `bridge` (default)
- `none`
- `host`
- `macvlan`
- `overlay` (for Swarm clusters)

More drivers can be provided by plugins (OVS, VLAN...).

A network can have a custom IPAM (IP allocator).
containers/Container_Network_Model.md
The features we've seen so far only work when all containers are on a single host.
If containers span multiple hosts, we need an overlay network to connect them together.
Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, enabled with Swarm Mode.
Other plugins (Weave, Calico...) can provide overlay networks as well.
Once you have an overlay network, all the features that we've used in this chapter work identically across multiple hosts.
containers/Container_Network_Model.md
Out of the scope for this intro-level workshop!
Very short instructions:

- enable Swarm Mode (`docker swarm init`, then `docker swarm join` on other nodes)
- `docker network create mynet --driver overlay`
- `docker service create --network mynet myimage`

If you want to learn more about Swarm mode, you can check this video or these slides.
containers/Container_Network_Model.md
Out of the scope for this intro-level workshop!
General idea:
- install the plugin (they often ship within containers)
- run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!)
- some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location")
- you can then `docker network create --driver pluginname`
containers/Container_Network_Model.md
So far, we have specified which network to use when starting the container.
The Docker Engine also allows connecting and disconnecting while the container is running.
This feature is exposed through the Docker API, and through two Docker CLI commands:

- `docker network connect <network> <container>`
- `docker network disconnect <network> <container>`
containers/Container_Network_Model.md
We have a container named `es` connected to a network named `dev`.

Let's start a simple alpine container on the default network:

```
$ docker run -ti alpine sh
/ #
```

In this container, try to ping the `es` container:

```
/ # ping es
ping: bad address 'es'
```

This doesn't work, but we will change that by connecting the container.
containers/Container_Network_Model.md
Figure out the ID of our alpine container; here are two methods:

- looking at `/etc/hostname` in the container,
- running `docker ps -lq` on the host.

Run the following command on the host:

```
$ docker network connect dev <container_id>
```
containers/Container_Network_Model.md
Try again to ping `es` from the container.

It should now work correctly:

```
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```
Interrupt it with Ctrl-C.
containers/Container_Network_Model.md
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:

```
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ #
```
Each network connection is materialized with a virtual network interface.
As we can see, we can be connected to multiple networks at the same time.
Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```
From now on, if we try to ping `es`, it will not resolve:

```bash
/ # ping es
ping: bad address 'es'
```
Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```
Each network has its own set of network aliases.
We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, in connection order) and stops as soon as the name is found.
Therefore, if we are connected to both `dev` and `prod`, resolving `es` will *not* give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
However, we can look up `es.dev` or `es.prod` if we need to.
We can do reverse DNS lookups on containers' IP addresses.
If the IP address belongs to a network (other than the default bridge), the result will be:
name-or-first-alias-or-container-id.network-name
Example:
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:15:00:03
          inet addr:172.21.0.3  Bcast:172.21.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
/ # drill -t ptr 3.0.21.172.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa.  600  IN  PTR  hello.prod.
...
```
We can build a Dockerfile with a custom network with `docker build --network NAME`.
This can be used to check that a build doesn't access the network.
(But keep in mind that most Dockerfiles will fail,
because they need to install remote packages and dependencies!)
This may be used to access an internal package repository.
(But try to use a multi-stage build instead, if possible!)
Local development workflow with Docker
At the end of this section, you will be able to:
Share code between container and host.
Use a simple local development workflow.
containers/Local_Development_Workflow.md
We want to solve the following issues:
"Works on my machine"
"Not the same version"
"Missing dependency"
By using Docker containers, we will get a consistent development environment.
We have to work on some application whose code is at https://github.com/bretfisher/namer.
What is it? We don't know yet!
Let's download the code.
$ git clone https://github.com/bretfisher/namer
```bash
$ cd namer
$ ls -1
company_name_generator.rb
config.ru
docker-compose.yml
Dockerfile
Gemfile
```

Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
Let's look at the `Dockerfile`:

```dockerfile
FROM ruby
COPY . /src
WORKDIR /src
RUN bundler install
CMD ["rackup", "--host", "0.0.0.0"]
EXPOSE 9292
```

- It is based on the `ruby` image.
- The code is copied into `/src`.
- Dependencies are installed with `bundler`.
- The application is started with `rackup`.
Build the application with the `Dockerfile`!

```bash
$ docker build -t namer .
```

Run it!

```bash
$ docker run -dP namer
```

Check the allocated port:

```bash
$ docker ps -l
```
Point our browser to our Docker node, on the port allocated to the container.
Hit "reload" a few times.
This is an enterprise-class, carrier-grade, ISO-compliant company name generator!
(With 50% more b.s. than the average competition!)
(Wait, was that 50% more, or 50% less? Anyway!)
Option 1: rebuild the image and restart the container after each change.
Option 2: make the changes inside the container (e.g. with `docker exec`).
Option 3: use a volume to mount local files into the container.
We will tell Docker to map the current directory to `/src` in the container:

```bash
$ docker run -d -v $(pwd):/src -P namer
```
- `-d`: the container should run in detached mode (in the background).
- `-v`: the following host directory should be mounted inside the container.
- `-P`: publish all the ports exposed by this image.
- `namer` is the name of the image we will run.
- We don't specify a command to run because it is already set in the Dockerfile via `CMD`.

Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
The `-v` flag mounts a directory from your host into your Docker container.

The flag structure is:

```
[host-path]:[container-path]:[rw|ro]
```

- `[host-path]` and `[container-path]` are created if they don't exist.
- You can control the write status of the volume with the `ro` and `rw` options.
- If you don't specify `rw` or `ro`, it will be `rw` by default.
- The `-v /path/on/host:/path/in/container` syntax is the "old" syntax.
- The modern syntax looks like this:
  `--mount type=bind,source=/path/on/host,target=/path/in/container`
- `--mount` is more explicit, but `-v` is quicker to type.
- `--mount` supports all mount types; `-v` doesn't support `tmpfs` mounts.
- `--mount` fails if the path on the host doesn't exist; `-v` creates it.

With the new syntax, our command becomes:

```bash
docker run --mount type=bind,source=$(pwd),target=/src -dP namer
```
```bash
$ docker ps -l
CONTAINER ID  IMAGE  COMMAND  CREATED        STATUS  PORTS                    NAMES
045885b68bc5  namer  rackup   3 seconds ago  Up ...  0.0.0.0:32770->9292/tcp  ...
```
Our customer really doesn't like the color of our text. Let's change it.
$ vi company_name_generator.rb
And change
color: royalblue;
To:
color: red;
Reload the application in our browser.
The color should have changed.
Volumes do not copy or synchronize files between the host and the container
Changes made in the host are immediately visible in the container (and vice versa)
When running on Linux:
volumes and bind mounts correspond to directories on the host
if Docker runs in a Linux VM, these directories are in the Linux VM
When running on Docker Desktop:
volumes correspond to directories in a small Linux VM running Docker
access to bind mounts is translated to host filesystem access
(a bit like a network filesystem)
When running Docker natively on Linux, accessing a mount = native I/O
When running Docker Desktop, accessing a bind mount = file access translation
That file access translation has relatively good performance in general
(watch out, however, for that big `npm install` working on a bind mount!)
There are some corner cases when watching files (with mechanisms like inotify)
Features like "live reload" or programs like `entr` don't always behave properly
(due to e.g. file attribute caching, and other interesting details!)
(This is the title of a 2013 blog post by Chad Fowler, where he explains the concept of immutable infrastructure.)
Let's majorly mess up our container.
(Remove files or whatever.)
Now, how can we fix this?
Our old container (with the blue version of the code) is still running.
See on which port it is exposed:
docker ps
Point our browser to it to confirm that it still works fine.
Instead of updating a server, we deploy a new one.
This might be challenging with classical servers, but it's trivial with containers.
In fact, with Docker, the most logical workflow is to build a new image and run it.
If something goes wrong with the new image, we can always restart the old one.
We can even keep both versions running side by side.
If this pattern sounds interesting, you might want to read about blue/green deployment and canary deployments.
1. Write a Dockerfile to build an image containing our development environment.
   (Rails, Django, ... and all the dependencies for our app)
2. Start a container from that image.
   Use the `-v` flag to mount our source code inside the container.
3. Edit the source code outside the container, using familiar tools.
   (vim, emacs, textmate...)
4. Test the application.
   (Some frameworks pick up changes automatically.
   Others require you to Ctrl-C + restart after each modification.)
5. Iterate and repeat steps 3 and 4 until satisfied.
6. When done, commit+push source code changes.
Docker has a command called `docker exec`.
It allows users to run a new process in a container which is already running.
If you sometimes find yourself wishing you could SSH into a container: you can use `docker exec` instead.
You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation.
`docker exec` example:

```bash
$ # You can run ruby commands in the area the app is running and more!
$ docker exec -it <yourContainerId> bash
root@5ca27cf74c2e:/opt/namer# irb
irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact
=> [0, 1, 4, 9, 16]
irb(main):002:0> exit
```
Now that we're done, let's stop our container.
$ docker stop <yourContainerID>
And remove it.
$ docker rm <yourContainerID>
We've learned how to:
Share code between container and host.
Set our working directory.
Use a simple local development workflow.
Compose for development stacks
Dockerfile = great to build one container image.
What if we have multiple containers?
What if some of them require particular `docker run` parameters?
How do we connect them all together?
... Compose solves these use-cases (and a few more).
containers/Compose_For_Dev_Stacks.md
Before we had Compose, we would typically write custom scripts to:
build container images,
run containers using these images,
connect the containers together,
rebuild, restart, update these images and containers.
Compose enables a simple, powerful onboarding workflow:
Check out our code.
Run `docker compose up`.
Our app is up and running!
(Or: when do we need something else?)
Compose is not an orchestrator
It isn't designed to run containers across multiple nodes
(it can, however, work with Docker Swarm Mode)
Compose isn't ideal if we want to run containers on Kubernetes
it uses different concepts (Compose services ≠ Kubernetes services)
it needs a Docker Engine (although containerd support might be coming)
there are projects to convert Compose files, like Kompose and Podman
- Write Dockerfiles.
- Describe our stack of containers in a YAML file called `docker-compose.yml`.
- Run `docker compose up` (or `docker compose up -d` to run in the background).
- Compose pulls and builds the required images, and starts the containers.
- Compose shows the combined logs of all the containers.
  (if running in the background, use `docker compose logs`)
- Hit Ctrl-C to stop the whole stack.
  (if running in the background, use `docker compose stop`)
After making changes to our source code, we can:
- `docker compose build` to rebuild container images
- `docker compose up` to restart the stack with the new images

We can also combine both with `docker compose up --build`.

Compose will be smart, and only recreate the containers that have changed.

When working with interpreted languages:

- don't rebuild each time
- leverage a `volumes` section instead
First step: clone the source code for the app we will be working on.
```bash
git clone https://github.com/bretfisher/trainingwheels
cd trainingwheels
```
Second step: start the app.
docker compose up
Watch Compose build and run the app.
That Compose stack exposes a web server on port 8000; try connecting to it.
We should see a web page like this:
Each time we reload, the counter should increase.
When we hit Ctrl-C, Compose tries to gracefully terminate all of the containers.
After ten seconds (or if we press `^C` again) it will forcibly kill them.
2013-2015, "fig" was created in Python by a small team outside Docker.
2015, Docker hired them, and renamed fig to Docker Compose.
Its CLI, `docker-compose`, reigned as the easiest tool for running containers with YAML.
But it was a separate `pip` install, and was not written in Go (golang) like every other Docker tool.
2020, Docker rebuilt it in Go, making it faster and easier to install.
They call it Compose V2, and it has many new features (video walk-through)
Replace all your `docker-compose` keystrokes with `docker compose`.
It should have 100% backward compatibility.
The `docker-compose.yml` file

Here is the file used in the demo:

```yaml
services:
  www:
    build: www
    ports:
      - ${PORT-8000}:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
  redis:
    image: redis
```
A Compose file has multiple sections:
- `services` is mandatory. Each service corresponds to one or more containers from the same image (replicas).
- `networks` is optional and indicates to which networks containers should be connected.
  (By default, containers will be connected on a private, per-compose-file network.)
- `volumes` is optional and can define volumes to be used and/or shared by the containers.
Until 2020, Compose files had a `version: x.x` key/value in each file.
The version in the file controlled what features were supported, and it was confusing.
The last version was 3.9, so you might see `version: 3.x` in old `docker-compose.yml` files.
Now, the `docker compose` CLI, and other tools, follow the Compose Spec.
All features are now supported in every file and no version is required!
If using Docker Swarm, `version: 3.9` is still required. It doesn't support the Compose Spec.
Note, this isn't related to tool versions, like `docker compose version`.
The Docker documentation has excellent information about the Compose file format if you need to know more about versions.
`docker-compose.yml`

Each service in the YAML file must contain either `build`, or `image`.

- `build` indicates a path containing a Dockerfile.
- `image` indicates an image name (local, or on a registry).
- If both are specified, an image will be built from the `build` directory and named `image`.

The other parameters are optional.
They encode the parameters that you would typically add to `docker run`.
Sometimes they have several minor improvements.
- `command` indicates what to run (like `CMD` in a Dockerfile).
- `ports` translates to one (or multiple) `-p` options to map ports.
  You can specify local ports (i.e. `x:y` to expose public port `x`).
- `volumes` translates to one (or multiple) `-v` options.
  You can use relative paths here.
Bookmark this reference doc! https://docs.docker.com/compose/compose-file/
We can use environment variables in Compose files (like `$THIS` or `${THAT}`).

We can provide default values, e.g. `${PORT-8000}`.

Compose will also automatically load the environment file `.env`.
(it should contain `VAR=value`, one per line)
This is a great way to customize build and run parameters
(base image versions to use, build and run secrets, port numbers...)
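The `${PORT-8000}` default follows standard POSIX shell parameter expansion rules, which Compose mimics. A quick sketch in any POSIX shell shows the behavior:

```shell
# POSIX "use default" expansion: ${VAR-default} expands to "default"
# only when VAR is unset; otherwise the variable's value wins.
unset PORT
echo "${PORT-8000}"    # prints 8000 (PORT is unset)
PORT=9000
echo "${PORT-8000}"    # prints 9000 (PORT is set)
```

Compose also accepts the `${PORT:-8000}` form, which additionally falls back to the default when the variable is set but empty.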
Follow 12-factor app configuration principles
(configure the app through environment variables)
Provide (in the repo) a default environment file suitable for development
(no secret or sensitive value)
Copy the default environment file to `.env` and tweak it.
(or: provide a script to generate `.env` from a template)
- Copy the stack in two different directories, e.g. `front` and `frontcopy`.
- Compose prefixes images and containers with the directory name:
  - `front_www`, `front_www_1`, `front_db_1`
  - `frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1`
- Alternatively, use `docker compose -p frontcopy`
  (to set the `--project-name` of a stack, which defaults to the dir name)
- Each copy is isolated from the others (runs on a different network)
We have `ps`, `docker ps`, and similarly, `docker compose ps`:

```bash
$ docker compose ps
         Name                  Command          State           Ports
----------------------------------------------------------------------------
trainingwheels_redis_1   /entrypoint.sh red   Up      6379/tcp
trainingwheels_www_1     python counter.py    Up      0.0.0.0:8000->5000/tcp
```
Shows the status of all the containers of our stack.
Doesn't show the other containers.
If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command:

```bash
$ docker compose kill
```

Likewise, `docker compose rm` will let you remove containers (after confirmation):

```bash
$ docker compose rm
Going to remove trainingwheels_redis_1, trainingwheels_www_1
Are you sure? [yN] y
Removing trainingwheels_redis_1...
Removing trainingwheels_www_1...
```
Alternatively, `docker compose down` will stop and remove containers.

It will also remove other resources, like networks that were created for the application.

```bash
$ docker compose down
Stopping trainingwheels_www_1   ... done
Stopping trainingwheels_redis_1 ... done
Removing trainingwheels_www_1   ... done
Removing trainingwheels_redis_1 ... done
```

Use `docker compose down -v` to remove everything including volumes.
When an image gets updated, Compose automatically creates a new container
The data in the old container is lost...
...Except if the container is using a volume
Compose will then re-attach that volume to the new container
(and data is then retained across database upgrades)
All good database images use volumes
(e.g. all official images)
Unfortunately, Docker volumes don't have labels or metadata
Compose tracks volumes thanks to their associated container
If the container is deleted, the volume gets orphaned
Example: docker compose down && docker compose up
the old volume still exists, detached from its container
a new volume gets created
`docker compose down -v`/`--volumes` deletes volumes
(but not `docker compose down && docker compose down -v`!)
Option 1: named volumes

```yaml
services:
  app:
    volumes:
      - data:/some/path
volumes:
  data:
```

- Volume will be named `<project>_data`
- It won't be orphaned with `docker compose down`
- It will correctly be removed with `docker compose down -v`
Option 2: relative paths

```yaml
services:
  app:
    volumes:
      - ./data:/some/path
```

- Makes it easy to colocate the app and its data
  (for migration, backups, disk usage accounting...)
- Won't be removed by `docker compose down -v`
Compose provides multiple features to manage complex stacks
(with many containers)
- `-f`/`--file`/`$COMPOSE_FILE` can be a list of Compose files
  (separated by `:` and merged together)
- Services can be assigned to one or more profiles
- `--profile`/`$COMPOSE_PROFILES` can be a list of comma-separated profiles
  (see Using service profiles in the Compose documentation)
- These variables can be set in `.env`
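As a sketch (service and image names are invented), a service assigned to a profile only starts when that profile is activated:

```yaml
services:
  app:
    image: myapp          # no profiles key: always starts
  debugger:
    image: busybox        # only starts with: docker compose --profile debug up
    profiles: ["debug"]
```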
A service can have a `depends_on` section (listing one or more other services).

This is used when bringing up individual services
(e.g. `docker compose up blah` or `docker compose run foo`).

It can even wait for a service to be up and ready for connections (healthy):

```yaml
services:
  node:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: /healthchecks/postgres-healthcheck
```
Exercise — writing a Compose file
Let's write a Compose file for the wordsmith app!
The code is at: https://github.com/jpetazzo/wordsmith
containers/Exercise_Composefile.md
(Extra Docker content)
Tips for efficient Dockerfiles
We will see how to:
Reduce the number of layers.
Leverage the build cache so that builds can be faster.
Embed unit testing in the build process.
Each line in a `Dockerfile` creates a new layer.
Build your `Dockerfile` to take advantage of Docker's caching system.
Combine commands by using `&&` to continue commands and `\` to wrap lines.
Note: it is common to build a Dockerfile line by line:

```dockerfile
RUN apt-get install thisthing
RUN apt-get install andthatthing andthatotherone
RUN apt-get install somemorestuff
```
And then refactor it trivially before shipping:
```dockerfile
RUN apt-get install thisthing andthatthing andthatotherone somemorestuff
```
Classic Dockerfile problem:
"each time I change a line of code, all my dependencies are re-installed!"
Solution: `COPY` dependency lists (`package.json`, `requirements.txt`, etc.) by themselves to avoid reinstalling unchanged dependencies every time.
`Dockerfile`

The dependencies are reinstalled every time, because the build system does not know if `requirements.txt` has been updated.

```dockerfile
FROM python
WORKDIR /src
COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```
`Dockerfile`

Adding the dependencies as a separate step means that Docker can cache more efficiently and only install them when `requirements.txt` changes.

```dockerfile
FROM python
WORKDIR /src
COPY requirements.txt .
RUN pip install -qr requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
`chown`, `chmod`, `mv`

Layers cannot efficiently store changes in permissions or ownership.
Layers cannot efficiently represent a file being moved, either.
As a result, operations like `chown`, `chmod`, `mv` can be expensive.

For instance, in the Dockerfile snippet below, each `RUN` line creates a layer with an entire copy of `some-file`.

```dockerfile
COPY some-file .
RUN chown www-data:www-data some-file
RUN chmod 644 some-file
RUN mv some-file /var/www
```
How can we avoid that?
Instead of using `mv`, directly put files in the right place.

When extracting archives (tar, zip...), merge operations in a single layer.

Example:

```dockerfile
...
RUN wget http://.../foo.tar.gz \
 && tar -zxf foo.tar.gz \
 && mv foo/fooctl /usr/local/bin \
 && rm -rf foo foo.tar.gz
...
```
`COPY --chown`

The Dockerfile instruction `COPY` can take a `--chown` parameter.

Examples:

```dockerfile
...
COPY --chown=1000 some-file .
COPY --chown=1000:1000 some-file .
COPY --chown=www-data:www-data some-file .
```
The `--chown` flag can specify a user, or a `user:group` pair.
The user and group can be specified as names or numbers.
When using names, the names must exist in `/etc/passwd` or `/etc/group`.
(In the container, not on the host!)
Instead of using `chmod`, set the right file permissions locally.
When files are copied with `COPY`, permissions are preserved.
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>

FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```

If `RUN <unit tests>` fails, the build doesn't produce an image.

Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
But sometimes, we have to use different (and even opposed) practices depending on:
the complexity of our project,
the programming language or framework that we are using,
the stage of our project (early MVP vs. super-stable production),
whether we're building a final image or a base for further images,
etc.
We are going to show a few examples using very different techniques.
When authoring official images, it is a good idea to reduce as much as possible:
the number of layers,
the size of the final image.
This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it.
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
 && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
 && docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
 && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
 && tar -xzf wordpress.tar.gz -C /usr/src/ \
 && rm wordpress.tar.gz \
 && chown -R www-data:www-data /usr/src/wordpress
```
(Source: Wordpress official image)
Sometimes, it is better to prioritize maintainer convenience.
In particular, if:
the image changes a lot,
the image has very few users (e.g. only 1, the maintainer!),
the image is built and run on the same machine,
the image is built and run on machines with a very fast link ...
In these cases, just keep things simple!
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
```dockerfile
FROM debian:sid
RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages
COPY . /blog
WORKDIR /blog
VOLUME /blog/_site
EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
Images can have a tag, indicating the version of the image.
But sometimes, there are multiple important components, and we need to indicate the versions for all of them.
This can be done with environment variables:
```dockerfile
ENV PIP=9.0.3 \
    ZC_BUILDOUT=2.11.2 \
    SETUPTOOLS=38.7.0 \
    PLONE_MAJOR=5.1 \
    PLONE_VERSION=5.1.0 \
    PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```
(Source: Plone official image)
It is very common to define a custom entrypoint.
That entrypoint will generally be a script, performing any combination of:
- pre-flight checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file),
- generation or validation of configuration files,
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
- and more.
```sh
#!/bin/sh
set -e

# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
  set -- redis-server "$@"
fi

# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
  chown -R redis .
  exec su-exec redis "$0" "$@"
fi

exec "$@"
```
(Source: Redis official image)
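The argument sniffing at the top of that script relies on POSIX prefix/suffix removal: `${1#-}` strips a leading `-` and `${1%.conf}` strips a trailing `.conf`, so comparing the result with `$1` tells us whether the pattern matched. A standalone sketch (the function name is made up for illustration):

```shell
# Reproduce the entrypoint's test outside the image.
looks_like_redis_args() {
  if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
    echo "prepend redis-server"   # arg is an option or a .conf file
  else
    echo "run as-is"              # arg is some other command
  fi
}
looks_like_redis_args --maxmemory   # prepend redis-server
looks_like_redis_args redis.conf    # prepend redis-server
looks_like_redis_args bash          # run as-is
```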
To facilitate maintenance (and avoid human errors), avoid repeating information like:
version numbers,
remote asset URLs (e.g. source tarballs) ...
Instead, use environment variables.
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ... \
 && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
 && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
 && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
 && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
 && tar -xf "node-v$NODE_VERSION.tar.xz" \
 && cd "node-v$NODE_VERSION" \
...
```
(Source: Nodejs official image)
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the trainingwheels demo app for an example.
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: trainingwheels Dockerfile)
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
(Source: trainingwheels Compose file)
The main goal of containers is to make our lives easier.
In this chapter, we showed many ways to write Dockerfiles.
These Dockerfiles sometimes use diametrically opposed techniques.
Yet, they were the "right" ones for a specific situation.
It's OK (and even encouraged) to start simple and evolve as needed.
Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!
Reducing image size
In the previous example, our final image contained:

- our `hello` program
- its source code
- the compiler

Only the first one is strictly necessary.
We are going to see how to obtain an image without the superfluous components.
containers/Multi_Stage_Builds.md
Removing files with an extra `RUN`?

What happens if we do one of the following commands?

- `RUN rm -rf ...`
- `RUN apt-get remove ...`
- `RUN make clean ...`

This adds a layer which removes a bunch of files.
But the previous layers (which added the files) still exist.
When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
|---|---|---|
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get install somepackage` | Size of files added (e.g. a few MB) | Sum of this layer + all previous ones |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get remove somepackage` | Almost zero (just metadata) | Same as previous one |
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
Various techniques are available to obtain smaller images:
collapsing layers,
adding binaries that are built outside of the Dockerfile,
squashing the final image,
multi-stage builds.
Let's review them quickly.
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
 && apt-get install xxx \
 && ... \
 && apt-get remove xxx \
 && ...
```
This `RUN` command gives us a single layer.
The files that are added, then removed in the same layer, do not grow the layer size.
Pros:
works on all versions of Docker
doesn't require extra tools
Cons:
not very readable
some unnecessary files might still remain if the cleanup is not thorough
that layer is expensive (slow to build)
This results in a Dockerfile looking like this:

```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```

Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
See for instance the busybox official image or this older busybox image.
Pros:
Cons:
requires an extra build tool
we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
breaks portability across different platforms
grows repository size a lot if the binary is updated frequently
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
Activate experimental features and squash the final image:
docker image build --squash ...
Export/import the final image.
```
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
Pros:
single-layer images are smaller and faster to download
removed files no longer take up storage and network resources
Cons:
we still need to actively remove unnecessary files
squash operation can take a lot of time (on big images)
squash operation does not benefit from cache
(even if we change just a tiny file, the whole image needs to be re-squashed)
Multi-stage builds allow us to have multiple stages.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
Multi-stage builds
At any point in our Dockerfile
, we can add a new FROM
line.
This line starts a new stage of our build.
Each stage can access the files of the previous stages with COPY --from=...
.
When a build is tagged (with docker build -t ...
), the last stage is tagged.
Previous stages are not discarded: they will be used for caching, and can be referenced.
Each stage is numbered, starting at 0
We can copy a file from a previous stage by indicating its number, e.g.:
COPY --from=0 /file/from/first/stage /location/in/current/stage
We can also name stages, and reference these names:
```
FROM golang AS builder
RUN ...
FROM alpine
COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
```
We will change our Dockerfile to:
give a nickname to the first stage: compiler
add a second stage using the same ubuntu
base image
add the hello
binary to the second stage
make sure that CMD
is in the second stage
The resulting Dockerfile is on the next slide.
Dockerfile
Here is the final Dockerfile:
```
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```
Let's build it, and check that it works correctly:
```
docker build -t hellomultistage .
docker run hellomultistage
```
List our images with docker images
, and check the size of:
the ubuntu
base image,
the single-stage hello
image,
the multi-stage hellomultistage
image.
We can achieve even smaller images if we use smaller base images.
However, if we use common base images (e.g. if we standardize on ubuntu
),
these common images will be pulled only once per node, so they are
virtually "free."
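As an illustration, here is a sketch building on the hello example above, with a `scratch` final stage. (The static link flag is an assumption: `scratch` provides no libraries and no shell, so the binary must be statically linked and `CMD` must use the exec form.)

```
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
# Link statically so the binary has no runtime dependencies
RUN make CFLAGS=-static hello
FROM scratch
COPY --from=compiler /hello /hello
CMD ["/hello"]
```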
We can also tag an intermediary stage with the following command:
```
docker build --target STAGE --tag NAME .
```
This will create an image (named NAME
) corresponding to stage STAGE
This can be used to easily access an intermediary stage for inspection
(instead of parsing the output of docker build
to find out the image ID)
This can also be used to describe multiple images from a single Dockerfile
(instead of using multiple Dockerfiles, which could go out of sync)
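For example, with the multi-stage Dockerfile above, we could tag the compiler stage by itself and poke around in it (a sketch; `hello-builder` is a name we are making up):

```
docker build --target compiler --tag hello-builder .
docker run -ti hello-builder bash
```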
Exercise — writing better Dockerfiles
Let's update our Dockerfiles to leverage multi-stage builds!
The code is at: https://github.com/jpetazzo/wordsmith
Use a different tag for these images, so that we can compare their sizes.
What's the size difference between single-stage and multi-stage builds?
containers/Exercise_Dockerfile_Advanced.md
Getting inside a container
On a traditional server or VM, we sometimes need to:
log into the machine (with SSH or on the console),
analyze the disks (by removing them or rebooting with a rescue system).
In this chapter, we will see how to do that with containers.
Every once in a while, we want to log into a machine.
In a perfect world, this shouldn't be necessary.
You need to install or update packages (and their configuration)?
Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)
You need to view logs and metrics?
Collect and access them through a centralized platform.
In the real world, though ... we often need shell access!
Even without a perfect deployment system, we can do many operations without getting a shell.
Installing packages can (and should) be done in the container image.
Configuration can be done at the image level, or when the container starts.
Dynamic configuration can be stored in a volume (shared with another container).
Logs written to stdout are automatically collected by the Docker Engine.
Other logs can be written to a shared volume.
Process information and metrics are visible from the host.
Let's save logging, volumes ... for later, but let's have a look at process information!
If you run Docker on Linux, container processes are visible on the host.
$ ps faux | less
Scroll around the output of this command.
You should see the jpetazzo/clock
container.
A containerized process is just like any other process on the host.
We can use tools like `lsof`, `strace`, `gdb`, ... to analyze them.
Each process (containerized or not) belongs to namespaces and cgroups.
The namespaces and cgroups determine what a process can "see" and "do".
Analogy: each process (containerized or not) runs with a specific UID (user ID).
UID=0 is root, and has elevated privileges. Other UIDs are normal users.
We will give more details about namespaces and cgroups later.
Sometimes, we need to get a shell anyway.
We could run some SSH server in the container ...
But it is easier to use docker exec
.
$ docker exec -ti ticktock sh
This creates a new process (running sh
) inside the container.
This can also be done "manually" with the tool nsenter
.
The tool that you want to run needs to exist in the container.
Some tools (like ip netns exec
) let you attach to one namespace at a time.
(This lets you e.g. setup network interfaces, even if you don't have ifconfig
or ip
in the container.)
Most importantly: the container needs to be running.
What if the container is stopped or crashed?
A stopped container is only storage (like a disk drive).
We cannot SSH into a disk drive or USB stick!
We need to connect the disk to a running machine.
How does that translate into the container world?
As an exercise, we are going to try to find out what's wrong with jpetazzo/crashtest
.
docker run jpetazzo/crashtest
The container starts, but then stops immediately, without any output.
What would MacGyver™ do?
First, let's check the status of that container.
docker ps -l
Use `docker diff` to see files that were added / changed / removed:

```
docker diff <container_id>
```
The container ID was shown by docker ps -l
.
We can also see it with docker ps -lq
.
The output of docker diff
shows some interesting log files!
Let's retrieve one of those log files with `docker cp`:

```
docker cp <container_id>:/var/log/nginx/error.log .
cat error.log
```
(The directory /run/nginx
doesn't exist.)
We can restart a container with docker start
...
... But it will probably crash again immediately!
We cannot specify a different program to run with docker start
But we can create a new image from the crashed container
docker commit <container_id> debugimage
docker run -ti --entrypoint sh debugimage
We can also dump the entire filesystem of a container.
This is done with docker export
.
It generates a tar archive.
docker export <container_id> | tar tv
This will give a detailed listing of the content of the container.
Restarting and attaching to containers
We have started containers in the foreground, and in the background.
In this chapter, we will see how to:
containers/Start_And_Attach.md
The distinction between foreground and background containers is arbitrary.
From Docker's point of view, all containers are the same.
All containers run the same way, whether there is a client attached to them or not.
It is always possible to detach from a container, and to reattach to a container.
Analogy: attaching to a container is like plugging a keyboard and screen to a physical server.
If you have started an interactive container (with option -it
), you can detach from it.
The "detach" sequence is ^P^Q
.
Otherwise you can detach by killing the Docker client.
(But not by hitting ^C
, as this would deliver SIGINT
to the container.)
What does -it
stand for?
`-t` means "allocate a terminal."
`-i` means "connect stdin to the terminal."
Docker for Windows has a different detach experience due to shell features.
^P^Q
does not work.
^C
will detach, rather than stop the container.
Using Bash, the Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.
Both PowerShell and Bash work well on Windows 10; just be aware of the differences.
Don't like `^P^Q`? No problem!
You can change the detach sequence with `docker run --detach-keys`.
Start a container with a custom detach command:
$ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock
Detach by hitting ^X x
. (This is ctrl-x then x, not ctrl-x twice!)
Check that our container is still running:
$ docker ps -l
You can attach to a container:
$ docker attach <containerID>
If you don't specify `--detach-keys` when attaching, it defaults back to `^P^Q`.
Try it on our previous container:
$ docker attach $(docker ps -lq)
Check that ^X x
doesn't work, but ^P ^Q
does.
Warning: if the container was started without -it
...
You won't be able to detach with `^P^Q`.
If you hit `^C`, the signal will be proxied to the container.
Remember: you can always detach by killing the Docker client.
Use docker attach
if you intend to send input to the container.
If you just want to see the output of a container, use docker logs
.
$ docker logs --tail 1 --follow <containerID>
When a container has exited, it is in stopped state.
It can then be restarted with the start
command.
$ docker start <yourContainerID>
The container will be restarted using the same options you launched it with.
You can re-attach to it if you want to interact with it:
$ docker attach <yourContainerID>
Use docker ps -a
to identify the container ID of a previous jpetazzo/clock
container,
and try those commands.
REPL = Read Eval Print Loop
Shells, interpreters, TUI ...
Symptom: you docker attach
, and see nothing
The REPL doesn't know that you just attached, and doesn't print anything
Try hitting ^L
or Enter
When you docker attach
, the Docker Engine sends SIGWINCH signals to the container.
SIGWINCH = WINdow CHange; indicates a change in window size.
This will cause some CLI and TUI programs to redraw the screen.
But not all of them.
Naming and inspecting containers
In this lesson, we will learn about an important Docker concept: container naming.
Naming allows us to:
Easily reference a container.
Ensure uniqueness of a specific container.
We will also see the inspect
command, which gives a lot of details about a container.
containers/Naming_And_Inspecting.md
So far, we have referenced containers with their ID.
We have copy-pasted the ID, or used a shortened prefix.
But each container can also be referenced by its name.
If a container is named thumbnail-worker
, I can do:
```
$ docker logs thumbnail-worker
$ docker stop thumbnail-worker
```

etc.
When we create a container, if we don't give a specific name, Docker will pick one for us.
It will be the concatenation of:
A mood (furious, goofy, suspicious, boring...)
The name of a famous inventor (tesla, darwin, wozniak...)
Examples: happy_curie
, clever_hopper
, jovial_lovelace
...
You can set the name of the container when you create it.
$ docker run --name ticktock jpetazzo/clock
If you specify a name that already exists, Docker will refuse to create the container.
This lets us enforce uniqueness of a given resource.
You can rename containers with docker rename
.
This allows you to "free up" a name without destroying the associated container.
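For instance (a sketch, assuming a container named `ticktock` already exists):

```
docker rename ticktock ticktock-old
docker run --name ticktock jpetazzo/clock
```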
The docker inspect
command will output a very detailed JSON map.
```
$ docker inspect <containerID>
[{
...(many pages of JSON here)...
```
There are multiple ways to consume that information.
You could grep and cut or awk the output of docker inspect
.
Please, don't.
It's painful.
If you really must parse JSON from the Shell, use JQ! (It's great.)
$ docker inspect <containerID> | jq .
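For example, here is how a single field can be pulled out of that JSON with a jq filter. (A sketch: it assumes `jq` is installed, and for illustration it runs against a canned one-line excerpt of `docker inspect` output instead of a live container.)

```shell
echo '[{"NetworkSettings":{"IPAddress":"172.17.0.2"}}]' \
  | jq -r '.[0].NetworkSettings.IPAddress'
# prints 172.17.0.2
```

Against a real container, you would pipe `docker inspect <containerID>` into the same filter.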
Using `--format`
You can specify a format string, which will be parsed by Go's text/template package.
```
$ docker inspect --format '{{ json .Created }}' <containerID>
"2015-02-24T07:21:11.712240394Z"
```
The generic syntax is to wrap the expression with double curly braces.
The expression starts with a dot representing the JSON object.
Then each field or member can be accessed in dotted notation syntax.
The optional json
keyword asks for valid JSON output.
(e.g. here it adds the surrounding double-quotes.)
Labels
Labels allow us to attach arbitrary metadata to containers.
Labels are key/value pairs.
They are specified at container creation.
You can query them with docker inspect
.
They can also be used as filters with some commands (e.g. docker ps
).
Let's create a few containers with a label owner
.
```
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```
We didn't specify a value for the owner
label in the last example.
This is equivalent to setting the value to be an empty string.
We can view the labels with docker inspect
.
```
$ docker inspect $(docker ps -lq) | grep -A3 Labels
        "Labels": {
            "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
            "owner": ""
        },
```
We can use the --format
flag to list the value of a label.
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
We can list containers having a specific label.
$ docker ps --filter label=owner
Or we can list containers having a specific label with a specific value.
$ docker ps --filter label=owner=alice
HTTP vhost of a web app or web service.
(The label is used to generate the configuration for NGINX, HAProxy, etc.)
Backup schedule for a stateful service.
(The label is used by a cron job to determine if/when to backup container data.)
Service ownership.
(To determine internal cross-billing, or who to page in case of outage.)
etc.
Advanced Dockerfile Syntax
We have seen simple Dockerfiles to illustrate how Docker builds container images.
In this section, we will give a recap of the Dockerfile syntax, and introduce advanced Dockerfile commands that we might occasionally come across, or want to use in specific scenarios.
containers/Advanced_Dockerfiles.md
`Dockerfile` usage summary
`Dockerfile` instructions are executed in order.
Each instruction creates a new layer in the image.
Docker maintains a cache with the layers of previous builds.
When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer.
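To make the most of that cache, a common pattern is to copy dependency manifests before the rest of the code, so that slow steps only re-run when the manifests change. Here is a sketch for a hypothetical Node.js app (file names are assumptions):

```
FROM node
WORKDIR /app
# This layer is only rebuilt when package.json changes ...
COPY package.json .
RUN npm install
# ... while code changes only invalidate the layers below.
COPY . .
CMD ["node", "server.js"]
```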
The FROM
instruction MUST be the first non-comment instruction.
Lines starting with #
are treated as comments.
Some instructions (like CMD
or ENTRYPOINT
) update a piece of metadata.
(As a result, each call to these instructions makes the previous one useless.)
The `RUN` instruction
The `RUN` instruction can be specified in two ways.
With shell wrapping, which runs the specified command inside a shell,
with /bin/sh -c
:
RUN apt-get update
Or using the exec
method, which avoids shell string expansion, and
allows execution in images that don't have /bin/sh
:
RUN [ "apt-get", "update" ]
`RUN` will do the following: execute a command, and record the resulting changes to the filesystem.
`RUN` will NOT do the following: record state changes to running processes, or automatically start daemons.
If you want to start something automatically when the container runs,
you should use CMD
and/or ENTRYPOINT
.
It is possible to execute multiple commands in a single step:
RUN apt-get update && apt-get install -y wget && apt-get clean
It is also possible to break a command onto multiple lines:
```
RUN apt-get update \
 && apt-get install -y wget \
 && apt-get clean
```
The `EXPOSE` instruction
The `EXPOSE` instruction tells Docker what ports are to be published in this image.
```
EXPOSE 8080
EXPOSE 80 443
EXPOSE 53/tcp 53/udp
```
All ports are private by default.
Declaring a port with EXPOSE
is not enough to make it public.
The Dockerfile
doesn't control on which port a service gets exposed.
When you docker run -p <port> ...
, that port becomes public.
(Even if it was not declared with EXPOSE
.)
When you docker run -P ...
(without port number), all ports
declared with EXPOSE
become public.
A public port is reachable from other containers and from outside the host.
A private port is not reachable from outside.
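We can see which public ports `-P` picked with `docker port` (a sketch; `web` is a name we choose):

```
docker run -d -P --name web nginx
docker port web
```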
The `COPY` instruction
The `COPY` instruction adds files and content from your host into the image.
COPY . /src
This will add the contents of the build context (the directory
passed as an argument to docker build
) to the directory /src
in the container.
Note: you can only reference files and directories inside the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent:
```
COPY . /src
COPY / /src
```
Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail.
Otherwise, a Dockerfile
could succeed on host A, but fail on host B.
The `ADD` instruction
`ADD` works almost like `COPY`, but has a few extra features.
ADD
can get remote files:
ADD http://www.example.com/webapp.jar /opt/
This would download the webapp.jar
file and place it in the /opt
directory.
ADD
will automatically unpack zip files and tar archives:
ADD ./assets.zip /var/www/htdocs/assets/
This would unpack assets.zip
into /var/www/htdocs/assets
.
However, ADD
will not automatically unpack remote archives.
`ADD`, `COPY`, and the build cache
Before creating a new layer, Docker checks its build cache.
For most Dockerfile instructions, Docker only looks at the
Dockerfile
content to do the cache lookup.
For ADD
and COPY
instructions, Docker also checks if the files
to be added to the container have been changed.
ADD
always needs to download the remote file before
it can check if it has been changed.
(It cannot use, e.g., ETags or If-Modified-Since headers.)
The `VOLUME` instruction
The `VOLUME` instruction tells Docker that a specific directory should be a volume.
VOLUME /var/lib/mysql
Filesystem access in volumes bypasses the copy-on-write layer, offering native performance to I/O done in those directories.
Volumes can be attached to multiple containers, allowing data to be carried over from one container to another, e.g. to upgrade a database to a newer version.
It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary.
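A quick way to see read-only mode in action (a sketch; assumes the alpine image):

```
docker run --read-only alpine touch /hello          # fails: read-only file system
docker run --read-only -v /data alpine touch /data/hello   # works: /data is a volume
```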
The `WORKDIR` instruction
The `WORKDIR` instruction sets the working directory for subsequent instructions.
It also affects CMD
and ENTRYPOINT
, since it sets the working
directory used when starting the container.
WORKDIR /src
You can specify WORKDIR
again to change the working directory for
further operations.
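For instance, a sketch reusing the hello example from earlier (the compiler install is implied):

```
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY hello.c .      # lands in /src/hello.c
RUN make hello      # runs with /src as the working directory
CMD ["/src/hello"]
```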
The `ENV` instruction
The `ENV` instruction specifies environment variables that should be set in any container launched from the image.
ENV WEBAPP_PORT 8080
This will result in the environment variable `WEBAPP_PORT=8080` being set in any container launched from this image.
You can also specify environment variables when you use docker run
.
$ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ...
The `USER` instruction
The `USER` instruction sets the user name or UID to use when running the image.
It can be used multiple times to change back to root or to another user.
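For instance (a sketch; the user name is an assumption):

```
FROM ubuntu
RUN useradd --create-home appuser   # executed as root
USER appuser                        # subsequent RUN/CMD run as appuser
WORKDIR /home/appuser
CMD ["bash"]
```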
The `CMD` instruction
The `CMD` instruction is a default command run when a container is launched from the image.
CMD [ "nginx", "-g", "daemon off;" ]
Means we don't need to specify nginx -g "daemon off;"
when running the
container.
Instead of:
$ docker run <dockerhubUsername>/web_image nginx -g "daemon off;"
We can just do:
$ docker run <dockerhubUsername>/web_image
Just like `RUN`, the `CMD` instruction comes in two forms.
The first executes in a shell:
CMD nginx -g "daemon off;"
The second executes directly, without shell processing:
CMD [ "nginx", "-g", "daemon off;" ]
The `CMD` can be overridden when you run a container.
$ docker run -it <dockerhubUsername>/web_image bash
Will run bash
instead of nginx -g "daemon off;"
.
The `ENTRYPOINT` instruction
The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are appended to the entry point.
Note: you have to use the "exec" syntax (`[ "..." ]`).

```
ENTRYPOINT [ "/bin/ls" ]
```
If we were to run:
$ docker run training/ls -l
Instead of trying to run -l
, the container will run /bin/ls -l
.
The entry point can be overridden as well.
```
$ docker run -it training/ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
$ docker run -it --entrypoint bash training/ls
root@d902fb7b1fc7:/#
```
How `CMD` and `ENTRYPOINT` interact
The `CMD` and `ENTRYPOINT` instructions work best when used together.
```
ENTRYPOINT [ "nginx" ]
CMD [ "-g", "daemon off;" ]
```
The ENTRYPOINT
specifies the command to be run and the CMD
specifies its options. On the command line we can then potentially
override the options when needed.
$ docker run -d <dockerhubUsername>/web_image -t
This will override the options CMD
provided with new flags.
`ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one.
`LABEL` adds arbitrary metadata to the image.
`ARG` defines build-time variables (optional or mandatory).
`STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default).
`HEALTHCHECK` defines a command assessing the status of the container.
`SHELL` sets the default program to use for string-syntax `RUN`, `CMD`, etc.
The `ONBUILD` instruction
The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built.
This is useful for building images which will be used as a base to build other images.
ONBUILD COPY . /src
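A sketch of such a base image for Node.js apps (the image and file names are assumptions):

```
# Published as, say, mynode-onbuild (hypothetical name)
FROM node
WORKDIR /app
ONBUILD COPY package.json .
ONBUILD RUN npm install
ONBUILD COPY . .

# A downstream Dockerfile then only needs:
# FROM mynode-onbuild
# CMD ["node", "server.js"]
```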
You can't chain `ONBUILD` instructions with `ONBUILD`.
`ONBUILD` can't be used to trigger `FROM` instructions.
Container network drivers
The Docker Engine supports different network drivers.
The built-in drivers include:
bridge
(default)
null
(for the special network called none
)
host
(for the special network called host
)
container
(that one is a bit magic!)
The network is selected with docker run --net ...
.
Each network is managed by a driver.
The different drivers are explained with more details on the following slides.
By default, the container gets a virtual eth0
interface.
(In addition to its own private lo
loopback interface.)
That interface is provided by a veth
pair.
It is connected to the Docker bridge.
(Named docker0
by default; configurable with --bridge
.)
Addresses are allocated on a private, internal subnet.
(Docker uses 172.17.0.0/16 by default; configurable with --bip
.)
Outbound traffic goes through an iptables MASQUERADE rule.
Inbound traffic goes through an iptables DNAT rule.
The container can have its own routes, iptables rules, etc.
Container is started with docker run --net none ...
It only gets the lo
loopback interface. No eth0
.
It can't send or receive network traffic.
Useful for isolated/untrusted workloads.
Container is started with docker run --net host ...
It sees (and can access) the network interfaces of the host.
It can bind any address, any port (for ill and for good).
Network traffic doesn't have to go through NAT, bridge, or veth.
Performance = native!
Use cases:
Performance sensitive applications (VOIP, gaming, streaming...)
Peer discovery (e.g. Erlang port mapper, Raft, Serf...)
Container is started with docker run --net container:id ...
It re-uses the network stack of another container.
It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc.
Those containers can communicate over their lo
interface.
(i.e. one can bind to 127.0.0.1 and the others can connect to it.)
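To see the shared stack in action (a sketch; assumes the nginx and alpine images):

```
docker run -d --name web nginx
docker run --rm --net container:web alpine wget -qO- localhost
```

The second container reaches nginx over 127.0.0.1, because both containers share the same network stack.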