RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <repo>
RUN <build code>
CMD, EXPOSE ...
```

* The build fails as soon as an instruction fails

* If `RUN <unit tests>` fails, the build doesn't produce an image

* If it succeeds, it produces a clean image (without test libraries and data)

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

class: pic

.interstitial[]

---

name: toc-dockerfile-examples
class: title

Dockerfile examples

.nav[
[Previous part](#toc-tips-for-efficient-dockerfiles)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-reducing-image-size)
]

.debug[(automatically generated title slide)]

---

# Dockerfile examples

There are a number of tips, tricks, and techniques that we can use in Dockerfiles.

But sometimes, we have to use different (and even opposed) practices depending on:

- the complexity of our project,

- the programming language or framework that we are using,

- the stage of our project (early MVP vs. super-stable production),

- whether we're building a final image or a base for further images,

- etc.

We are going to show a few examples using very different techniques.

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

## When to optimize an image

When authoring official images, it is a good idea to reduce as much as possible:

- the number of layers,

- the size of the final image.

This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it.

.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
	&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
	&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \ && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \ && tar -xzf wordpress.tar.gz -C /usr/src/ \ && rm wordpress.tar.gz \ && chown -R www-data:www-data /usr/src/wordpress ``` ] (Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## When to *not* optimize an image Sometimes, it is better to prioritize *maintainer convenience*. In particular, if: - the image changes a lot, - the image has very few users (e.g. only 1, the maintainer!), - the image is built and run on the same machine, - the image is built and run on machines with a very fast link ... In these cases, just keep things simple! (Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ```dockerfile FROM debian:sid RUN apt-get update -q RUN apt-get install -yq build-essential make RUN apt-get install -yq zlib1g-dev RUN apt-get install -yq ruby ruby-dev RUN apt-get install -yq python-pygments RUN apt-get install -yq nodejs RUN apt-get install -yq cmake RUN gem install --no-rdoc --no-ri github-pages COPY . /blog WORKDIR /blog VOLUME /blog/_site EXPOSE 4000 CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"] ``` .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Multi-dimensional versioning systems Images can have a tag, indicating the version of the image. But sometimes, there are multiple important components, and we need to indicate the versions for all of them. 
This can be done with environment variables:

```dockerfile
ENV PIP=9.0.3 \
    ZC_BUILDOUT=2.11.2 \
    SETUPTOOLS=38.7.0 \
    PLONE_MAJOR=5.1 \
    PLONE_VERSION=5.1.0 \
    PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```

(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

## Entrypoints and wrappers

It is very common to define a custom entrypoint.

That entrypoint will generally be a script, performing any combination of:

- pre-flight checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file),

- generation or validation of configuration files,

- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),

- and more.

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

## A typical entrypoint script

```bash
#!/bin/sh
set -e

# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
	set -- redis-server "$@"
fi

# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
	chown -R redis .
	exec su-exec redis "$0" "$@"
fi

exec "$@"
```

(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

## Factoring information

To facilitate maintenance (and avoid human errors), avoid repeating information like:

- version numbers,

- remote asset URLs (e.g. source tarballs) ...

Instead, use environment variables.

.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \ && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \ && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \ && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \ && tar -xf "node-v$NODE_VERSION.tar.xz" \ && cd "node-v$NODE_VERSION" \ ... ``` ] (Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Overrides In theory, development and production images should be the same. In practice, we often need to enable specific behaviors in development (e.g. debug statements). One way to reconcile both needs is to use Compose to enable these behaviors. Let's look at the [trainingwheels](https://github.com/bretfisher/trainingwheels) demo app for an example. .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Production image This Dockerfile builds an image leveraging gunicorn: ```dockerfile FROM python RUN pip install flask RUN pip install gunicorn RUN pip install redis COPY . /src WORKDIR /src CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app EXPOSE 5000 ``` (Source: [trainingwheels Dockerfile](https://github.com/bretfisher/trainingwheels/blob/master/www/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Development Compose file This Compose file uses the same image, but with a few overrides for development: - the Flask development server is used (overriding `CMD`), - the `DEBUG` environment variable is set, - a volume is used to provide a faster local development workflow. 
.small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
]

(Source: [trainingwheels Compose file](https://github.com/bretfisher/trainingwheels/blob/master/docker-compose.yml))

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

## How to know which best practices are better?

- The main goal of containers is to make our lives easier.

- In this chapter, we showed many ways to write Dockerfiles.

- These Dockerfiles sometimes use diametrically opposed techniques.

- Yet, they were the "right" ones *for a specific situation.*

- It's OK (and even encouraged) to start simple and evolve as needed.

- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

???

:EN:Optimizing images
:EN:- Dockerfile tips, tricks, and best practices
:EN:- Reducing build time
:EN:- Reducing image size

:FR:Optimiser ses images
:FR:- Bonnes pratiques, trucs et astuces
:FR:- Réduire le temps de build
:FR:- Réduire la taille des images

.debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)]

---

class: pic

.interstitial[]

---

name: toc-reducing-image-size
class: title

Reducing image size

.nav[
[Previous part](#toc-dockerfile-examples)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-multi-stage-builds)
]

.debug[(automatically generated title slide)]

---

# Reducing image size

* In the previous example, our final image contained:

  * our `hello` program

  * its source code

  * the compiler

* Only the first one is strictly necessary.

* We are going to see how to obtain an image without the superfluous components.
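As a reminder, the single-stage build from the previous chapter looked roughly like this (a sketch; it matches the first stage of the multi-stage Dockerfile we will write shortly):

```dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
CMD /hello
```

Everything above `CMD` (the compiler, the package lists, `hello.c`) ends up in the final image, even though only the `hello` binary is needed at run time.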
.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Can't we remove superfluous files with `RUN`?

What happens if we do one of the following commands?

- `RUN rm -rf ...`

- `RUN apt-get remove ...`

- `RUN make clean ...`

--

This adds a layer which removes a bunch of files.

But the previous layers (which added the files) still exist.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Removing files with an extra layer

When downloading an image, all the layers must be downloaded.

| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get install somepackage` | Size of files added (e.g. a few MB) | Sum of this layer + all previous ones |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get remove somepackage` | Almost zero (just metadata) | Same as previous one |

Therefore, `RUN rm` does not reduce the size of the image or free up disk space.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Removing unnecessary files

Various techniques are available to obtain smaller images:

- collapsing layers,

- adding binaries that are built outside of the Dockerfile,

- squashing the final image,

- multi-stage builds.

Let's review them quickly.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Collapsing layers

You will frequently see Dockerfiles like this:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
``` Or the (more readable) variant: ```dockerfile FROM ubuntu RUN apt-get update \ && apt-get install xxx \ && ... \ && apt-get remove xxx \ && ... ``` This `RUN` command gives us a single layer. The files that are added, then removed in the same layer, do not grow the layer size. .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Collapsing layers: pros and cons Pros: - works on all versions of Docker - doesn't require extra tools Cons: - not very readable - some unnecessary files might still remain if the cleanup is not thorough - that layer is expensive (slow to build) .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Building binaries outside of the Dockerfile This results in a Dockerfile looking like this: ```dockerfile FROM ubuntu COPY xxx /usr/local/bin ``` Of course, this implies that the file `xxx` exists in the build context. That file has to exist before you can run `docker build`. For instance, it can: - exist in the code repository, - be created by another tool (script, Makefile...), - be created by another container image and extracted from the image. See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox). 
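For the last option (building the file in another container image), one possible sketch — the image name `builder-image` and its Dockerfile are hypothetical here:

```bash
# Build an image whose only job is to produce the binary...
docker build -t builder-image -f Dockerfile.build .

# ...then copy the binary out of a (stopped) container into the
# build context, where the final Dockerfile's COPY can find it.
docker create --name extract builder-image
docker cp extract:/usr/local/bin/xxx ./xxx
docker rm extract
```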
.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Building binaries outside: pros and cons Pros: - final image can be very small Cons: - requires an extra build tool - we're back in dependency hell and "works on my machine" Cons, if binary is added to code repository: - breaks portability across different platforms - grows repository size a lot if the binary is updated frequently .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Squashing the final image The idea is to transform the final image into a single-layer image. This can be done in (at least) two ways. - Activate experimental features and squash the final image: ```bash docker image build --squash ... ``` - Export/import the final image. ```bash docker build -t temp-image . docker run --entrypoint true --name temp-container temp-image docker export temp-container | docker import - final-image docker rm temp-container docker rmi temp-image ``` .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Squashing the image: pros and cons Pros: - single-layer images are smaller and faster to download - removed files no longer take up storage and network resources Cons: - we still need to actively remove unnecessary files - squash operation can take a lot of time (on big images) - squash operation does not benefit from cache (even if we change just a tiny file, the whole image needs to be re-squashed) .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Multi-stage builds Multi-stage builds allow us to have multiple *stages*. Each stage is a separate image, and can copy files from previous stages. 
We're going to see how they work in more detail. .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- class: pic .interstitial[] --- name: toc-multi-stage-builds class: title Multi-stage builds .nav[ [Previous part](#toc-reducing-image-size) | [Back to table of contents](#toc-part-3) | [Next part](#toc-exercise--writing-better-dockerfiles) ] .debug[(automatically generated title slide)] --- # Multi-stage builds * At any point in our `Dockerfile`, we can add a new `FROM` line. * This line starts a new stage of our build. * Each stage can access the files of the previous stages with `COPY --from=...`. * When a build is tagged (with `docker build -t ...`), the last stage is tagged. * Previous stages are not discarded: they will be used for caching, and can be referenced. .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Multi-stage builds in practice * Each stage is numbered, starting at `0` * We can copy a file from a previous stage by indicating its number, e.g.: ```dockerfile COPY --from=0 /file/from/first/stage /location/in/current/stage ``` * We can also name stages, and reference these names: ```dockerfile FROM golang AS builder RUN ... FROM alpine COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/ ``` .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Multi-stage builds for our C program We will change our Dockerfile to: * give a nickname to the first stage: `compiler` * add a second stage using the same `ubuntu` base image * add the `hello` binary to the second stage * make sure that `CMD` is in the second stage The resulting Dockerfile is on the next slide. 
.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage build `Dockerfile`

Here is the final Dockerfile:

```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello

FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```

Let's build it, and check that it works correctly:

```bash
docker build -t hellomultistage .
docker run hellomultistage
```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Comparing single/multi-stage build image sizes

List our images with `docker images`, and check the size of:

- the `ubuntu` base image,

- the single-stage `hello` image,

- the multi-stage `hellomultistage` image.

We can achieve even smaller images if we use smaller base images.

However, if we use common base images (e.g. if we standardize on `ubuntu`), these common images will be pulled only once per node, so they are virtually "free."

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Build targets

* We can also tag an intermediary stage with the following command:

  ```bash
  docker build --target STAGE --tag NAME .
  ```

* This will create an image (named `NAME`) corresponding to stage `STAGE`

* This can be used to easily access an intermediary stage for inspection

  (instead of parsing the output of `docker build` to find out the image ID)

* This can also be used to describe multiple images from a single Dockerfile

  (instead of using multiple Dockerfiles, which could go out of sync)

???
:EN:Optimizing our images and their build process :EN:- Leveraging multi-stage builds :FR:Optimiser les images et leur construction :FR:- Utilisation d'un *multi-stage build* .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-better-dockerfiles class: title Exercise — writing better Dockerfiles .nav[ [Previous part](#toc-multi-stage-builds) | [Back to table of contents](#toc-part-3) | [Next part](#toc-getting-inside-a-container) ] .debug[(automatically generated title slide)] --- # Exercise — writing better Dockerfiles Let's update our Dockerfiles to leverage multi-stage builds! The code is at: https://github.com/jpetazzo/wordsmith Use a different tag for these images, so that we can compare their sizes. What's the size difference between single-stage and multi-stage builds? .debug[[containers/Exercise_Dockerfile_Advanced.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Exercise_Dockerfile_Advanced.md)] --- class: pic .interstitial[] --- name: toc-getting-inside-a-container class: title Getting inside a container .nav[ [Previous part](#toc-exercise--writing-better-dockerfiles) | [Back to table of contents](#toc-part-3) | [Next part](#toc-restarting-and-attaching-to-containers) ] .debug[(automatically generated title slide)] --- class: title # Getting inside a container  .debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)] --- ## Objectives On a traditional server or VM, we sometimes need to: * log into the machine (with SSH or on the console), * analyze the disks (by removing them or rebooting with a rescue system). In this chapter, we will see how to do that with containers. 
.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Getting a shell

Every once in a while, we want to log into a machine.

In a perfect world, this shouldn't be necessary.

* You need to install or update packages (and their configuration)?

  Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)

* You need to view logs and metrics?

  Collect and access them through a centralized platform.

In the real world, though ... we often need shell access!

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Not getting a shell

Even without a perfect deployment system, we can do many operations without getting a shell.

* Installing packages can (and should) be done in the container image.

* Configuration can be done at the image level, or when the container starts.

* Dynamic configuration can be stored in a volume (shared with another container).

* Logs written to stdout are automatically collected by the Docker Engine.

* Other logs can be written to a shared volume.

* Process information and metrics are visible from the host.

_Let's save logging, volumes ... for later, but let's have a look at process information!_

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Viewing container processes from the host

If you run Docker on Linux, container processes are visible on the host.

```bash
$ ps faux | less
```

* Scroll around the output of this command.

* You should see the `jpetazzo/clock` container.

* A containerized process is just like any other process on the host.

* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.
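To try it, a quick sketch (assuming the `jpetazzo/clock` container from earlier is running under the name `ticktock`):

```bash
# View the whole process tree on the host; the containerized
# clock process appears like any other process.
ps faux | less

# Docker can also map a container to its processes directly:
docker top ticktock
```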
.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

class: extra-details

## What's the difference between a container process and a host process?

* Each process (containerized or not) belongs to *namespaces* and *cgroups*.

* The namespaces and cgroups determine what a process can "see" and "do".

* Analogy: each process (containerized or not) runs with a specific UID (user ID).

* UID=0 is root, and has elevated privileges. Other UIDs are normal users.

_We will give more details about namespaces and cgroups later._

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a running container

* Sometimes, we need to get a shell anyway.

* We _could_ run some SSH server in the container ...

* But it is easier to use `docker exec`.

```bash
$ docker exec -ti ticktock sh
```

* This creates a new process (running `sh`) _inside_ the container.

* This can also be done "manually" with the tool `nsenter`.

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Caveats

* The tool that you want to run needs to exist in the container.

* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.

  (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)

* Most importantly: the container needs to be running.

* What if the container is stopped or crashed?

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a stopped container

* A stopped container is only _storage_ (like a disk drive).

* We cannot SSH into a disk drive or USB stick!

* We need to connect the disk to a running machine.

* How does that translate into the container world?
.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Analyzing a stopped container

As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.

```bash
docker run jpetazzo/crashtest
```

The container starts, but then stops immediately, without any output.

What would MacGyver™ do?

First, let's check the status of that container.

```bash
docker ps -l
```

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Viewing filesystem changes

* We can use `docker diff` to see files that were added / changed / removed.

```bash
docker diff <container_id>
```

* The container ID was shown by `docker ps -l`.

* We can also see it with `docker ps -lq`.

* The output of `docker diff` shows some interesting log files!

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Accessing files

* We can extract files with `docker cp`.

```bash
docker cp <container_id>:/var/log/nginx/error.log .
```

* Then we can look at that log file.

```bash
cat error.log
```

(The directory `/run/nginx` doesn't exist.)

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Exploring a crashed container

* We can restart a container with `docker start` ...

* ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start`

* But we can create a new image from the crashed container

```bash
docker commit <container_id> debugimage
```

* Then we can run a new container from that image, with a custom entrypoint

```bash
docker run -ti --entrypoint sh debugimage
```

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

class: extra-details

## Obtaining a complete dump

* We can also dump the entire filesystem of a container.

* This is done with `docker export`.

* It generates a tar archive.

```bash
docker export <container_id> | tar tv
```

This will give a detailed listing of the content of the container.

???

:EN:- Troubleshooting and getting inside a container
:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

class: pic

.interstitial[]

---

name: toc-restarting-and-attaching-to-containers
class: title

Restarting and attaching to containers

.nav[
[Previous part](#toc-getting-inside-a-container)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-naming-and-inspecting-containers)
]

.debug[(automatically generated title slide)]

---

# Restarting and attaching to containers

We have started containers in the foreground, and in the background.

In this chapter, we will see how to:

* Put a container in the background.

* Attach to a background container to bring it to the foreground.

* Restart a stopped container.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Background and foreground

The distinction between foreground and background containers is arbitrary.

From Docker's point of view, all containers are the same.

All containers run the same way, whether there is a client attached to them or not.
It is always possible to detach from a container, and to reattach to a container. Analogy: attaching to a container is like plugging a keyboard and screen to a physical server. .debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)] --- ## Detaching from a container (Linux/macOS) * If you have started an *interactive* container (with option `-it`), you can detach from it. * The "detach" sequence is `^P^Q`. * Otherwise you can detach by killing the Docker client. (But not by hitting `^C`, as this would deliver `SIGINT` to the container.) What does `-it` stand for? * `-t` means "allocate a terminal." * `-i` means "connect stdin to the terminal." .debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)] --- ## Detaching cont. (Win PowerShell and cmd.exe) * Docker for Windows has a different detach experience due to shell features. * `^P^Q` does not work. * `^C` will detach, rather than stop the container. * Using Bash, Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells. * Both PowerShell and Bash work well in Win 10; just be aware of differences. .debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)] --- class: extra-details ## Specifying a custom detach sequence * You don't like `^P^Q`? No problem! * You can change the sequence with `docker run --detach-keys`. * This can also be passed as a global option to the engine. Start a container with a custom detach command: ```bash $ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock ``` Detach by hitting `^X x`. (This is ctrl-x then x, not ctrl-x twice!) 
Check that our container is still running:

```bash
$ docker ps -l
```

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## Attaching to a container

You can attach to a container:

```bash
$ docker attach <containerID>
```

* The container must be running.

* There *can* be multiple clients attached to the same container.

* If you don't specify `--detach-keys` when attaching, it defaults back to `^P^Q`.

Try it on our previous container:

```bash
$ docker attach $(docker ps -lq)
```

Check that `^X x` doesn't work, but `^P ^Q` does.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Detaching from non-interactive containers

* **Warning:** if the container was started without `-it`...

  * You won't be able to detach with `^P^Q`.

  * If you hit `^C`, the signal will be proxied to the container.

* Remember: you can always detach by killing the Docker client.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Checking container output

* Use `docker attach` if you intend to send input to the container.

* If you just want to see the output of a container, use `docker logs`.

```bash
$ docker logs --tail 1 --follow <containerID>
```

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Restarting a container

When a container has exited, it is in stopped state.

It can then be restarted with the `start` command.

```bash
$ docker start <containerID>
```

The container will be restarted using the same options you launched it with.

You can re-attach to it if you want to interact with it:

```bash
$ docker attach <containerID>
```

Use `docker ps -a` to identify the container ID of a previous `jpetazzo/clock` container, and try those commands.
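Put together, the sequence could look like this (a sketch; it assumes at least one `jpetazzo/clock` container was created earlier):

```bash
# Grab the ID of the most recent jpetazzo/clock container...
CID=$(docker ps -aq --filter ancestor=jpetazzo/clock | head -n1)

# ...restart it (with the same options it was launched with)...
docker start $CID

# ...and re-attach to it (detach again with ^P^Q).
docker attach $CID
```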
.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Attaching to a REPL

* REPL = Read Eval Print Loop

* Shells, interpreters, TUI ...

* Symptom: you `docker attach`, and see nothing

* The REPL doesn't know that you just attached, and doesn't print anything

* Try hitting `^L` or `Enter`

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## SIGWINCH

* When you `docker attach`, the Docker Engine sends SIGWINCH signals to the container.

* SIGWINCH = WINdow CHange; indicates a change in window size.

* This will cause some CLI and TUI programs to redraw the screen.

* But not all of them.

???

:EN:- Restarting old containers
:EN:- Detaching and reattaching to container
:FR:- Redémarrer des anciens conteneurs
:FR:- Se détacher et rattacher à des conteneurs

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: pic

.interstitial[]

---

name: toc-naming-and-inspecting-containers
class: title

Naming and inspecting containers

.nav[
[Previous part](#toc-restarting-and-attaching-to-containers)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-labels)
]

.debug[(automatically generated title slide)]

---

class: title

# Naming and inspecting containers

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Objectives

In this lesson, we will learn about an important Docker concept: container *naming*.

Naming allows us to:

* Easily reference a container.

* Ensure the uniqueness of a specific container.

We will also see the `inspect` command, which gives a lot of details about a container.
.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name. If a container is named `thumbnail-worker`, I can do: ```bash $ docker logs thumbnail-worker $ docker stop thumbnail-worker etc. ``` .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce the uniqueness of a given resource. .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map. ```bash $ docker inspect <containerID> [{ ... (many pages of JSON here) ... 
``` There are multiple ways to consume that information. .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use JQ! (It's great.) ```bash $ docker inspect <containerID> | jq . ``` * We will see a better solution which doesn't require extra tools. .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- ## Using `--format` You can specify a format string, which will be parsed by Go's text/template package. ```bash $ docker inspect --format '{{ json .Created }}' <containerID> "2015-02-24T07:21:11.712240394Z" ``` * The generic syntax is to wrap the expression with double curly braces. * The expression starts with a dot representing the JSON object. * Then each field or member can be accessed in dotted notation syntax. * The optional `json` keyword asks for valid JSON output. (e.g. here it adds the surrounding double-quotes.) ??? :EN:Managing container lifecycle :EN:- Naming and inspecting containers :FR:Suivre ses conteneurs à la loupe :FR:- Obtenir des informations détaillées sur un conteneur :FR:- Associer un identifiant unique à un conteneur .debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)] --- class: pic .interstitial[] --- name: toc-labels class: title Labels .nav[ [Previous part](#toc-naming-and-inspecting-containers) | [Back to table of contents](#toc-part-3) | [Next part](#toc-advanced-dockerfile-syntax) ] .debug[(automatically generated title slide)] --- # Labels * Labels let us attach arbitrary metadata to containers. * Labels are key/value pairs. * They are specified at container creation. 
* You can query them with `docker inspect`. * They can also be used as filters with some commands (e.g. `docker ps`). .debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)] --- ## Using labels Let's create a few containers with a label `owner`. ```bash docker run -d -l owner=alice nginx docker run -d -l owner=bob nginx docker run -d -l owner nginx ``` We didn't specify a value for the `owner` label in the last example. This is equivalent to setting the value to an empty string. .debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)] --- ## Querying labels We can view the labels with `docker inspect`. ```bash $ docker inspect $(docker ps -lq) | grep -A3 Labels "Labels": { "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>", "owner": "" }, ``` We can use the `--format` flag to list the value of a label. ```bash $ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}' ``` .debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)] --- ## Using labels to select containers We can list containers having a specific label. ```bash $ docker ps --filter label=owner ``` Or we can list containers having a specific label with a specific value. ```bash $ docker ps --filter label=owner=alice ``` .debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)] --- ## Use-cases for labels * HTTP vhost of a web app or web service. (The label is used to generate the configuration for NGINX, HAProxy, etc.) * Backup schedule for a stateful service. (The label is used by a cron job to determine if/when to backup container data.) * Service ownership. (To determine internal cross-billing, or who to page in case of outage.) * etc. ??? 
:EN:- Using labels to identify containers :FR:- Étiqueter ses conteneurs avec des méta-données .debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)] --- class: pic .interstitial[] --- name: toc-advanced-dockerfile-syntax class: title Advanced Dockerfile Syntax .nav[ [Previous part](#toc-labels) | [Back to table of contents](#toc-part-3) | [Next part](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Advanced Dockerfile Syntax .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## Objectives We have seen simple Dockerfiles to illustrate how Docker builds container images. In this section, we will give a recap of the Dockerfile syntax, and introduce advanced Dockerfile commands that we might come across from time to time, or that we might want to use in specific scenarios. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## `Dockerfile` usage summary * `Dockerfile` instructions are executed in order. * Each instruction creates a new layer in the image. * Docker maintains a cache with the layers of previous builds. * When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer. * The `FROM` instruction MUST be the first non-comment instruction. * Lines starting with `#` are treated as comments. * Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata. (As a result, each call to these instructions makes the previous one useless.) 
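These rules can be illustrated with a small Dockerfile (a sketch; the image, package, and script names are placeholders):

```dockerfile
# Lines starting with "#" are comments.
# FROM must be the first non-comment instruction.
FROM ubuntu
# Each of the following instructions creates a new layer;
# if neither the instruction nor the files it uses changed,
# the builder re-uses the cached layer.
RUN apt-get update && apt-get install -y wget
COPY . /src
# CMD only updates metadata; if CMD appeared twice in this file,
# only the last occurrence would take effect.
CMD ["/src/run.sh"]
```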
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `RUN` instruction The `RUN` instruction can be specified in two ways. With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`: ```dockerfile RUN apt-get update ``` Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`: ```dockerfile RUN [ "apt-get", "update" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `RUN` instruction `RUN` will do the following: * Execute a command. * Record changes made to the filesystem. * Work great to install libraries, packages, and various files. `RUN` will NOT do the following: * Record state of *processes*. * Automatically start daemons. If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## Collapsing layers It is possible to execute multiple commands in a single step: ```dockerfile RUN apt-get update && apt-get install -y wget && apt-get clean ``` It is also possible to break a command onto multiple lines: ```dockerfile RUN apt-get update \ && apt-get install -y wget \ && apt-get clean ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `EXPOSE` instruction The `EXPOSE` instruction tells Docker what ports are to be published in this image. ```dockerfile EXPOSE 8080 EXPOSE 80 443 EXPOSE 53/tcp 53/udp ``` * All ports are private by default. * Declaring a port with `EXPOSE` is not enough to make it public. 
* The `Dockerfile` doesn't control on which port a service gets exposed. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## Exposing ports * When you `docker run -p ...`, that port becomes public. (Even if it was not declared with `EXPOSE`.) * When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public. A *public port* is reachable from other containers and from outside the host. A *private port* is not reachable from outside. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `COPY` instruction The `COPY` instruction adds files and content from your host into the image. ```dockerfile COPY . /src ``` This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## Build context isolation Note: you can only reference files and directories *inside* the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent: ```dockerfile COPY . /src COPY / /src ``` Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail. Otherwise, a `Dockerfile` could succeed on host A, but fail on host B. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD` `ADD` works almost like `COPY`, but has a few extra features. `ADD` can get remote files: ```dockerfile ADD http://www.example.com/webapp.jar /opt/ ``` This would download the `webapp.jar` file and place it in the `/opt` directory. 
`ADD` will automatically unpack local tar archives (including compressed ones, but not zip files): ```dockerfile ADD ./assets.tar.gz /var/www/htdocs/assets/ ``` This would unpack `assets.tar.gz` into `/var/www/htdocs/assets`. *However,* `ADD` will not automatically unpack remote archives. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD`, `COPY`, and the build cache * Before creating a new layer, Docker checks its build cache. * For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup. * For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed. * `ADD` always needs to download the remote file before it can check if it has been changed. (It cannot use, e.g., ETags or If-Modified-Since headers.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## `VOLUME` The `VOLUME` instruction tells Docker that a specific directory should be a *volume*. ```dockerfile VOLUME /var/lib/mysql ``` Filesystem access in volumes bypasses the copy-on-write layer, offering native performance to I/O done in those directories. Volumes can be attached to multiple containers, allowing data to be "ported" over from one container to another, e.g. to upgrade a database to a newer version. It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `WORKDIR` instruction The `WORKDIR` instruction sets the working directory for subsequent instructions. It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container. 
```dockerfile WORKDIR /src ``` You can specify `WORKDIR` again to change the working directory for further operations. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENV` instruction The `ENV` instruction specifies environment variables that should be set in any container launched from the image. ```dockerfile ENV WEBAPP_PORT 8080 ``` This will result in the following environment variable being created in any container launched from this image: ```bash WEBAPP_PORT=8080 ``` You can also specify environment variables when you use `docker run`. ```bash $ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ... ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `USER` instruction The `USER` instruction sets the user name or UID to use when running the image. It can be used multiple times to change back to root or to another user. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `CMD` instruction The `CMD` instruction is a default command run when a container is launched from the image. ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` Means we don't need to specify `nginx -g "daemon off;"` when running the container. Instead of: ```bash $ docker run <username>/web_image nginx -g "daemon off;" ``` We can just do: ```bash $ docker run <username>/web_image ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `CMD` instruction Just like `RUN`, the `CMD` instruction comes in two forms. 
The first executes in a shell: ```dockerfile CMD nginx -g "daemon off;" ``` The second executes directly, without shell processing: ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `CMD` instruction The `CMD` can be overridden when you run a container. ```bash $ docker run -it <username>/web_image bash ``` Will run `bash` instead of `nginx -g "daemon off;"`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENTRYPOINT` instruction The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point. Note: you have to use the "exec" syntax (`[ "..." ]`). ```dockerfile ENTRYPOINT [ "/bin/ls" ] ``` If we were to run: ```bash $ docker run training/ls -l ``` Instead of trying to run `-l`, the container will run `/bin/ls -l`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `ENTRYPOINT` instruction The entry point can be overridden as well. ```bash $ docker run -it training/ls bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr $ docker run -it --entrypoint bash training/ls root@d902fb7b1fc7:/# ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## How `CMD` and `ENTRYPOINT` interact The `CMD` and `ENTRYPOINT` instructions work best when used together. ```dockerfile ENTRYPOINT [ "nginx" ] CMD [ "-g", "daemon off;" ] ``` The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. 
On the command line we can then potentially override the options when needed. ```bash $ docker run -d <username>/web_image -t ``` This will override the options `CMD` provided with new flags. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## Advanced Dockerfile instructions * `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one. * `LABEL` adds arbitrary metadata to the image. * `ARG` defines build-time variables (optional or mandatory). * `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default). * `HEALTHCHECK` defines a command assessing the status of the container. * `SHELL` sets the default program to use for string-syntax RUN, CMD, etc. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## The `ONBUILD` instruction The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built. This is useful for building images which will be used as a base to build other images. ```dockerfile ONBUILD COPY . /src ``` * You can't chain `ONBUILD` instructions with `ONBUILD`. * `ONBUILD` can't be used to trigger `FROM` instructions. ??? :EN:- Advanced Dockerfile syntax :FR:- Dockerfile niveau expert .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- class: pic .interstitial[] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous part](#toc-advanced-dockerfile-syntax) | [Back to table of contents](#toc-part-3) | [Next part](#toc-) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports different network drivers. 
The built-in drivers include: * `bridge` (default) * `null` (for the special network called `none`) * `host` (for the special network called `host`) * `container` (that one is a bit magic!) The network is selected with `docker run --net ...`. Each network is managed by a driver. The different drivers are explained in more detail on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet. (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! Use cases: * Performance-sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. 
Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares the same interfaces, IP address(es), routes, iptables rules, etc., with that other container. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) ??? :EN:Advanced container networking :EN:- Transparent network access with the "host" driver :EN:- Sharing is caring with the "container" driver :FR:Paramétrage réseau avancé :FR:- Accès transparent au réseau avec le mode "host" :FR:- Partage de la pile réseau avec le mode "container" .debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)]
RUN CMD, EXPOSE ... ``` * The build fails as soon as an instruction fails * If a `RUN` instruction fails, the build doesn't produce an image * If it succeeds, it produces a clean image (without test libraries and data) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[] --- name: toc-dockerfile-examples class: title Dockerfile examples .nav[ [Previous part](#toc-tips-for-efficient-dockerfiles) | [Back to table of contents](#toc-part-3) | [Next part](#toc-reducing-image-size) ] .debug[(automatically generated title slide)] --- # Dockerfile examples There are a number of tips, tricks, and techniques that we can use in Dockerfiles. But sometimes, we have to use different (and even opposed) practices depending on: - the complexity of our project, - the programming language or framework that we are using, - the stage of our project (early MVP vs. super-stable production), - whether we're building a final image or a base for further images, - etc. We are going to show a few examples using very different techniques. .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## When to optimize an image When authoring official images, it is a good idea to reduce as much as possible: - the number of layers, - the size of the final image. This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it. .small[ ```dockerfile RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \ && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \ && docker-php-ext-install gd ... 
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \ && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \ && tar -xzf wordpress.tar.gz -C /usr/src/ \ && rm wordpress.tar.gz \ && chown -R www-data:www-data /usr/src/wordpress ``` ] (Source: [WordPress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## When to *not* optimize an image Sometimes, it is better to prioritize *maintainer convenience*. In particular, if: - the image changes a lot, - the image has very few users (e.g. only 1, the maintainer!), - the image is built and run on the same machine, - the image is built and run on machines with a very fast link ... In these cases, just keep things simple! (Next slide: a Dockerfile that can be used to preview a Jekyll / GitHub Pages site.) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ```dockerfile FROM debian:sid RUN apt-get update -q RUN apt-get install -yq build-essential make RUN apt-get install -yq zlib1g-dev RUN apt-get install -yq ruby ruby-dev RUN apt-get install -yq python-pygments RUN apt-get install -yq nodejs RUN apt-get install -yq cmake RUN gem install --no-rdoc --no-ri github-pages COPY . /blog WORKDIR /blog VOLUME /blog/_site EXPOSE 4000 CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"] ``` .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Multi-dimensional versioning systems Images can have a tag, indicating the version of the image. But sometimes, there are multiple important components, and we need to indicate the versions for all of them. 
This can be done with environment variables: ```dockerfile ENV PIP=9.0.3 \ ZC_BUILDOUT=2.11.2 \ SETUPTOOLS=38.7.0 \ PLONE_MAJOR=5.1 \ PLONE_VERSION=5.1.0 \ PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d ``` (Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Entrypoints and wrappers It is very common to define a custom entrypoint. That entrypoint will generally be a script, performing any combination of: - pre-flight checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file), - generation or validation of configuration files, - dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`), - and more. .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## A typical entrypoint script ```sh #!/bin/sh set -e # first arg is '-f' or '--some-option' # or first arg is 'something.conf' if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then set -- redis-server "$@" fi # allow the container to be started with '--user' if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then chown -R redis . exec su-exec redis "$0" "$@" fi exec "$@" ``` (Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Factoring information To facilitate maintenance (and avoid human errors), avoid repeating information like: - version numbers, - remote asset URLs (e.g. source tarballs) ... Instead, use environment variables. .small[ ```dockerfile ENV NODE_VERSION 10.2.1 ... RUN ... 
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \ && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \ && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \ && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \ && tar -xf "node-v$NODE_VERSION.tar.xz" \ && cd "node-v$NODE_VERSION" \ ... ``` ] (Source: [Node.js official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Overrides In theory, development and production images should be the same. In practice, we often need to enable specific behaviors in development (e.g. debug statements). One way to reconcile both needs is to use Compose to enable these behaviors. Let's look at the [trainingwheels](https://github.com/bretfisher/trainingwheels) demo app for an example. .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Production image This Dockerfile builds an image leveraging gunicorn: ```dockerfile FROM python RUN pip install flask RUN pip install gunicorn RUN pip install redis COPY . /src WORKDIR /src CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app EXPOSE 5000 ``` (Source: [trainingwheels Dockerfile](https://github.com/bretfisher/trainingwheels/blob/master/www/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## Development Compose file This Compose file uses the same image, but with a few overrides for development: - the Flask development server is used (overriding `CMD`), - the `DEBUG` environment variable is set, - a volume is used to provide a faster local development workflow. 
.small[ ```yaml services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src ``` ] (Source: [trainingwheels Compose file](https://github.com/bretfisher/trainingwheels/blob/master/docker-compose.yml)) .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- ## How to know which best practices are better? - The main goal of containers is to make our lives easier. - In this chapter, we showed many ways to write Dockerfiles. - These Dockerfiles sometimes use diametrically opposed techniques. - Yet, they were the "right" ones *for a specific situation.* - It's OK (and even encouraged) to start simple and evolve as needed. - Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration! ??? :EN:Optimizing images :EN:- Dockerfile tips, tricks, and best practices :EN:- Reducing build time :EN:- Reducing image size :FR:Optimiser ses images :FR:- Bonnes pratiques, trucs et astuces :FR:- Réduire le temps de build :FR:- Réduire la taille des images .debug[[containers/Dockerfile_Tips.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[] --- name: toc-reducing-image-size class: title Reducing image size .nav[ [Previous part](#toc-dockerfile-examples) | [Back to table of contents](#toc-part-3) | [Next part](#toc-multi-stage-builds) ] .debug[(automatically generated title slide)] --- # Reducing image size * In the previous example, our final image contained: * our `hello` program * its source code * the compiler * Only the first one is strictly necessary. * We are going to see how to obtain an image without the superfluous components. 
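As a reminder, such an image could have been produced by a naive Dockerfile along these lines (a sketch; the exact file names in the previous example may differ):

```dockerfile
FROM ubuntu
# The compiler ends up in a layer of the image...
RUN apt-get update && apt-get install -y build-essential
# ...and so does the source code...
COPY hello.c /src/hello.c
# ...even though only the compiled binary is needed at run time.
RUN gcc -o /usr/local/bin/hello /src/hello.c
CMD ["hello"]
```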
.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Can't we remove superfluous files with `RUN`? What happens if we do one of the following commands? - `RUN rm -rf ...` - `RUN apt-get remove ...` - `RUN make clean ...` -- This adds a layer which removes a bunch of files. But the previous layers (which added the files) still exist. .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Removing files with an extra layer When downloading an image, all the layers must be downloaded. | Dockerfile instruction | Layer size | Image size | | ---------------------- | ---------- | ---------- | | `FROM ubuntu` | Size of base image | Size of base image | | `...` | ... | Sum of this layer + all previous ones | | `RUN apt-get install somepackage` | Size of files added (e.g. a few MB) | Sum of this layer + all previous ones | | `...` | ... | Sum of this layer + all previous ones | | `RUN apt-get remove somepackage` | Almost zero (just metadata) | Same as previous one | Therefore, `RUN rm` does not reduce the size of the image or free up disk space. .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Removing unnecessary files Various techniques are available to obtain smaller images: - collapsing layers, - adding binaries that are built outside of the Dockerfile, - squashing the final image, - multi-stage builds. Let's review them quickly. .debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)] --- ## Collapsing layers You will frequently see Dockerfiles like this: ```dockerfile FROM ubuntu RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ... 
```

Or the (more readable) variant:

```dockerfile
FROM ubuntu
RUN apt-get update \
 && apt-get install xxx \
 && ... \
 && apt-get remove xxx \
 && ...
```

This `RUN` command gives us a single layer.

The files that are added, then removed in the same layer, do not grow the layer size.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Collapsing layers: pros and cons

Pros:

- works on all versions of Docker

- doesn't require extra tools

Cons:

- not very readable

- some unnecessary files might still remain if the cleanup is not thorough

- that layer is expensive (slow to build)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Building binaries outside of the Dockerfile

This results in a Dockerfile looking like this:

```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```

Of course, this implies that the file `xxx` exists in the build context.

That file has to exist before you can run `docker build`.

For instance, it can:

- exist in the code repository,

- be created by another tool (script, Makefile...),

- be created by another container image and extracted from the image.

See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
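For instance, here is a sketch of the "extract from another image" approach. (The `mybuilder` image and the `/hello` binary are hypothetical, for illustration only.)

```dockerfile
# Hypothetical example: "mybuilder" is an image that already contains
# a compiled /hello binary. On the host, we would extract it first:
#
#   docker create --name tmp mybuilder
#   docker cp tmp:/hello ./hello
#   docker rm tmp
#
# ...and then COPY the extracted binary into a minimal image:
FROM ubuntu
COPY hello /usr/local/bin/hello
CMD ["hello"]
```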
.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Building binaries outside: pros and cons

Pros:

- final image can be very small

Cons:

- requires an extra build tool

- we're back in dependency hell and "works on my machine"

Cons, if binary is added to code repository:

- breaks portability across different platforms

- grows repository size a lot if the binary is updated frequently

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Squashing the final image

The idea is to transform the final image into a single-layer image.

This can be done in (at least) two ways.

- Activate experimental features and squash the final image:
  ```bash
  docker image build --squash ...
  ```

- Export/import the final image.
  ```bash
  docker build -t temp-image .
  docker run --entrypoint true --name temp-container temp-image
  docker export temp-container | docker import - final-image
  docker rm temp-container
  docker rmi temp-image
  ```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Squashing the image: pros and cons

Pros:

- single-layer images are smaller and faster to download

- removed files no longer take up storage and network resources

Cons:

- we still need to actively remove unnecessary files

- squash operation can take a lot of time (on big images)

- squash operation does not benefit from cache
  (even if we change just a tiny file, the whole image needs to be re-squashed)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage builds

Multi-stage builds allow us to have multiple *stages*.

Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

class: pic

.interstitial[]

---

name: toc-multi-stage-builds
class: title

Multi-stage builds

.nav[
[Previous part](#toc-reducing-image-size)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-exercise--writing-better-dockerfiles)
]

.debug[(automatically generated title slide)]

---

# Multi-stage builds

* At any point in our `Dockerfile`, we can add a new `FROM` line.

* This line starts a new stage of our build.

* Each stage can access the files of the previous stages with `COPY --from=...`.

* When a build is tagged (with `docker build -t ...`), the last stage is tagged.

* Previous stages are not discarded: they will be used for caching, and can be referenced.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage builds in practice

* Each stage is numbered, starting at `0`

* We can copy a file from a previous stage by indicating its number, e.g.:

  ```dockerfile
  COPY --from=0 /file/from/first/stage /location/in/current/stage
  ```

* We can also name stages, and reference these names:

  ```dockerfile
  FROM golang AS builder
  RUN ...
  FROM alpine
  COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
  ```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage builds for our C program

We will change our Dockerfile to:

* give a nickname to the first stage: `compiler`

* add a second stage using the same `ubuntu` base image

* add the `hello` binary to the second stage

* make sure that `CMD` is in the second stage

The resulting Dockerfile is on the next slide.
.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage build `Dockerfile`

Here is the final Dockerfile:

```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```

Let's build it, and check that it works correctly:

```bash
docker build -t hellomultistage .
docker run hellomultistage
```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Comparing single/multi-stage build image sizes

List our images with `docker images`, and check the size of:

- the `ubuntu` base image,

- the single-stage `hello` image,

- the multi-stage `hellomultistage` image.

We can achieve even smaller images if we use smaller base images.

However, if we use common base images (e.g. if we standardize on `ubuntu`), these common images will be pulled only once per node, so they are virtually "free."

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

## Build targets

* We can also tag an intermediary stage with the following command:
  ```bash
  docker build --target STAGE --tag NAME
  ```

* This will create an image (named `NAME`) corresponding to stage `STAGE`

* This can be used to easily access an intermediary stage for inspection
  (instead of parsing the output of `docker build` to find out the image ID)

* This can also be used to describe multiple images from a single Dockerfile
  (instead of using multiple Dockerfiles, which could go out of sync)

???
:EN:Optimizing our images and their build process
:EN:- Leveraging multi-stage builds

:FR:Optimiser les images et leur construction
:FR:- Utilisation d'un *multi-stage build*

.debug[[containers/Multi_Stage_Builds.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Multi_Stage_Builds.md)]

---

class: pic

.interstitial[]

---

name: toc-exercise--writing-better-dockerfiles
class: title

Exercise — writing better Dockerfiles

.nav[
[Previous part](#toc-multi-stage-builds)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-getting-inside-a-container)
]

.debug[(automatically generated title slide)]

---

# Exercise — writing better Dockerfiles

Let's update our Dockerfiles to leverage multi-stage builds!

The code is at:
https://github.com/jpetazzo/wordsmith

Use a different tag for these images, so that we can compare their sizes.

What's the size difference between single-stage and multi-stage builds?

.debug[[containers/Exercise_Dockerfile_Advanced.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Exercise_Dockerfile_Advanced.md)]

---

class: pic

.interstitial[]

---

name: toc-getting-inside-a-container
class: title

Getting inside a container

.nav[
[Previous part](#toc-exercise--writing-better-dockerfiles)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-restarting-and-attaching-to-containers)
]

.debug[(automatically generated title slide)]

---

class: title

# Getting inside a container

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Objectives

On a traditional server or VM, we sometimes need to:

* log into the machine (with SSH or on the console),

* analyze the disks (by removing them or rebooting with a rescue system).

In this chapter, we will see how to do that with containers.
.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Getting a shell

Every once in a while, we want to log into a machine.

In a perfect world, this shouldn't be necessary.

* You need to install or update packages (and their configuration)?

  Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)

* You need to view logs and metrics?

  Collect and access them through a centralized platform.

In the real world, though ... we often need shell access!

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Not getting a shell

Even without a perfect deployment system, we can do many operations without getting a shell.

* Installing packages can (and should) be done in the container image.

* Configuration can be done at the image level, or when the container starts.

* Dynamic configuration can be stored in a volume (shared with another container).

* Logs written to stdout are automatically collected by the Docker Engine.

* Other logs can be written to a shared volume.

* Process information and metrics are visible from the host.

_Let's save logging, volumes ... for later, but let's have a look at process information!_

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Viewing container processes from the host

If you run Docker on Linux, container processes are visible on the host.

```bash
$ ps faux | less
```

* Scroll around the output of this command.

* You should see the `jpetazzo/clock` container.

* A containerized process is just like any other process on the host.

* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.
.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

class: extra-details

## What's the difference between a container process and a host process?

* Each process (containerized or not) belongs to *namespaces* and *cgroups*.

* The namespaces and cgroups determine what a process can "see" and "do".

* Analogy: each process (containerized or not) runs with a specific UID (user ID).

* UID=0 is root, and has elevated privileges. Other UIDs are normal users.

_We will give more details about namespaces and cgroups later._

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a running container

* Sometimes, we need to get a shell anyway.

* We _could_ run some SSH server in the container ...

* But it is easier to use `docker exec`.

```bash
$ docker exec -ti ticktock sh
```

* This creates a new process (running `sh`) _inside_ the container.

* This can also be done "manually" with the tool `nsenter`.

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Caveats

* The tool that you want to run needs to exist in the container.

* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.

  (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)

* Most importantly: the container needs to be running.

* What if the container is stopped or crashed?

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a stopped container

* A stopped container is only _storage_ (like a disk drive).

* We cannot SSH into a disk drive or USB stick!

* We need to connect the disk to a running machine.

* How does that translate into the container world?
.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Analyzing a stopped container

As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.

```bash
docker run jpetazzo/crashtest
```

The container starts, but then stops immediately, without any output.

What would MacGyver™ do?

First, let's check the status of that container.

```bash
docker ps -l
```

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Viewing filesystem changes

* We can use `docker diff` to see files that were added / changed / removed.

```bash
docker diff <container_id>
```

* The container ID was shown by `docker ps -l`.

* We can also see it with `docker ps -lq`.

* The output of `docker diff` shows some interesting log files!

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Accessing files

* We can extract files with `docker cp`.

```bash
docker cp <container_id>:/var/log/nginx/error.log .
```

* Then we can look at that log file.

```bash
cat error.log
```

(The directory `/run/nginx` doesn't exist.)

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

## Exploring a crashed container

* We can restart a container with `docker start` ...

* ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start`

* But we can create a new image from the crashed container

```bash
docker commit <container_id> debugimage
```

* Then we can run a new container from that image, with a custom entrypoint

```bash
docker run -ti --entrypoint sh debugimage
```

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

class: extra-details

## Obtaining a complete dump

* We can also dump the entire filesystem of a container.

* This is done with `docker export`.

* It generates a tar archive.

```bash
docker export <container_id> | tar tv
```

This will give a detailed listing of the content of the container.

???

:EN:- Troubleshooting and getting inside a container
:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*

.debug[[containers/Getting_Inside.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Getting_Inside.md)]

---

class: pic

.interstitial[]

---

name: toc-restarting-and-attaching-to-containers
class: title

Restarting and attaching to containers

.nav[
[Previous part](#toc-getting-inside-a-container)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-naming-and-inspecting-containers)
]

.debug[(automatically generated title slide)]

---

# Restarting and attaching to containers

We have started containers in the foreground, and in the background.

In this chapter, we will see how to:

* Put a container in the background.

* Attach to a background container to bring it to the foreground.

* Restart a stopped container.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Background and foreground

The distinction between foreground and background containers is arbitrary.

From Docker's point of view, all containers are the same.

All containers run the same way, whether there is a client attached to them or not.
It is always possible to detach from a container, and to reattach to a container.

Analogy: attaching to a container is like plugging a keyboard and screen into a physical server.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Detaching from a container (Linux/macOS)

* If you have started an *interactive* container (with option `-it`), you can detach from it.

* The "detach" sequence is `^P^Q`.

* Otherwise you can detach by killing the Docker client.

  (But not by hitting `^C`, as this would deliver `SIGINT` to the container.)

What does `-it` stand for?

* `-t` means "allocate a terminal."

* `-i` means "connect stdin to the terminal."

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Detaching cont. (Win PowerShell and cmd.exe)

* Docker for Windows has a different detach experience due to shell features.

* `^P^Q` does not work.

* `^C` will detach, rather than stop the container.

* Using Bash, Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.

* Both PowerShell and Bash work well in Win 10; just be aware of differences.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## Specifying a custom detach sequence

* You don't like `^P^Q`? No problem!

* You can change the sequence with `docker run --detach-keys`.

* This can also be passed as a global option to the engine.

Start a container with a custom detach command:

```bash
$ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock
```

Detach by hitting `^X x`. (This is ctrl-x then x, not ctrl-x twice!)
Check that our container is still running:

```bash
$ docker ps -l
```

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## Attaching to a container

You can attach to a container:

```bash
$ docker attach <containerID>
```

* The container must be running.

* There *can* be multiple clients attached to the same container.

* If you don't specify `--detach-keys` when attaching, it defaults back to `^P^Q`.

Try it on our previous container:

```bash
$ docker attach $(docker ps -lq)
```

Check that `^X x` doesn't work, but `^P ^Q` does.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Detaching from non-interactive containers

* **Warning:** if the container was started without `-it`...

* You won't be able to detach with `^P^Q`.

* If you hit `^C`, the signal will be proxied to the container.

* Remember: you can always detach by killing the Docker client.

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Checking container output

* Use `docker attach` if you intend to send input to the container.

* If you just want to see the output of a container, use `docker logs`.

```bash
$ docker logs --tail 1 --follow <containerID>
```

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Restarting a container

When a container has exited, it is in stopped state.

It can then be restarted with the `start` command.

```bash
$ docker start <containerID>
```

The container will be restarted using the same options you launched it with.

You can re-attach to it if you want to interact with it:

```bash
$ docker attach <containerID>
```

Use `docker ps -a` to identify the container ID of a previous `jpetazzo/clock` container, and try those commands.
.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

## Attaching to a REPL

* REPL = Read Eval Print Loop

* Shells, interpreters, TUI ...

* Symptom: you `docker attach`, and see nothing

* The REPL doesn't know that you just attached, and doesn't print anything

* Try hitting `^L` or `Enter`

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## SIGWINCH

* When you `docker attach`, the Docker Engine sends SIGWINCH signals to the container.

* SIGWINCH = WINdow CHange; indicates a change in window size.

* This will cause some CLI and TUI programs to redraw the screen.

* But not all of them.

???

:EN:- Restarting old containers
:EN:- Detaching and reattaching to container

:FR:- Redémarrer des anciens conteneurs
:FR:- Se détacher et rattacher à des conteneurs

.debug[[containers/Start_And_Attach.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Start_And_Attach.md)]

---

class: pic

.interstitial[]

---

name: toc-naming-and-inspecting-containers
class: title

Naming and inspecting containers

.nav[
[Previous part](#toc-restarting-and-attaching-to-containers)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-labels)
]

.debug[(automatically generated title slide)]

---

class: title

# Naming and inspecting containers

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Objectives

In this lesson, we will learn about an important Docker concept: container *naming*.

Naming allows us to:

* Easily reference a container.

* Ensure the uniqueness of a specific container.

We will also see the `inspect` command, which gives a lot of details about a container.
.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Naming our containers

So far, we have referenced containers with their ID.

We have copy-pasted the ID, or used a shortened prefix.

But each container can also be referenced by its name.

If a container is named `thumbnail-worker`, I can do:

```bash
$ docker logs thumbnail-worker
$ docker stop thumbnail-worker
etc.
```

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Default names

When we create a container, if we don't give a specific name, Docker will pick one for us.

It will be the concatenation of:

* A mood (furious, goofy, suspicious, boring...)

* The name of a famous inventor (tesla, darwin, wozniak...)

Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ...

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Specifying a name

You can set the name of the container when you create it.

```bash
$ docker run --name ticktock jpetazzo/clock
```

If you specify a name that already exists, Docker will refuse to create the container.

This lets us enforce the uniqueness of a given resource.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Renaming containers

* You can rename containers with `docker rename`.

* This allows you to "free up" a name without destroying the associated container.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Inspecting a container

The `docker inspect` command will output a very detailed JSON map.

```bash
$ docker inspect <containerID>
[{
...
(many pages of JSON here)
...
```

There are multiple ways to consume that information.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Parsing JSON with the Shell

* You *could* grep and cut or awk the output of `docker inspect`.

* Please, don't.

* It's painful.

* If you really must parse JSON from the Shell, use JQ! (It's great.)

```bash
$ docker inspect <containerID> | jq .
```

* We will see a better solution which doesn't require extra tools.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

## Using `--format`

You can specify a format string, which will be parsed by Go's text/template package.

```bash
$ docker inspect --format '{{ json .Created }}' <containerID>
"2015-02-24T07:21:11.712240394Z"
```

* The generic syntax is to wrap the expression with double curly braces.

* The expression starts with a dot representing the JSON object.

* Then each field or member can be accessed in dotted notation syntax.

* The optional `json` keyword asks for valid JSON output.

  (e.g. here it adds the surrounding double-quotes.)

???

:EN:Managing container lifecycle
:EN:- Naming and inspecting containers

:FR:Suivre ses conteneurs à la loupe
:FR:- Obtenir des informations détaillées sur un conteneur
:FR:- Associer un identifiant unique à un conteneur

.debug[[containers/Naming_And_Inspecting.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Naming_And_Inspecting.md)]

---

class: pic

.interstitial[]

---

name: toc-labels
class: title

Labels

.nav[
[Previous part](#toc-naming-and-inspecting-containers)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-advanced-dockerfile-syntax)
]

.debug[(automatically generated title slide)]

---

# Labels

* Labels allow us to attach arbitrary metadata to containers.

* Labels are key/value pairs.

* They are specified at container creation.
* You can query them with `docker inspect`.

* They can also be used as filters with some commands (e.g. `docker ps`).

.debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)]

---

## Using labels

Let's create a few containers with a label `owner`.

```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```

We didn't specify a value for the `owner` label in the last example.

This is equivalent to setting the value to be an empty string.

.debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)]

---

## Querying labels

We can view the labels with `docker inspect`.

```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
            "Labels": {
                "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
                "owner": ""
            },
```

We can use the `--format` flag to list the value of a label.

```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```

.debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)]

---

## Using labels to select containers

We can list containers having a specific label.

```bash
$ docker ps --filter label=owner
```

Or we can list containers having a specific label with a specific value.

```bash
$ docker ps --filter label=owner=alice
```

.debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)]

---

## Use-cases for labels

* HTTP vhost of a web app or web service.

  (The label is used to generate the configuration for NGINX, HAProxy, etc.)

* Backup schedule for a stateful service.

  (The label is used by a cron job to determine if/when to backup container data.)

* Service ownership.

  (To determine internal cross-billing, or who to page in case of outage.)

* etc.

???
:EN:- Using labels to identify containers

:FR:- Étiqueter ses conteneurs avec des méta-données

.debug[[containers/Labels.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Labels.md)]

---

class: pic

.interstitial[]

---

name: toc-advanced-dockerfile-syntax
class: title

Advanced Dockerfile Syntax

.nav[
[Previous part](#toc-labels)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-container-network-drivers)
]

.debug[(automatically generated title slide)]

---

class: title

# Advanced Dockerfile Syntax

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## Objectives

We have seen simple Dockerfiles to illustrate how Docker builds container images.

In this section, we will give a recap of the Dockerfile syntax, and introduce advanced Dockerfile commands that we might sometimes come across, or that we might want to use in some specific scenarios.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## `Dockerfile` usage summary

* `Dockerfile` instructions are executed in order.

* Each instruction creates a new layer in the image.

* Docker maintains a cache with the layers of previous builds.

* When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer.

* The `FROM` instruction MUST be the first non-comment instruction.

* Lines starting with `#` are treated as comments.

* Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata.

  (As a result, each call to these instructions makes the previous one useless.)
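For instance, here is a minimal sketch illustrating that point: only the metadata set by the *last* `CMD` is kept in the image.

```dockerfile
# Only the last CMD survives in the image metadata:
FROM busybox
CMD ["echo", "first"]
CMD ["echo", "second"]   # a container run from this image prints "second"
```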
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `RUN` instruction

The `RUN` instruction can be specified in two ways.

With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`:

```dockerfile
RUN apt-get update
```

Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`:

```dockerfile
RUN [ "apt-get", "update" ]
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## More about the `RUN` instruction

`RUN` will do the following:

* Execute a command.

* Record changes made to the filesystem.

* Work great to install libraries, packages, and various files.

`RUN` will NOT do the following:

* Record state of *processes*.

* Automatically start daemons.

If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## Collapsing layers

It is possible to execute multiple commands in a single step:

```dockerfile
RUN apt-get update && apt-get install -y wget && apt-get clean
```

It is also possible to break a command onto multiple lines:

```dockerfile
RUN apt-get update \
 && apt-get install -y wget \
 && apt-get clean
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `EXPOSE` instruction

The `EXPOSE` instruction tells Docker what ports are to be published in this image.

```dockerfile
EXPOSE 8080
EXPOSE 80 443
EXPOSE 53/tcp 53/udp
```

* All ports are private by default.

* Declaring a port with `EXPOSE` is not enough to make it public.
* The `Dockerfile` doesn't control on which port a service gets exposed.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## Exposing ports

* When you `docker run -p ...`, that port becomes public.

  (Even if it was not declared with `EXPOSE`.)

* When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public.

A *public port* is reachable from other containers and from outside the host.

A *private port* is not reachable from outside.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `COPY` instruction

The `COPY` instruction adds files and content from your host into the image.

```dockerfile
COPY . /src
```

This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## Build context isolation

Note: you can only reference files and directories *inside* the build context.

Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent:

```dockerfile
COPY . /src
COPY / /src
```

Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail.

Otherwise, a `Dockerfile` could succeed on host A, but fail on host B.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## `ADD`

`ADD` works almost like `COPY`, but has a few extra features.

`ADD` can get remote files:

```dockerfile
ADD http://www.example.com/webapp.jar /opt/
```

This would download the `webapp.jar` file and place it in the `/opt` directory.
`ADD` will automatically unpack local tar archives (plain, or compressed with gzip, bzip2, or xz):

```dockerfile
ADD ./assets.tar.gz /var/www/htdocs/assets/
```

This would unpack `assets.tar.gz` into `/var/www/htdocs/assets`.

*However,* `ADD` will not automatically unpack remote archives, and it does not recognize zip files.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## `ADD`, `COPY`, and the build cache

* Before creating a new layer, Docker checks its build cache.

* For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup.

* For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed.

* `ADD` always needs to download the remote file before it can check if it has been changed.

  (It cannot use, e.g., ETags or If-Modified-Since headers.)

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## `VOLUME`

The `VOLUME` instruction tells Docker that a specific directory should be a *volume*.

```dockerfile
VOLUME /var/lib/mysql
```

Filesystem access in volumes bypasses the copy-on-write layer, offering native performance for I/O in those directories.

Volumes can be attached to multiple containers, allowing data to be "ported" from one container to another, e.g. to upgrade a database to a newer version.

It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `WORKDIR` instruction

The `WORKDIR` instruction sets the working directory for subsequent instructions.

It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container.
```dockerfile
WORKDIR /src
```

You can specify `WORKDIR` again to change the working directory for further operations.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `ENV` instruction

The `ENV` instruction specifies environment variables that should be set in any container launched from the image.

```dockerfile
ENV WEBAPP_PORT=8080
```

This will result in the following environment variable being set in any container created from this image:

```bash
WEBAPP_PORT=8080
```

You can also specify environment variables when you use `docker run`:

```bash
$ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ...
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `USER` instruction

The `USER` instruction sets the user name or UID to use when running the image.

It can be used multiple times to change back to root or to another user.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## The `CMD` instruction

The `CMD` instruction specifies the default command to run when a container is launched from the image.

```dockerfile
CMD [ "nginx", "-g", "daemon off;" ]
```

This means we don't need to specify `nginx -g "daemon off;"` when running the container.

Instead of:

```bash
$ docker run /web_image nginx -g "daemon off;"
```

We can just do:

```bash
$ docker run /web_image
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## More about the `CMD` instruction

Just like `RUN`, the `CMD` instruction comes in two forms.
The first executes in a shell: ```dockerfile CMD nginx -g "daemon off;" ``` The second executes directly, without shell processing: ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `CMD` instruction The `CMD` can be overridden when you run a container. ```bash $ docker run -it /web_image bash ``` Will run `bash` instead of `nginx -g "daemon off;"`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENTRYPOINT` instruction The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point. Note: you have to use the "exec" syntax (`[ "..." ]`). ```dockerfile ENTRYPOINT [ "/bin/ls" ] ``` If we were to run: ```bash $ docker run training/ls -l ``` Instead of trying to run `-l`, the container will run `/bin/ls -l`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `ENTRYPOINT` instruction The entry point can be overridden as well. ```bash $ docker run -it training/ls bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr $ docker run -it --entrypoint bash training/ls root@d902fb7b1fc7:/# ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)] --- ## How `CMD` and `ENTRYPOINT` interact The `CMD` and `ENTRYPOINT` instructions work best when used together. ```dockerfile ENTRYPOINT [ "nginx" ] CMD [ "-g", "daemon off;" ] ``` The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. 
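With the two instructions above, starting the container without extra arguments runs the entry point with its default options (the image name `/web_image` follows the earlier examples):

```bash
$ docker run -d /web_image
```

This is equivalent to executing `nginx -g "daemon off;"` in the container.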
On the command line we can then potentially override the options when needed.

```bash
$ docker run -d /web_image -t
```

This will override the options `CMD` provided with new flags.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

## Advanced Dockerfile instructions

* `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one.

* `LABEL` adds arbitrary metadata to the image.

* `ARG` defines build-time variables (optional or mandatory).

* `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default).

* `HEALTHCHECK` defines a command assessing the status of the container.

* `SHELL` sets the default program to use for string-syntax `RUN`, `CMD`, etc.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

class: extra-details

## The `ONBUILD` instruction

The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built.

This is useful for building images which will be used as a base to build other images.

```dockerfile
ONBUILD COPY . /src
```

* You can't chain `ONBUILD` instructions with `ONBUILD`.

* `ONBUILD` can't be used to trigger `FROM` instructions.

???

:EN:- Advanced Dockerfile syntax
:FR:- Dockerfile niveau expert

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Advanced_Dockerfiles.md)]

---

class: pic

.interstitial[]

---

name: toc-container-network-drivers
class: title

Container network drivers

.nav[ [Previous part](#toc-advanced-dockerfile-syntax) | [Back to table of contents](#toc-part-3) | [Next part](#toc-) ]

.debug[(automatically generated title slide)]

---

# Container network drivers

The Docker Engine supports different network drivers.
The built-in drivers include:

* `bridge` (default)

* `null` (for the special network called `none`)

* `host` (for the special network called `host`)

* `container` (that one is a bit magic!)

The network is selected with `docker run --net ...`.

Each network is managed by a driver.

The different drivers are explained in more detail on the following slides.

.debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)]

---

## The default bridge

* By default, the container gets a virtual `eth0` interface.

  (In addition to its own private `lo` loopback interface.)

* That interface is provided by a `veth` pair.

* It is connected to the Docker bridge.

  (Named `docker0` by default; configurable with `--bridge`.)

* Addresses are allocated on a private, internal subnet.

  (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.)

* Outbound traffic goes through an iptables MASQUERADE rule.

* Inbound traffic goes through an iptables DNAT rule.

* The container can have its own routes, iptables rules, etc.

.debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)]

---

## The null driver

* Container is started with `docker run --net none ...`

* It only gets the `lo` loopback interface. No `eth0`.

* It can't send or receive network traffic.

* Useful for isolated/untrusted workloads.

.debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)]

---

## The host driver

* Container is started with `docker run --net host ...`

* It sees (and can access) the network interfaces of the host.

* It can bind any address, any port (for ill and for good).

* Network traffic doesn't have to go through NAT, bridge, or veth.

* Performance = native!

Use cases:

* Performance-sensitive applications (VOIP, gaming, streaming...)

* Peer discovery (e.g.
Erlang port mapper, Raft, Serf...)

.debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)]

---

## The container driver

* Container is started with `docker run --net container:id ...`

* It re-uses the network stack of another container.

* It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc.

* Those containers can communicate over their `lo` interface.

  (i.e. one can bind to 127.0.0.1 and the others can connect to it.)

???

:EN:Advanced container networking
:EN:- Transparent network access with the "host" driver
:EN:- Sharing is caring with the "container" driver
:FR:Paramétrage réseau avancé
:FR:- Accès transparent au réseau avec le mode "host"
:FR:- Partage de la pile réseau avec le mode "container"

.debug[[containers/Network_Drivers.md](https://github.com/BretFisher/container.training/tree/tampa/slides/containers/Network_Drivers.md)]
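---

class: extra-details

## Trying the container driver

As a quick sketch (the container name `web` is illustrative, and both images must be available locally or pullable), we can verify that two containers share the same network stack by connecting over `lo`:

```bash
$ docker run -d --name web nginx
$ docker run --rm --net container:web alpine wget -qO- http://localhost/
```

The `alpine` container has nothing listening on port 80 itself; the request is answered by `nginx` in the `web` container, since both share the same interfaces and IP addresses.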