Installing programs is something most people take for granted. How could it be easier – you simply download an installation file, run it, answer a few prompts, and before you know it you have a fresh new application ready to go. This is fine for a single-user system like a laptop or desktop, but what happens when you want to share that program with someone else, or migrate it – along with its configuration and settings – to a different computer? What if you wanted to do a clean reinstall without having to hunt for scattered or leftover files? Better yet, what if you could run the application in a completely self-contained environment without it affecting your main system? Docker provides a unique way of accomplishing this, and the technology behind it is quickly gaining traction. Before we begin with Docker, it’s helpful to know how Docker relates to a similar technology known as virtualization.
The World of Virtual Machines
A virtual machine (VM) is essentially a computer running within a computer. With virtualization, your operating system (OS) hosts an isolated environment where another operating system runs. Imagine a digital matryoshka doll, where the bigger doll (the host OS) contains one or more smaller dolls (the guest OSes). Where VMs were once limited to powerful commercial hardware, modern computers are more than capable of hosting several virtual environments using programs such as VirtualBox or VMware. VMs can be paused, backed up, reverted to previous configurations, or transferred to other computers. However, the biggest benefit of VMs is also their biggest drawback: since the environment is completely self-contained, each VM has to manage a completely separate copy of the guest OS. Four Ubuntu VMs use four separate copies of Ubuntu with their own pre-allocated resources, resulting in unnecessary overhead and limited performance compared to native hardware.
Rather than use a VM, you can run your applications in a way that uses the native functionality of your host OS while still providing isolation. These applications – often called jails or containers – manage their own files, their own users, and their own dependencies, but take advantage of the resources available through the host OS. This lets you run your applications with little to no overhead, while still keeping them self-contained.
Docker is a software suite that automates the creation, deployment, and management of containers. Building on existing technologies in the Linux kernel such as cgroups and namespaces, Docker essentially uses application templates to build and run containers. Docker containers have a mostly isolated view of the host operating system. They maintain their own file system, their own processes, their own users, and their own network interfaces. Quotas can be used to restrict the resource usage and access rights of a container, but containers can also gain access to additional host resources, such as reading and writing to a directory on the host. With Docker, applications become their own mini-VM.
Getting Started with Docker
To get started with Docker, you can try the 10-minute tutorial available on Docker’s website. Alternatively, you can go completely virtual and install Docker in a Linux-based VM. This post uses Ubuntu 14.04, so you may need to adapt certain commands to your particular distribution. After installing Docker and starting the Docker service, you can access Docker’s commands using the following command:
$ sudo docker
Note that Docker commands must be executed as root. Because of this, it’s possible to give Docker containers access to sensitive parts of the host operating system, including the root directory. Control should only be granted to trusted users.
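If you'd rather not prefix every command with sudo, many installations let you grant access through a docker group (a common convention; check your distribution's packaging). Keep the warning above in mind, since membership in this group is effectively root-equivalent:

```shell
# Create the docker group if it doesn't already exist,
# then add your user to it. Group membership is effectively
# root-equivalent, so grant it only to trusted users.
sudo groupadd docker
sudo usermod -aG docker $USER

# Log out and back in for the change to take effect, then
# docker commands work without sudo:
docker ps
```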
Your First Docker Image
The fastest way to start using Docker is to use a pre-built image. The Docker Hub Registry hosts repositories of images that can be searched and downloaded from the command line. For example, to find a Docker image for Ubuntu, simply type:
$ sudo docker search ubuntu
You’ll see a list of images organized by name, description, rank, and whether the image is officially supported. To download the Ubuntu image, use Docker’s pull command:
$ sudo docker pull ubuntu
This starts the process of downloading the Ubuntu image, along with its associated files. The result is a newly created image, which can be reviewed using the images command:
$ sudo docker images
Each image is listed with:
- The repository that the image originates from.
- A tag that identifies the image.
- A unique identifier for the image.
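The output looks roughly like the following (the ID, age, and size shown here are illustrative):

```
$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              latest              fa81ed084842        2 weeks ago         188 MB
```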
You can reference an image using a combination of repository and tag, or by the image ID. As a shortcut, you can reference an image’s ID using the first three characters. For instance, either one of the following commands will display data about our Ubuntu image:
$ sudo docker inspect ubuntu
$ sudo docker inspect fa8
Docker stores images in layers, with multiple layers combining to form a single unified file system. This allows Docker to remain lightweight while managing multiple images, especially when compared to virtual machines.
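You can see the layers that make up an image using the history command, which lists each layer along with the instruction that created it:

```shell
# List the layers of the ubuntu image, newest first
$ sudo docker history ubuntu
```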
Your First Docker Container
Now for the exciting part: using your newly created image to create an Ubuntu container. Docker’s run command takes a minimum of two parameters: the image to run, and the command to run. Docker containers run for the lifetime of the command: if the command finishes, then the Docker container closes. In this example, we’ll run our Ubuntu image with an interactive bash shell:
$ sudo docker run -ti ubuntu /bin/bash
If all goes well, you should be presented with a root command prompt running in a completely isolated environment. You can install software, create files, and run programs. When you’re ready to leave, press Ctrl-D or type exit into the command prompt. Note that a container is only active as long as it has a running process. If you pass a command to execute a daemon or start a service, the container will immediately exit. You can review active Docker containers using ‘docker ps’, and you can review all containers using ‘docker ps -a’. You’ll notice your containers are given their own unique ID, along with a unique name such as trusting_wright, hopeful_yonath, or (and this is no joke), dreamy_wozniak. You can, of course, specify a custom name using ‘--name=<name>’. There are ways to run multiple commands and services in a Docker container, but for now we’ll focus on running a single command at a time.
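For example, to start a container with a name of your own choosing (my-ubuntu here is just an arbitrary label), you could run:

```shell
# Start an interactive container with an explicit name
$ sudo docker run -ti --name=my-ubuntu ubuntu /bin/bash

# After exiting, the container can be referenced by name
$ sudo docker ps -a
$ sudo docker start my-ubuntu
```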
Committing Changes to a Docker Container
When you’re ready to immortalize your container as an image, you can use ‘docker commit’ to store the container as an image. Pass the container ID and the repository name/tag of your choice. In this case, we’ll use the ubuntu repository, but we’ll tag the image as “modified”:
$ sudo docker commit trusting_wright ubuntu:modified
$ sudo docker images
Go, Container, Go!
Our new container doesn’t do us much good if we have to leave a shell open while it’s running. Luckily, we can tell Docker to run our container as a background process using the -d flag. This runs the command on the container before detaching and pushing it to the background. For example, let’s run our Ubuntu container in the background by having it echo “hello world” to the console (see the Docker User Guide for more info):
$ sudo docker run -d ubuntu:modified /bin/bash -c "while true; do echo hello world; sleep 1; done"
We can review the output from an active Docker container using Docker’s logs command:
$ sudo docker logs <container ID>
As you can see, our Ubuntu container is echoing away in the background. We can control it using Docker’s pause, unpause, start, stop, and restart commands. We can also attach to the running container using docker attach, execute a command using docker exec, or review the changes between the container and its base image using docker diff. You can find more information on controlling Docker containers in the Docker Guide.
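As a quick sketch of these commands in action (substitute your own container ID or name):

```shell
# Temporarily freeze and resume the container's processes
$ sudo docker pause <container ID>
$ sudo docker unpause <container ID>

# Run an additional command inside the running container
$ sudo docker exec <container ID> ps aux

# Show files added, changed, or deleted since the container started
$ sudo docker diff <container ID>

# Stop the container
$ sudo docker stop <container ID>
```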
Building a Docker Image
Now that you know how to work with images, it’s time to build your own. Rather than create a container, run commands, then commit to a new image, you can use a Dockerfile. A Dockerfile is a series of commands used to automatically generate a new image. In this example, we’ll create a Dockerfile that builds and runs a LAMP server using Ubuntu:
FROM ubuntu:trusty
MAINTAINER Docker User <email@example.com>
RUN apt-get update && apt-get -y install lamp-server^
VOLUME /var/www/html
EXPOSE 80
CMD /usr/sbin/apachectl -D FOREGROUND
- FROM specifies an image that the new image is based on. In this case, we’re building a Trusty Tahr image from the “ubuntu” repository.
- MAINTAINER specifies the current maintainer of the Docker image.
- RUN specifies a command that Docker will run on the base image. In this case, we’re updating the apt package list and installing the lamp-server meta package, which installs Apache, MySQL, and PHP. The -y flag tells apt to automatically accept any prompts, since any prompts during the process will cause the build to fail. You can, of course, have multiple RUN statements in a single Dockerfile.
- VOLUME is used to make a file or directory inside the container available outside the container. In this case, we’ll map the web directory to a local directory so we can edit our website outside of the container. This also makes directories persistent, preventing you from losing their contents when the container is destroyed. You can specify multiple volumes using the form ‘VOLUME [ "volume1", "volume2" ]’. More information can be found in the Docker user guide.
- EXPOSE lets the container listen for incoming connections on a specific network port. We can map the port to a port on the local system and route incoming connections to the container.
- CMD specifies the command that will run when the container is started. In this case, we’re running Apache as a foreground process. Note that, unless otherwise specified using sudo or su, any command executed here will run as the root user.
We can build our new image using the docker build command, which looks for a Dockerfile in the directory we pass as the build context. We can specify a name for the new image using the -t flag.
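Assuming the Dockerfile above is saved in the current directory, and choosing the arbitrary name lamp for our image, the build looks like this:

```shell
# Build an image named "lamp" from the Dockerfile in the
# current directory (the trailing dot is the build context)
$ sudo docker build -t lamp .
```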
Running the New Image
Running the LAMP image is slightly different from running the Ubuntu image. When using docker run, we need to map the web directory to a local directory, and map port 80 to a local port. We’ll map the directory to ~/apache-container and use port 4000 to access the website. Since we specified a command when building the image, we can simply use -d to run the container in the background.
$ sudo docker run -p 4000:80 -v ~/apache-container:/var/www/html -d lamp
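If the container started correctly, the site should now be reachable through the mapped port on the host:

```shell
# Port 4000 on the host forwards to port 80 in the container
$ curl http://localhost:4000
```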
More Fun With Docker
This guide only covers the bare minimum of what you can do with Docker. You can link containers together, use multiple containers for a single application, cluster Docker hosts, upload custom images to the public Docker Hub, or put your private server to work and create your own image repository. You can start Dockerizing your own applications and services, or expand on Dockerfiles created by others. Remember, if you download a public Dockerfile, be sure to review it before using it to build an image. You can learn more about Docker security through Docker’s website.
The safest way to learn Docker is with a virtual machine. Keep the Docker user guide open in the background as you try new commands. You can remove old images and containers by using docker rm <container ID> and docker rmi <image ID>. Just make sure the containers are stopped before you attempt to remove them. Don’t be afraid to fail, and have fun!
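A typical cleanup session, then, looks something like this:

```shell
# Stop a running container, then remove it
$ sudo docker stop <container ID>
$ sudo docker rm <container ID>

# Remove an image once no containers are using it
$ sudo docker rmi <image ID>
```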
Further Reading
The Docker User Guide (Docker.com)
Getting Started with Docker (Servers for Hackers)
How to Install and Use Docker: Getting Started (DigitalOcean)