Containerization is a pretty neat, not-quite-new concept that’s getting some much-needed exposure thanks to the success of Docker.

### What is Containerization?

Containers are a type of virtualization where you don’t try to emulate any hardware. Instead, they’re a sort of operating system sandbox built on the assumption that the host’s running kernel can satisfy all the user-space requests of the container-bound programs. This is why containerization is also often called OS virtualization: rather than presenting virtual hardware, the host presents a virtualized view of the operating system to the container’s programs, while the kernel itself is shared.
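
A quick way to see that shared kernel for yourself (this assumes the xenial image we’ll build later in this article): compare the kernel version reported on the host with the one reported inside a container. They’re the same, because only user space is isolated.

chris:~$ uname -r
chris:~$ docker run xenial uname -r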

Containers are also usually paired with a concept called filesystem snapshotting, whereby a filesystem’s state (timestamps, metadata, and binary content of every file) is captured as a series of deltas from one moment to the next. Docker uses snapshotting to build reliable operating system images to run in its containers.
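
Once Docker is installed (see below), you can watch these deltas directly: docker diff lists every file a container has added, changed, or deleted relative to its image. The container name delta-demo here is just illustrative.

chris:~$ docker run --name delta-demo xenial touch /hello
chris:~$ docker diff delta-demo

The output flags /hello with an A for added; C and D mark changed and deleted files.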

### Getting started with Docker

There are tutorials online at docker.com for turning things on, but they don’t quite cover the high-level why and how of each step. I also wrote a tip here to help get a few things set up. What I intend to cover is a particular way of using Docker to build a working development environment. In this article, we’ll build a container that runs my local Developer User Group’s website, step by step, and publish it online for other people to use and maybe even contribute to.

Perhaps not so obviously, there will be quite a few Linux-isms. I’d suggest at least a passing familiarity with Nginx, Tomcat, Maven, and Java.

### First things first: install Docker

You should probably have already done this before you started reading this article, but if not, we’ll cover it first. The Ubuntu repositories ship a Docker package under the name docker.io. I haven’t had luck with it, so I won’t give any details on how to use it. We’ll stick with the upstream packages from the official Docker repository instead.
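
If you did already experiment with the distro package, you may want to remove it first so the two installations don’t collide (this assumes you installed it through apt):

root:/# apt-get remove docker.io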

Per the docs (and a few other things):

root:/# apt-get update
root:/# apt-get install apt-transport-https ca-certificates software-properties-common
root:/# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
root:/# add-apt-repository "deb https://apt.dockerproject.org/repo ubuntu-xenial main"
root:/# apt update
root:/# apt install docker-engine python-pip
root:/# pip install docker-compose

and all the important pieces should be installed.

While we’re at it, add your user to the docker group so you can run docker commands without sudo (substitute your own username for chris):

root:/# usermod -a -G docker chris

Log out, log back in, and $ groups should include docker in its output.

### Create a base image

There’s actually no need to virtualize an entire operating system, with all its configuration files and programs, to do what we need; it’s just easier to reason about the system we’re building, since you usually want an environment as close as possible to your production deployment. There are pre-built images available on the various Docker repositories, but if you’re doing something useful you probably won’t be using one of those. So let’s start by building a base image of Ubuntu 16.04 (Xenial). This should give us a familiar platform on which to build our tool set (I’m assuming you’re running Xenial or some other Ubuntu version, since that’s really the only sane thing to do).

root:/# apt install debootstrap
root:/# cd /tmp
root:/# debootstrap xenial xenial
root:/# tar -C xenial/ -c . | docker import - xenial
root:/# docker run xenial cat /etc/lsb-release

This should produce a brand spanking new Docker image containing a barebones Ubuntu 16.04 userland. You can trash the /tmp/xenial folder afterwards, since we’re done with the chroot.
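
Concretely, the cleanup plus a quick sanity check that the image landed in Docker’s local store:

root:/# rm -rf /tmp/xenial
chris:~$ docker images xenial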

Docker has two different storage concepts, a point that took me a while to figure out when it was first thrust upon me.

- **Images** are references to a particular chain of snapshots that make up an eventual filesystem state.
- **Containers** are references to a particular running process tree with respect to an image and any attached volumes and container links.

You run an image to produce a container; you commit a container to produce an image. This is important because the same container process can be run on different images to produce different results, and the state of a container can be used to diverge images into new software stacks and associated configurations.
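
To make that cycle concrete, here’s a minimal round trip (the container name demo and image name xenial-marked are just illustrative):

chris:~$ docker run --name demo xenial touch /marker
chris:~$ docker commit demo xenial-marked
chris:~$ docker run xenial-marked ls -l /marker

The last command succeeds because the committed image carries /marker, while the original xenial image remains untouched.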

### Start the Container

Now that we have a base image, let’s turn things on. Remember, we run an image to produce a container, so:

chris:~$ docker run -i -t xenial bash

Earlier, in our docker import command, we tagged the imported image as xenial. We use that name again here to run bash on top of the base image. One note: docker run defaults to the system DNS settings; you can override that with the --dns flag.
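
For example, to point the container at Google’s public resolver instead of the system settings:

chris:~$ docker run -i -t --dns=8.8.8.8 xenial bash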

Once we’ve executed the above command, you should see something similar to:

root@0f41434aec2e:/#

This puts us in a root shell (bash) at the root directory (/). Let’s go ahead and try to update and upgrade our new container, and commit a new image.

Inside the container:

root@0f41434aec2e:/# apt update
root@0f41434aec2e:/# apt upgrade
root@0f41434aec2e:/# exit

Once outside:

chris:~$ docker commit romantic_wescoff xenial

docker commit accepts a container name and a tag name. We could also commit the container under a brand-new image name, which I’ll show in a later post. What we’ve done here is take the apt changes, capture them as a new snapshot, and chain that snapshot onto the original xenial image tag.

You may wonder, “Where does romantic_wescoff come from?” The answer: unless otherwise specified, Docker generates a unique name from a list of random adjectives and nouns and assigns it to the container. You can use the --name flag to specify what you would like the container to be called. This name is a simpler handle than the SHA identifier that is also generated for you. You’ll see it used in docker ps, docker rm, docker attach, docker exec, and many other commands. It is good practice to assign a name to your containers if you intend to use docker commands directly. In a later tutorial, we’ll cover how docker-compose behaves with names and what that can do for us.
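
For instance, naming the container up front makes the later commit easier to type (xenial-dev is an arbitrary name of my choosing):

chris:~$ docker run -i -t --name xenial-dev xenial bash
chris:~$ docker commit xenial-dev xenial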

As we’ve seen, getting started with a barebones system is pretty straightforward. In many ways, installation is the hardest part of using Docker, and once you understand the distinctions between images and containers, everything becomes pretty obvious.

In Part 2, I’ll cover installing all the software to run the servers and databases.