Robin Laurén

Exploring my way out of Imposter Syndrome

Dabbling With Docker

For me, Docker’s been this black box that people say is the best thing ever and is so easy to get into and just package everything with Docker and things’ll just work. Yeah. Right. I don’t trust that at all.

So i’ve decided to learn something new. I’ve decided to demystify Docker.

After watching a few videos on YouTube

… i decided that i needed a more structured approach. So i found what’s evidently considered the definitive book on Docker, Docker Deep Dive by Nigel Poulton (bundled with The Kubernetes Book for a mere USD 16).

Here’s what i’ve got so far. I’m still not even halfway through the book.

Docker (as in “person who works at the docks, shuffling stuff on and off ships”) is a way to run applications in a standardised, portable and repeatable manner. You define your applications in recipes called Dockerfiles, then build “images” out of these recipes. An image can be compared to a virtual machine template: it isn’t the running app itself. Images are instantiated into running containers, preferably one app per container. As a modern app will consist of several bits and pieces, you’d typically package (or contain :) each piece into a separate container. Using the Dockerfile (or, as things get more orchestrated, docker-compose files), you can run an app on any Linux system. Or Windows system. Or, with some restrictions, any Mac system.

The Dockerfile describes what application software goes on the image and, by extension, into the container. Let’s say i want a web server on my image. Or the Python interpreter, so i can write a web app myself (well, technically at least). I would then describe which folders on my computer should be copied into (or made visible inside) the container, so that there actually is an application to run, and which ports should be visible on either side of the container. And that’s basically it. Build the image, run the container. Get bored, tear it down and leave no trace on your computer. Easy peasy. They say.
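To make that concrete, here’s a sketch of what such a recipe might look like. The app folder and server.py are made up for illustration; python:3 is a real image on Docker Hub.

# start from an official Python image (which itself drags in a small Linux)
FROM python:3
# copy a folder from your computer into the image
COPY app /app
# declare the port the app listens on inside the container
EXPOSE 8000
# what to run when a container is started from this image
CMD ["python", "/app/server.py"]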

Now the Docker dogma says Docker containers are not virtual machines. With “proper” VM technology, you create a thing that to the thing itself is indistinguishable from an actual computer. Docker does not do this, they say. With Docker, you don’t create new machines and new operating systems upon them. But the thing is that each Docker container will be built with a chain of dependencies. You can’t just go and build a Python image. The Python image will depend on some other image which eventually will depend on a Linux image and even though that image can be really small (like five megabytes, which is what’s considered really small these days), it’s still a Linux. Or if you’re building a Windows container, you’ll include some nano-Windows, which certainly is smaller than what you’d install on a server, but it’s still a distinctly nontrivial amount of operating system.
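You can see this chain of layers for yourself. Pull the famously tiny Alpine Linux image and ask Docker what it’s made of:

$ docker pull alpine
$ docker image ls alpine # the SIZE column: around five megabytes
$ docker image history alpine # the layers the image was built from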

The difference, they say, is that this isn’t machine virtualisation. It’s operating system (OS) virtualisation. Semantics, schemantics, i say. Pah! It’s still virtualisation until i know considerably better. Which i hope i do eventually.

Can i get my hands dirty now?

So how do you get started? First, get Docker. If you’re on a Mac or on Windows, there’s a nice and nifty installer package. If you’re on Linux, the recommended way is (cough) to run code from the web into your shell. The Windows and Mac versions include just enough Linux to actually run Linux containers (and the Windows version can run Windows containers too). Linux will of course already include enough Linux to run Linux containers, and absolutely no Windows containers.
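For the record, that curl-into-shell route looks something like this. Downloading the script first means you can at least read what you’re about to run:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ less get-docker.sh # at least skim it, honestly
$ sudo sh get-docker.sh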

If you’re on Linux, you’ll also need to add your user account to the docker user group. The Mac and Windows installers will do that for you with some admin credentials.
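On Linux, the group change is a one-liner (log out and back in for it to take effect):

$ sudo usermod -aG docker $USER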

There are tons of ready-made images around (on Docker Hub, mostly) with which you can do almost anything. You can run the worthless yet satisfying Hello, World example by typing

$ docker pull hello-world
$ docker run hello-world

If you want a web server, you can type

$ docker pull nginx
$ docker run -d -p 80:80 --name hello_httpd nginx # -d = daemon/detach, -p = publish port host:container
$ curl http://localhost
$ docker container stop hello_httpd
$ docker container rm hello_httpd

Optionally, docker image rm nginx when you’re done.
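If you want to check that the “leave no trace” promise holds, list what’s left:

$ docker container ls -a # all containers, running or stopped
$ docker image ls # images still on disk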

If you’d like a Linux box on your Linux box (or on your Mac)

$ docker pull ubuntu
$ docker run -it --name pere ubuntu # -i = interactive, -t = terminal
# uname -a
# exit
$ docker rm pere

Again, docker rmi ubuntu when you’re done (rmi is shorthand for “remove image”).

Make it useful

Okay, so having a web server or a Linux on your box isn’t really that exciting. To make them useful, you’d really have to configure them somehow. The typical way to do so is with a Dockerfile. Now since this is “infrastructure as code”, you should create a new directory, preferably run git init in it, and then create a file called Dockerfile (capital D) within.
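In shell terms, something like this (the directory name is my own invention):

$ mkdir my_nginx
$ cd my_nginx
$ git init
$ vim Dockerfile

And into that Dockerfile goes: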

FROM nginx
COPY html /usr/share/nginx/html

Dockerfile instructions (like FROM and COPY) are conventionally written in CAPITAL LETTERS.

Then create some content to serve:

$ mkdir html
$ vim html/index.html
<html><head><title>Hello, World!</title></head>
<body><h1>Salve Orbis Terrarum!</h1></body></html>
esc :x

Now build the image and run the container

$ docker build -t my_nginx . # -t = tag the image with a name; the dot is the build context
$ docker run -d --name hello_nginx -p 8080:80 my_nginx
$ curl http://localhost:8080
$ docker stop hello_nginx
$ docker rm hello_nginx
$ docker rmi my_nginx
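If the curl doesn’t greet you in Latin, the container’s logs (checked before you stop and remove it, obviously) are a good first place to look:

$ docker logs hello_nginx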

Demystification dabble done!

And with that, i’ve shown at least myself that Docker isn’t dark magic after all. OK, so it’s kinda magical, but not intensely dark, and the fact that you can open up Dockerfiles and see what’s inside means that they’re not black boxes either.

Next time, i’ll write about composing several containers to create a slightly more complicated constellation. But only slightly.

Baby steps, honey. Baby steps.