I might have been kind of dumb, but one thing I always wondered about when I first started as a system admin was why virtualization was such a big deal. My thought was, "Individual hardware nodes fail, so what's the point of creating many virtual servers that could all fail simultaneously if that one system went down?" The answer turned out to be pretty obvious, but as a new system admin I didn't appreciate the upsides virtualization offered; all I could think of was how much more impactful a hardware failure would be. It took some time and more real-world experience, but I eventually realized that virtualization lets you properly isolate concerns.

Isolation

It's always been a best practice in OOP to have one class concerned with one specific task and only that task, but do that task well. That's a great rule for organizing servers as well. When you have a server trying to fulfill five or ten different functions, it's really not easy to keep it all clean and organized. Your processes will be competing for resources, you'll have many more potential sources of problems to look at if that server's load spikes, your file system will probably be a mess, and it all adds to the mental complexity of your architecture.

Virtualize it!

Enter virtualization. This is where traditional virtualization with KVM, Xen, and others really provided a good solution to what is essentially an organizational problem. You could now properly silo off the functions of your servers and start to implement an architecture where each system has a single concern, which also lets you utilize your hardware more efficiently. You know that file server you have sitting around with 16 cores and 128GB of RAM and a constant load level of 0.00? If you virtualize that system you can have it running a file server, a redis instance, nginx, and maybe a couple of other functions at the same time. Simple and clean, with a high level of isolation. Now if your redis instance suddenly spikes in memory usage you'll know instantly what the culprit is, while having the reassurance that the spike didn't affect your file server's performance.
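
As a rough sketch, carving that idle box into KVM guests with libvirt's virt-install might look something like the following; the guest names, sizes, and ISO path are made up, and the exact flags vary a bit between versions:

    # Hypothetical layout: one small guest per concern, sized to its workload.
    virt-install --name redis01 --vcpus 2 --memory 4096 \
        --disk size=20 --cdrom /srv/isos/CentOS-7-x86_64-Minimal.iso
    virt-install --name nginx01 --vcpus 2 --memory 2048 \
        --disk size=10 --cdrom /srv/isos/CentOS-7-x86_64-Minimal.iso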

No! Dockerize it!

But what if you just have a few simple processes to isolate and don't need a full OS for them to function? Maybe a simple web app or a database server. Really, if you think about it, most of your processes don't need a separate OS with its own copies of crond, upstart/systemd, syslog, and its own kernel. What if there were a way to run just a single application with the isolation benefits of virtualization, but without the overhead of running an entire OS just to host it? This is what Docker lets you do! It builds on Linux kernel features (namespaces and cgroups) through LinuX Containers (LXC) to provide this super lightweight virtualization.
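
As a minimal sketch, using the public redis image as a stand-in for whatever process you want to isolate, running a single contained process looks like this:

    # Run one redis process in its own container, with a memory cap so it
    # can't starve its neighbours. No guest kernel, no init, no extra syslog.
    docker run -d --name redis -m 256m redis

    # It's just another (namespaced) process on the host.
    docker ps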

LXC means you can run single processes in an isolated container and control the resources allocated to them. But wait! There's more! Docker also gives you containerized file systems and dynamic host mapping. A containerized filesystem means you can have multiple instances of the same process running in Docker containers, each with its own separate file system. Then, if your processes ever need to talk to each other, Docker lets you create network aliases that are exposed to the other containers running on that server. So your WordPress container never needs to know the actual address of its MySQL server; you can just have it connect to the mysql host alias and Docker will handle the host mapping in the background!
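
Here's a hedged sketch of that WordPress/MySQL setup using the official images and the classic --link flag (newer Docker versions prefer user-defined networks, which give you the same name-based lookup automatically); the container names and password are just placeholders:

    # Start the database, then link it into wordpress under the alias "mysql".
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --name blog --link db:mysql -p 8080:80 wordpress

    # A second wordpress container gets its own copy-on-write filesystem,
    # completely separate from the first one's.
    docker run -d --name blog2 --link db:mysql -p 8081:80 wordpress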

Where Docker really shines is when you begin to think of how that containerized file system affects your deployments. Instead of having to test against multiple different copies of system libraries (because, in a test environment of any reasonable size, keeping all your glibc versions and miscellaneous language-specific dependencies in sync is painful), you build your Docker container once, run it through your testing procedure, and then ship it to production. That means what ran in your test environment is guaranteed to be the same as what is now running in production. If you've ever dealt with that one little bug that is never reproducible in the QA environment but somehow shows up in production, you'll know how wonderful that can be.
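
In practice that workflow looks something like this; the registry hostname, image name, and test script are hypothetical placeholders:

    # Build the image once, test that exact artifact, then push it.
    docker build -t registry.example.com/myapp:1.0.0 .
    docker run --rm registry.example.com/myapp:1.0.0 ./run_tests.sh
    docker push registry.example.com/myapp:1.0.0

    # Production pulls and runs the identical bits you just tested.
    docker pull registry.example.com/myapp:1.0.0
    docker run -d registry.example.com/myapp:1.0.0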

Hopefully that explains why there is so much buzz about the Docker project. It allows us to get extremely modular in our design by efficiently chopping up our physical server resources into very small logical parts, with almost no virtualization overhead.