Reading Time: 7 minutes | Published: 2023-09-19 | Last Edited: 2023-09-19
This is a blog post version of a talk I presented at both Ubuntu Summit 2022 and SouthEast LinuxFest 2023. The first wasn’t recorded, but the second was, and the recording is on SELF’s PeerTube instance. I apologise for the terrible audio, but there’s unfortunately nothing I can do about that. If you’re already intimately familiar with the core concepts of VMs or containers, I would suggest skipping those respective sections. If you’re only vaguely familiar with either, I would recommend reading them because I do go a little more in-depth.
Questions, comments, and corrections are welcome! Feel free to use the self-hosted comment system at the bottom, send me an email, an IM, reply to the fediverse post, etc. Edits and corrections, if there are any, will be noted just below this paragraph.
The benefits of VMs and containers ¶
- Isolation: you don’t want to allow an attacker to infiltrate your email server through your web application; the two should be completely separate from each other and VMs/containers provide strong isolation guarantees.
- Flexibility: VMs and containers only use the resources they’ve been given. If you tell the VM it has 200 MB of RAM, it’s going to make do with 200 MB of RAM and the kernel’s OOM killer is going to have a fun time 🤠
- Portability: once set up and configured, VMs and containers can mostly be treated as closed boxes; as long as the surrounding environment of the new host is similar to the previous in terms of communication (proxies, web servers, etc.), they can just be picked up and dropped between various hosts as necessary.
- Density: applications are usually much lighter than the systems they’re running on, so it makes sense to run many applications on one system. VMs and containers facilitate that without sacrificing security.
- Cleanliness: VMs and containers are applications in black boxes. When you’re done with the box, you can just throw it away and most everything related to the application is gone.
Virtual machines ¶
As the name suggests, virtual machines are all virtual: a hypervisor creates virtual disks for storage, virtual CPUs, virtual NICs, virtual RAM, etc. On top of that virtualised hardware sits the guest kernel, which facilitates communication between the operating system and the (virtual) hardware. Above that are the operating system and all your applications.
At this point, the stack is quite large; VMs aren’t exactly lightweight, and this impacts how densely you can pack the host.
I mentioned a “hypervisor” a minute ago. I’ve explained what hypervisors in general do, but there are actually two different kinds of hypervisor. They’re creatively named Type 1 and Type 2.
Type 1 hypervisors ¶
These run directly in the host kernel without an intermediary OS. A good example would be KVM, a hypervisor that runs in the Linux kernel itself. Type 1 hypervisors can communicate directly with the host’s hardware to allocate RAM, issue instructions to the CPU, etc.
Type 2 hypervisors ¶
These run in userspace as an application, like VirtualBox. Type 2 hypervisors have to first go through the operating system, adding an additional layer to the stack.
Containers ¶
VMs use virtualisation to achieve isolation. Containers use namespaces and cgroups, technologies pioneered in the Linux kernel. By now, though, there are equivalents for Windows and possibly other platforms.
Linux namespaces partition kernel resources like process IDs, hostnames, user IDs, directory hierarchies, network access, etc. This prevents one collection of processes from seeing or gaining access to data regarding another collection of processes.
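If you want to see what that looks like in practice, below is a minimal sketch in Go (chosen because much of the container ecosystem is written in it) of asking the kernel for new namespaces before running a shell. The particular flags, the /bin/sh child, and the root requirement are illustrative assumptions; real container runtimes do a lot more on top of this.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in fresh UTS, PID, and mount namespaces (Linux-only,
	// typically needs root). Inside, changing the hostname only affects
	// this namespace, and the shell becomes PID 1 of its own PID namespace.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```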
Cgroups limit, track, and isolate the hardware resource use of a collection of processes. If you tell a cgroup that it’s only allowed to spawn 500 child processes and someone executes a fork bomb, the fork bomb will expand until it hits that limit. The kernel will prevent it from spawning further children and you’ll have to resolve the issue the same way you would with VMs: delete and re-create it, restore from a good backup, etc. You can also limit CPU use, the number of CPU cores it can access, RAM, disk use, and so on.
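The cgroup v2 interface is just a directory tree of plain files, so a rough sketch of those limits in Go looks like the following. The /sys/fs/cgroup path, the “demo” group name, and the 500-task/200 MiB caps are assumptions for illustration; it needs root on a system with the unified cgroup hierarchy mounted and the pids/memory controllers enabled for child groups.

```go
package main

import (
	"os"
	"path/filepath"
	"strconv"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Create a new cgroup under the (assumed) unified hierarchy.
	cg := "/sys/fs/cgroup/demo"
	if err := os.Mkdir(cg, 0o755); err != nil && !os.IsExist(err) {
		panic(err)
	}
	// Cap the group at 500 tasks and 200 MiB of RAM (memory.max takes bytes).
	must(os.WriteFile(filepath.Join(cg, "pids.max"), []byte("500"), 0o644))
	must(os.WriteFile(filepath.Join(cg, "memory.max"), []byte(strconv.Itoa(200*1024*1024)), 0o644))
	// Move ourselves into the group; every process we spawn from now on,
	// fork bombs included, counts against those limits.
	must(os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(strconv.Itoa(os.Getpid())), 0o644))
}
```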
Application containers ¶
The most well-known example of application container tech is probably Docker. The goal here is to run a single application as minimally as possible inside each container. In the case of a single, statically-linked Go binary, a minimal Docker container might contain nothing more than the binary. If it’s a Python application, you’re more likely to use an Alpine Linux image and add your Python dependencies on top of that. If a database is required, that goes in a separate container. If you’ve got a web server to handle TLS termination and proxy your application, that’s a third container. One cohesive system might require many Docker containers to function as intended.
System containers ¶
One of the most well-known examples of system container tech is the subject of this post: LXD! Rather than containing a single application or a very small set of them, system containers are designed to house entire operating systems, like Debian or Rocky Linux, along with everything required for your application. Using our examples from above, a single statically-linked Go binary might run in a full Debian container, just like the Python application might. The database and web server might go in that same container.
You treat each container more like you would a VM, but you get the performance benefit of not virtualising everything. Containers tend to be much lighter than most VMs.1
When to use which ¶
These are personal opinions. Please evaluate each technology and determine for yourself whether it’s a suitable fit for your environment.
Virtual machines ¶
As far as I’m aware, VMs are your only option when you want to work with esoteric hardware or hardware you don’t physically have on-hand. You can tell your VM that it’s running with RAM that’s 20 years old, a still-in-development RISC-V CPU, and a 420p monitor. That’s not possible with containers. VMs are also your only option when you want to work with foreign operating systems: running Linux on Windows, Windows on Linux, or OpenBSD on a Mac all require virtualisation. Another reason to stick with VMs is for compliance purposes. Containers are still very new and some regulatory bodies require virtualisation because it’s a decades-old and battle-tested isolation technique.
Application containers ¶
Application containers are particularly popular for microservices and reproducible builds, though I personally think NixOS is a better fit for the latter. App containers are also your only option if you want to use cloud platforms with extreme scaling capabilities like Google Cloud’s App Engine standard environment or AWS’s Fargate.
Application containers also tend to be necessary when the application you want to self-host is only distributed as a Docker image and the maintainers adamantly refuse to support any other deployment method. This is a massive pet peeve of mine; yes, Docker can make running self-hosted applications easier for inexperienced individuals,2 but an application orchestration system does not fit in every single environment. By refusing to provide proper “manual” deployment instructions, maintainers of these projects alienate an entire class of potential users and it pisses me off.
Just document your shit.
System containers ¶
Personally, I prefer the workflow of system containers and use them for everything else. Because they contain entire operating systems, you’re able to interact with them in much the same way as VMs or even your PC: you shell in, apt install whatever you need, set up the application, expose it over the network (for example, on 0.0.0.0:8080), proxy it on the container host, and that’s it! This process can be trivially automated with shell scripts, Ansible roles, Chef, Puppet, whatever you like. Back the system up using tarsnap, rsync.net, or restic pointed at Backblaze or Google Drive. If you use ZFS for your LXD storage pool, maybe go with syncoid and sanoid.
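To make “trivially automated” concrete, here’s a rough sketch of that workflow driven from Go by shelling out to the lxc client. The container name, the image alias, and nginx as the example package are all assumptions, and the exact alias depends on which image remotes your LXD installation has configured.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run invokes the lxc command-line client and aborts on any failure.
func run(args ...string) {
	cmd := exec.Command("lxc", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "lxc %v: %v\n", args, err)
		os.Exit(1)
	}
}

func main() {
	// Launch a Debian system container.
	run("launch", "images:debian/12", "web")
	// Install and configure the application inside it, just as you would on a VM.
	run("exec", "web", "--", "apt-get", "install", "-y", "nginx")
}
```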
My point is that using system containers doesn’t mean throwing out the last few decades of systems knowledge and wisdom.
I wrote a follow-up post with a crash course to actually using LXD in the real world along with a few configuration tips.