One of the most interesting presentations at DockerCon was on the implementation of containers (and Docker specifically) on the Microsoft Windows Server platform. The Microsoft presentation was given by John Starks and Taylor Brown, although John did the majority of the presenting. You can find the video online here. Implementing containerisation for Windows has obviously been hard. There are some major challenges to overcome, including implementing the equivalent of namespaces, resource controls and a union file system.
One of the most difficult challenges is the difference in the way Linux/Unix and Windows manage system calls. Linux exposes a stable, documented syscall interface; Windows does not. Instead, system calls are made through DLLs, which cover even basic functionality like DNS lookups. The result is that even an empty Windows container has many background system processes running, and the architecture means that calls can't be made directly to the underlying host operating system (this is my assumption from the content of the presentation). Rather than expose all of the container functionality directly, Microsoft has implemented a shim layer that exposes functions up to the Docker Engine. This is known as the Host Compute Service shim and is open source. You can find it on GitHub here.
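To make the idea of the shim concrete, here's a minimal sketch of the pattern: a thin adapter that presents a narrow, engine-friendly interface upward while translating requests into whatever the platform's host service expects. This is purely illustrative — the real hcsshim is written in Go and these class and method names are invented for the example:

```python
# Illustrative sketch of a "shim" layer: the container engine only ever
# talks to the shim, never to the platform service directly.
# All names here are hypothetical, not the real hcsshim API.

class WindowsHostComputeService:
    """Stand-in for the platform-level service the shim wraps."""
    def create_container(self, config):
        # In reality this would drive silos, namespaces, the file system, etc.
        return {"id": config["name"], "platform": "windows"}

class HostComputeShim:
    """Exposes a small, stable interface up to the container engine."""
    def __init__(self, hcs):
        self._hcs = hcs

    def run(self, name, image):
        # Translate the engine's request into the host service's format.
        config = {"name": name, "image": image}
        return self._hcs.create_container(config)

shim = HostComputeShim(WindowsHostComputeService())
container = shim.run("web01", "windowsservercore")
print(container["id"])  # → web01
```

The design benefit is the one the presentation hints at: the engine stays portable, and all the platform-specific complexity lives behind the shim.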
There are a number of other issues Microsoft had to overcome including:
- Image Size. If you’re a regular Microsoft follower, you will know that for some time, Microsoft has been providing non-GUI versions of their operating system, in both a “core” format and more recently an even more cut-down version called Nano. These are obviously a perfect fit for being run as containers and managed remotely, using either PowerShell or standard GUI management tools. However, Nano is a 600MB+ download and Server Core is around 9GB, neither of which is really practical for public download over the Internet.
- Namespaces. Microsoft created something called silos, an extension of the existing Windows job object (I had to dig out my Win32 books for a refresher – here’s an online link). Silos allow virtualisation of multiple namespaces, including the Registry, process IDs, file systems and the network. Some of the changes here aren’t trivial. For example, each container needs to see a separate copy of the Registry, and that means creating a copy/image each time a container is launched. This has performance implications on startup, with Windows containers taking longer to instantiate than Linux-based ones.
One of the interesting points here is that there is an object namespace hidden from normal users (and even developers) that maps objects (such as drives, the Registry and network devices) into a single object tree. This means multiple silos can be created within the same tree hierarchy, with each silo mapping to a container. It allowed Microsoft to cut the objects mapped into each container down to only those required to run it. Having said that, the demo (around 22:41 in the YouTube video) still shows a mapping to a printer device. In the hierarchy, a job ID maps to a container.
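The per-container Registry copy described above is the expensive alternative to a copy-on-write scheme, where reads fall through to a shared base and only writes occupy private space. Here's a toy sketch of that trade-off — purely conceptual, since Windows Registry hives are obviously not Python dictionaries:

```python
# Conceptual copy-on-write "registry": reads fall through to a shared
# base; writes land in a private per-container overlay. This is an
# illustration of the technique, not how Windows implements hives.

class CowRegistry:
    def __init__(self, base):
        self._base = base      # shared, read-only host registry
        self._overlay = {}     # private per-container changes

    def get(self, key):
        return self._overlay.get(key, self._base.get(key))

    def set(self, key, value):
        self._overlay[key] = value  # the base is never mutated

host = {r"HKLM\Software\Version": "10.0"}
c1 = CowRegistry(host)
c2 = CowRegistry(host)
c1.set(r"HKLM\Software\Version", "10.1")

print(c1.get(r"HKLM\Software\Version"))  # → 10.1
print(c2.get(r"HKLM\Software\Version"))  # → 10.0 (unaffected)
```

With copy-on-write, container startup avoids cloning the entire base state up front, which is exactly the startup cost the presentation attributes to copying the Registry per launch.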
NTFS & Union File System
Implementing a union file system (UFS) on NTFS was another challenge. A UFS allows multiple Docker images to be based off the same core image, with each layer representing a change to the files in the image itself. Those changes might add an application or alter configuration settings. Layering allows an image to be reused as the basis for another image, and saves space and time downloading images; if a base image is already downloaded, only the changes in the later image(s) have to be fetched.
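The layering logic itself is simple to sketch: a lookup walks the layers from newest to oldest and returns the first match, so an upper layer's version of a file shadows the base. This is a minimal illustration only — real union file systems also handle deletions (whiteouts), metadata and much more:

```python
# Minimal sketch of union-file-system lookup: layers are searched
# top-down, so the newest layer containing a path wins. Illustrative
# only -- real implementations handle whiteouts, permissions, etc.

def resolve(path, layers):
    """layers[0] is the topmost (newest) layer; each layer maps path -> content."""
    for layer in layers:
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

base = {"/app/run.exe": "v1", "/config.ini": "debug=0"}   # shared base image
update = {"/config.ini": "debug=1"}                        # layer changing one file

print(resolve("/config.ini", [update, base]))   # → debug=1 (shadowed by top layer)
print(resolve("/app/run.exe", [update, base]))  # → v1 (falls through to base)
```

Two images built on the same base can share the `base` layer on disk; only their small top layers differ, which is where the download and storage savings come from.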
Microsoft decided not to create a completely new UFS layer on NTFS, but instead gives each container its own NTFS file system. Individual files are then linked back to the source (on the host) via symlinks (or reparse points). A new container NTFS file system therefore contains only a little metadata, and builds up over time as new files are written to the container. Incidentally, the implementation of the Registry for containers is based on a UFS, which was done to reduce the amount of cloning required to create a full set of Registry hives per container.
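The link-back-to-the-host idea can be demonstrated with ordinary symlinks, standing in for NTFS reparse points. In this sketch the container's file tree starts as cheap links to host files, and only files the container actually writes take up space in its own layer (the directory and file names are invented for the example):

```python
# Sketch of a container file system whose initial contents are just
# links back to host files. POSIX symlinks stand in for the NTFS
# reparse points the presentation describes; paths are invented.

import os
import tempfile

host = tempfile.mkdtemp(prefix="host_")
container = tempfile.mkdtemp(prefix="ctr_")

# A file that lives once on the host...
with open(os.path.join(host, "kernel32.dll"), "w") as f:
    f.write("host bytes")

# ...appears in the container as a link, costing almost no space.
os.symlink(os.path.join(host, "kernel32.dll"),
           os.path.join(container, "kernel32.dll"))

# A file the container writes lands in its own layer as a real file.
with open(os.path.join(container, "app.log"), "w") as f:
    f.write("container bytes")

print(os.path.islink(os.path.join(container, "kernel32.dll")))  # → True
with open(os.path.join(container, "kernel32.dll")) as f:
    print(f.read())  # → host bytes (read straight through the link)
```

The payoff is the one the post describes: a freshly created container file system is mostly metadata, and it grows only with the container's own changes.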
Kicking the Tyres
Container support is already available in Windows Server 2016 Technical Preview 5 (W2K16TP5) and Windows 10. Both of these are still previews and not final code, so the implementations may change from where they are today. I’ve tried out Docker on W2K16TP5 and at this stage it’s a little clunky to get installed and running (for instance, there’s no working equivalent of docker pull yet). However, once there, the look and feel is just like Docker on Linux. There’s additional background and detail on the installation process available here.
The Architect’s View
Microsoft seems really committed to getting container and Docker integration working, and there are probably good reasons for this. Windows has always had issues multi-tasking multiple applications on a single O/S instance. Anyone who has experienced “DLL Hell” will know exactly what I mean. Containers provide an avenue to run multiple applications per host, each isolated in its own container, avoiding some of the DLL clash issues. Each container can be lightweight and based on Nano where possible. This opens the door to running Windows workloads much more efficiently, without the need to deploy full VMs and all the management overhead they attract.
However there are still some unanswered questions. How will licensing work? Do I buy a single Windows licence that lets me run as many container instances as I like? Will applications work directly with Windows Containers? How much vendor customisation will be necessary? It’s early days, but I like where Microsoft is headed.