boycott docker



What is Docker? Docker aims to be a helpful, convenient tool that simplifies creating and managing virtual machines/containers. Unfortunately it brings far more annoying, unexpected traps and problems. Docker is meant to be a pile of lightweight, simple tools relying on infrastructure already available in the operating system, providing a transparent virtualization layer on top of which dockerized software runs. In reality, most software has to be heavily modified to run under the conditions dictated by the Docker operating environment (let's call it DockerOS).

Who uses Docker? Docker is a vendor lock-in technology, convenient for and spread by cloud computing corporations. It is neither helpful nor easy to work with for free software developers who want to make their software portable, easy to debug and easy to support.

The advertised best features of Docker (source) are examined one by one below.

Better than VMs?

Docker's claims about virtual machines, versus reality:

• Claim: image sizes are very large. In reality an empty out-of-the-box chroot image (after debootstrap, for example) is the same for all VMs and even for Docker. Docker has a lower memory footprint only because it forces a single process per container: do not expect to meet SSH, syslog, an NTP client, a cron daemon or any other similar well-known software in it (see the sketch after this list).
• Claim: VMs consume significant CPU. That depends on the underlying technology. Hardware-assisted KVM with a virtio-capable operating system has exceptionally low overhead and context-switching times. You can run network-hungry applications in those environments at speeds comparable to direct hardware execution. In some cases CPU-hungry applications can benefit from lightweight containers, but heavy I/O and network usage will consume more resources, because Docker-related abstraction layers require kernel and userspace context switches.
  Such virtualization can sometimes have the advantage of lower memory usage and faster startup times, but at the cost of reduced security, stability and compatibility (source).
• Claim: competing VM environments don't play well with each other. One can run raw binary images under KVM, bhyve, VirtualBox, qemu, VMware and possibly others without any catches at all. At least KVM, qemu, VMware and VirtualBox are not required to run under GNU/Linux: competing host choices are available (FreeBSD, for example). Docker images can be run only under dockerized GNU/Linux (DockerOS). Moreover, as a rule, you will need a specific Linux kernel version compatible with the provided images.
• Claim: VMs were designed with machine operators in mind, not software developers. Possibly, who knows. In the case of KVM, a software developer hardly knows whether his program runs on real or virtual hardware. Docker, on the other hand, is designed exclusively with cloud computing providers in mind. Software developers are forced into vendor lock-in and into making their software work under strict restrictions like a single process per container. Some software needs to be rewritten: Postfix, for example, consists of several daemons, and all of them are part of a single monolithic abstraction from the software developer's point of view (which is incompatible with Docker's).
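
The image-size point in a short sketch (the suite, target path and mirror are illustrative, not from the original):

    # build an empty out-of-the-box chroot: the very same base
    # whether it later feeds a VM disk image or a Docker image
    debootstrap --variant=minbase jessie /srv/rootfs http://httpredir.debian.org/debian
    du -sh /srv/rootfs    # roughly the same footprint in either case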

Plays well with others?

Docker's claims about playing well with others, versus reality:

• Claim: Docker does not require you to buy into a particular programming language, framework, packaging system, or configuration language. Except for its environment constraints, behaviour and overall architecture. Your application has to be locked into Docker. FreeBSD Jails, VServer and OpenVZ containers are just an isolated chroot with optional network-stack separation: they do not impose any specific conditions on your software.
• Claim: is your application a Unix process? Does it use files, TCP connections, environment variables, standard Unix streams and command-line arguments as inputs and outputs? Then Docker can run it. My application uses UNIX domain sockets and TCP and UDP connections without predefined port ranges (it is FTP). It also likes to fork and change its process permissions, as many daemons do. Docker won't even be able to run Postfix (or FTP daemons): a bright example of a very high-quality, clever, very secure application written with the UNIX way in mind.
• Claim: can your application's build be expressed as a sequence of such commands? Then Docker can build it. So can absolutely any shell script with that sequence of commands. Docker is able to run a shell script? Great!
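
That last claim in a minimal sketch (the package names and build steps are hypothetical):

    #!/bin/sh -e
    # build.sh: the very same "sequence of commands" a Dockerfile's
    # RUN lines encode, runnable in any chroot or jail without Docker
    apt-get update
    apt-get install -y gcc make      # hypothetical build dependencies
    ./configure --prefix=/usr/local
    make
    make install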

No way to escape dependency hell

What we want, and what we get in reality:

• We want: deterministic builds. In reality: do your own, it is not Docker's business.
• We want: build automation. In reality: if you run a trivial shell script, Docker can execute it. Nothing more, nothing less.

The only dependency hell Docker even tries to escape is inter-daemon dependencies: this software requires a specific PostgreSQL version and a specific Redis version, and so on. It does not touch library or package dependencies at all; all that burden is left to the software developer. And exactly that kind of hell is the one people most want to escape.

In reality Docker won't help much even with that task. A software developer wants to work on, test and deploy the very image that will finally run in production; ideally a byte-for-byte copy of it. At the very least one wishes for deterministic image builds: if the underlying Debian package versions are updated from time to time, this should not affect image/container builds unless the developer explicitly asks for it.

All those tasks are left on the developer's shoulders. The Guix package manager solves them, but Docker cannot be integrated with it (no API, no hooks, nothing).
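
What deterministic builds can look like without any help from Docker, a sketch only: pin the package source to a snapshot.debian.org timestamp (the date and path are illustrative) so that archive updates never silently change the result:

    # every rebuild fetches exactly the same package versions,
    # no matter how far the live Debian archive has moved on
    # (older snapshots may need --no-check-gpg)
    debootstrap jessie /srv/image \
        http://snapshot.debian.org/archive/debian/20160608T000000Z/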

Debug hell

What we want, and what we get in reality:

We want: transparency for the software developer. In reality:
  • The only available interaction is the container API, which is not transparent at all.
  • Only a single-process-per-container architecture is acceptable.
  • stdout is buffered and does not behave like standard pipes and terminals.
  • Multithreaded and forking applications are not expected to work as you would like.
  • A dictatorship of immutable-code and mutable-data separation.

Do you wish to write your new application from scratch, dockerize it at the very beginning and observe an almost-ready production environment? Forget about it, do not fool yourself. Your process is isolated and you can talk to it only through Docker-created layers. Docker won't let you deploy SSH inside the container, and won't let you create the bunch of temporary files you so much wish to observe. If your daemon is meant to be controlled through some kind of socket with a special utility to communicate with it: you have to dockerize that utility too. Netcat: dockerize it first. Socat, tcpdump: same story again.
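
A sketch of the difference (PIDs, names and ports are illustrative):

    # on an ordinary host the daemon is just a process:
    ls -l /proc/1234/fd            # which files and sockets does it hold?
    strace -p 1234                 # what is it doing right now?
    tcpdump -i lo port 2121        # watch its traffic directly

    # under DockerOS every step goes through the container layer first:
    docker ps                      # find the container
    docker exec -it ftpd ls /proc/1/fd   # works only if ls exists in the image
    docker logs ftpd               # buffered stdout, not a live terminal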

Docker adds an intrusive layer of complexity which makes development, troubleshooting and debugging frustratingly difficult, often creating more problems than it solves. It doesn't have any benefits over deployment… (source).

Deployment hell

What we want, and what we get in reality:

• We want: a minimal barrier to entry for system administrators. In reality: an OS running Docker becomes DockerOS, and that is not a UNIX-like OS. All your standard default commands like ps, ls, find, netstat, sockstat, tail and the like are useless here. You have to learn yet another bunch of new commands with totally different behaviour (a few examples follow below).

Which daemons are so resource-hungry? Which one holds a huge number of sockets? By whom is this file descriptor taken? Who owns that network interface? Which daemon will answer me at that address:port? A system administrator can answer all of those questions at once, because all UNIX-like operating systems are very similar and have identical or resembling commands. But no sysadmin will be able to do that under DockerOS. All his knowledge is worth nothing in that environment. Everything must be done with completely different approaches and vendor-specific (Docker) tools.
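
A few examples of the retraining, assuming a container named mydaemon (the classic command first, its DockerOS counterpart after the arrow):

    ps aux                        # -> docker ps; docker top mydaemon
    tail -f /var/log/daemon.log   # -> docker logs -f mydaemon
    netstat -lnp                  # -> docker port mydaemon
    kill -HUP 1234                # -> docker kill --signal=HUP mydaemon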

Running VMs under an ordinary OS is nothing exceptional: just another process, perhaps a newly appeared network interface in a predefined bridge. DockerOS will thrash and mangle your iptables rules, network interface hierarchy and routing tables. In fact you cannot expect to run anything else network-related on a DockerOS machine. That is fine for big cloud computing datacenters, where firewalls and routers are external.
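
A quick look at what a freshly started Docker daemon typically leaves behind on the host (the defaults of that era; exact rules vary by version):

    ip link show            # a new docker0 bridge has appeared
    iptables -t nat -L -n   # a DOCKER chain plus a MASQUERADE rule
                            # for 172.17.0.0/16 have been injected
    ip route                # 172.17.0.0/16 is now routed via docker0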

If your development workflow is sane, then you will already understand that Docker is unnecessary. All of the features which it claims to be helpful are either useless or poorly implemented, and its primary benefits can be easily achieved using namespaces directly. Docker would have been a cute idea 8 years ago, but it's pretty much useless today. (source).

Reliability hell

What we want, and what we get in reality:

We want: time-proven software, KISS. In reality:
  • Layers of bridges, routing, NAT and userland proxies.
  • Even the latest stable Debian distribution ships a Linux kernel that is already too old for Docker.
  • aufs, btrfs: completely new creations, too young to judge, and without any doubt suicidal in mission-critical deployments.

Software that has been actively running for years, time-proven, is a synonym of reliability and maturity. Only that kind of software (never bleeding edge) is used in mission-critical cases. That is why, in military fields especially, we see such old (but proven and mature) Linux kernels.

Another synonym of reliability is simplicity. The KISS principle has always worked. That is why Docker does its best by making a multi-layered mess of iptables, NAT, bridges, cgroups, btrfs, aufs, device mapper, self-written TCP proxies and a stack of utilities to drive all of that.

You cannot easily and directly access your durable non-volatile data storage (it is separated away). Do you trust the rewritten aufs? Can you trust btrfs (marked as stable only in August 2014) at all? Moreover, there is no way to control filesystem page caches across containers: this is clearly not aimed at high-I/O-rate storage.
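
You can at least check which of those young filesystems your data ended up on (the output format differs between versions):

    docker info | grep -i 'storage driver'
    # Storage Driver: aufs    (or btrfs, devicemapper, overlay...)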

Networking: a step back into ancient ages

What we want, and what we get in reality:

• We want: IPv6, and death to IPv4. In reality: the NAT, proxies and addressing scheme used in Docker are fully incompatible with IPv6.
• We want: death to NAT, in all cases. In reality: if you see DockerOS, be sure NAT is enabled.
• We want: high performance. In reality: multiple network layers, userland context switching and NAT will make you forget about speed and low delays.
• We want: leave our firewall rules alone! In reality: nope.

For a long time even Microsoft Windows Server, so hated by UNIX men, hackers and so on, has played well with IPv6 and even suggests building networks exclusively with that modern, superior network protocol. iOS, Android, all modern GNU/Linux and *BSD systems work with IPv6 out of the box. It is a beautiful protocol, very convenient for network administrators. All software everywhere must be able to deal with it. For datacenters in particular there is no reason to avoid using it internally. But we are talking about Docker: it does not support IPv6 at all. You cannot run IPv6-capable software under it except through fully manual interaction and configuration.
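
"Fully manual interaction" means roughly this kind of daemon-level surgery, a sketch using the experimental daemon flags of that era (the prefix is illustrative):

    # IPv6 is off by default; you must restart the whole daemon
    # with a prefix you route and manage yourself
    docker daemon --ipv6 --fixed-cidr-v6=2001:db8:1::/64
    # port publishing, meanwhile, still relies on the IPv4 NAT machinery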

People have hated NAT for ages; they hate spammers and they hate NAT technology. It is a thing that should never have been born: a dirty, ugly hack ruining the architectural principles of the Internet. No NAT, burn it, use IPv6, period. Unfortunately Docker uses it intensively out of the box. It is NAT locked-in. Worse yet: Docker also likes to run a userland TCP proxy per container. NAT at least works in kernel context (while remaining very CPU-hungry anyway); with userland proxies, performance and response times are doomed.
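
The userland proxy is easy to observe: with default settings, publishing a single port spawns a dedicated docker-proxy process per mapping (the image name and ports are illustrative):

    docker run -d -p 8080:80 --name web nginx
    ps ax | grep docker-proxy    # one userland proxy process per published port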

The choice of NAT was an awful decision, but having identified that too late, they proceeded to create userland garbage-collected NAT clone.

Security. Never heard of it

What we want, and what we get in reality:

• We want: repository authentication and image authentication. In reality: no authentication at all. Just trust Docker Hub.
• We want: SELinux and AppArmor friendliness. In reality: Docker does not deal with them at all; it would be easier to say it is impossible.

PGP has long been used extensively for signing distribution package files. If you trust your vendor, then cryptography deterministically and reliably extends that trust to its binary distributions (OpenBSD has its own package-signing tools). In Debian and Ubuntu you have to import the signing keys first to be able to fetch the repositories at all. Docker is much simpler: it does not do any repository or image verification whatsoever. If there were any security in mind, no one would even think of using binary images from Docker Hub, because no trust is assigned to or linked with them. Unauthenticated repositories are acceptable at the intranet level at most.
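
The difference in one screenful (key handling as in the Debian/Ubuntu of that era; the image tag is illustrative):

    # Debian/Ubuntu: a repository is useless to apt until you trust its PGP key
    apt-key list        # the keys your package manager already trusts
    apt-get update      # fails loudly on a bad repository signature

    # Docker: a name is all the "trust" an image carries
    docker pull debian:jessie    # no signature check, no key, no web of trust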

Docker knows nothing about either the SELinux or the AppArmor MAC framework, both mandatory in high-security-grade environments, and provides no helpers to work with them. That also means cloud datacenters do not worry much about valuable consumer data: cgroups isolation is good enough for them. Secure environments? Not applicable to DockerOS.


What can you do


See also


You can send something to admin boycottdocker dot org.

Last updated: 2016-06-08 11:00.