
Containers, by themselves? Very little. Containers, as implemented by Docker, rkt and systemd? Quite a bit.

Disk: the technologies used for disk isolation (save chroots) perform very poorly, and in some cases can cause resource contention between what would otherwise appear to be unrelated containers. As an example, using AUFS with Node creates a situation where any containers running on the same file system can only run one at a time, regardless of the number of cores. It's silly. Device mapper, on the other hand, is just plain slow (and buggy, when used on Ubuntu 14.04).
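For what it's worth, you can at least see which driver you're on and pick a different one. A sketch, assuming Ubuntu's /etc/default/docker and a daemon that takes --storage-driver (not runnable without a Docker daemon present):

```shell
# See which storage driver the daemon is actually using
# (e.g. "Storage Driver: aufs"):
docker info | grep 'Storage Driver'

# Switch drivers via the daemon flag. Note that images do not
# migrate between drivers, so expect to re-pull everything.
echo 'DOCKER_OPTS="--storage-driver=overlay"' >> /etc/default/docker
service docker restart
```

Swapping drivers trades one set of bugs for another, but at least the one-writer-at-a-time AUFS behaviour goes away.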

Network: the extra virtual interfaces, NAT, and isolation all come with a performance penalty. For small payloads, this manifests as a few milliseconds of extra latency. For transferring large files, it can cost you up to half of your throughput. Worse, if you have two Docker containers side by side but, due to your discovery mechanism, one container uses the host device to talk to the other, you create what is known as asymmetric TCP, which can cut your performance by a fifth or more. Try it out sometime; it's entertainingly frustrating to figure out.
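One blunt way around all of that is to not use the bridge at all. A sketch (the image name "myapp" is hypothetical, and this needs a running daemon):

```shell
# Share the host's network namespace instead of the bridge + veth +
# NAT stack. You lose port isolation, but the latency and throughput
# penalties above disappear.
docker run --net=host myapp
```

If you do stay on the bridge, at least make sure both directions of a container-to-container conversation take the same path (both via docker0), or you get exactly the asymmetric setup described above.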

Security: my favorite. What's the point of creating a container for your application if you're going to include the entire OS (and typically not even bother to update it with security patches)? A really simple DoS on Docker boxes would be to get a process to fill the "virtual" disk with cruft. You'll impact all running processes and the underlying OS (/var/lib/ is typically on the same device as /), and create such a singularly large file that it's usually easier to drop the entire thing and re-pull the images than to try to trim it down.
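To show how cheap that disk-fill is, here's the host-side equivalent (paths and sizes are illustrative; run inside a container, the same write lands on the shared device backing /var/lib/docker):

```shell
# A single unprivileged writer is all it takes. On the host this is
# just a scratch file; inside a container it fills the shared backing
# device, starving every other container and, since /var/lib is
# usually on /, the host OS too.
dd if=/dev/zero of=/tmp/cruft bs=1M count=100 2>/dev/null
ls -lh /tmp/cruft    # 100M of junk, no special rights needed
rm /tmp/cruft
```

The only real mitigation I know of is keeping /var/lib/docker on its own partition so a runaway container can't take / down with it.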

Sorry if I sound down on the tech, but I've been fighting to make this work for production, and all of these little niggles are driving me batty.



I'm having much the same experience with Docker, and talking to other ops folks who actually put it into production, they usually report similar experiences.

Docker is fun and great when it's running on your workstation and coddled by your fingers at the terminal, but there are a lot of gotchas and missing parts when it comes to putting things into production, where they have to be taken care of in a hands-off manner. There still isn't an easy way to centralise logs from a container app's STDOUT. Yes, there are other containers you can install to ship logs (which work for their author's use case, not necessarily yours), or you can hack together something horrible. If you want to look at container logs, you have to have root rights: you can be in the docker group and have full control over the daemon, but the container log location is root-only, and is made afresh with every container. (And don't forget to rotate those logs!)
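One possible stopgap for the rotation part is a plain logrotate rule over Docker's default per-container log path (rotate count and size below are placeholder values; copytruncate because the daemon keeps the file open and has no reopen signal):

```shell
# Docker writes one JSON log file per container under
# /var/lib/docker/containers/<id>/ -- rotate them all.
cat > /etc/logrotate.d/docker-containers <<'EOF'
/var/lib/docker/containers/*/*.log {
  rotate 5
  size 50M
  copytruncate
  missingok
  compress
}
EOF
```

It doesn't solve centralisation, and logrotate needs root anyway, but it stops the logs eating the disk.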

My latest fun with Docker: one of my docker servers, built from the same source image and running the same configuration plan in Ansible as my other docker servers, fails to start Docker on boot. Some sort of race condition, I assume; basically, it fails to apply its iptables rules and dies. People talk about making problems go away with Docker, but it's a trope on my team that any day I'm working with Docker, I'll be spamming chat with problems I'm finding in it from an ops point of view. And I'm just a midrange sysadmin :) The point is that adding Docker adds an extra layer of debugging: the app stack still needs to be debugged, and now there's an extra abstraction layer that needs debugging too.
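If the race is the daemon starting before the network is usable, a systemd drop-in that orders docker after network-online may help (unit and target names are stock systemd; whether it cures this particular race is, of course, another question):

```shell
# Make the docker unit wait until the network is actually up before
# it tries to apply its iptables rules.
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/wait-online.conf <<'EOF'
[Unit]
After=network-online.target
Wants=network-online.target
EOF
systemctl daemon-reload
```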

Plus, in my particular case, there's the irony of using single-function VMs to run a docker container, which is running the same OS version as the VM :) (my devs bought into docker before I arrived...)



