You know, if you use Docker for the right reasons, in the right way, it can be a real time saver.

I'm not really sold on the way Docker specifically tries to be fire-and-forget easy (it often succeeds in being silly and you have to work around it), but containers are good.

There's a lot of very useful software that just expects it can do whatever it wants on a system: install its own dependencies from git, run its own version of Python, and so on.

You may want to run such software in production, and it is an absolute maintenance nightmare if you install all the dependencies via the system package manager. Upgrading starts with rebuilding and repackaging every dependency. Not fun, and not a good use of time.

For such software, containers make it feasible to administer it as somewhat manageable units: you create the Dockerfile or whatever to install the software you want, run the resulting image on a host, and bring configuration in via Puppet or something. Then, when you need to upgrade, you rebuild your image and deploy it on the host; rollbacks are trivial, as is scaling or creating test environments.
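For illustration, a minimal sketch of that approach; the base image, repository URL, and paths are all made up:

    FROM debian:bookworm-slim

    # Let the software do whatever it wants inside the image instead of
    # on the host: fetch from git, set up its own Python, and so on.
    RUN apt-get update && apt-get install -y git python3 python3-venv
    RUN git clone https://example.com/some-app.git /opt/some-app
    RUN python3 -m venv /opt/some-app/venv \
        && /opt/some-app/venv/bin/pip install -r /opt/some-app/requirements.txt

    CMD ["/opt/some-app/venv/bin/python3", "/opt/some-app/run.py"]

Upgrades and rollbacks then come down to swapping image tags:

    docker build -t some-app:2.1 .
    docker run -d --name some-app some-app:2.1
    # Rollback: remove the container and start the previous tag again.
    docker stop some-app && docker rm some-app
    docker run -d --name some-app some-app:2.0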

You can of course achieve the same sort of thing with virtual machines, provided you have good orchestration. However, a deployment often consists of multiple software components that are separate units but can (or must) share a single host.
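A sketch of that multi-component case, assuming Docker Compose (the service names and image tags are hypothetical); each component stays a separate unit while sharing the host:

    services:
      web:
        image: my-frontend:1.4    # hypothetical images
        ports:
          - "8080:8080"
      worker:
        image: my-worker:1.4
        depends_on:
          - queue
      queue:
        image: redis:7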

I think that, so long as Linux containers don't actually offer real security, that's the niche where they are useful.


