
Ever tried to security update a container?

The whole point of using a container is that you can destroy it and build a new one easily. The new one should be built using up-to-date packages with security patches applied (and tested, obvs). Using the 'pets versus cattle'[1] analogy, patching a container feels like you're treating it like a pet. You should just kill it and get a new one instead.

[1] https://thenewstack.io/how-to-treat-your-kubernetes-clusters...



The whole point of having a distro with a good reputation is that you can incrementally outsource your trust to them, one package at a time. OpenSSL needs a fix? That's one package update per machine, and if you have lots of machines, you roll that one update out to all of them.

With VMs, you can use the same tools you use with any other machine.

With containers, you are responsible for figuring out which ones need which packages and how to re-build them. Your method is likely to be idiosyncratic, so you not only need to be your own security team, you need to be your own packaging maintainer and infrastructure maintainer.

The two extreme versions of this are the neglect model, in which you just don't care about anything, and the bleeding-edge model, where you automatically build with the latest versions of everything pulled from their origins. Both ways end up with unexpected security disasters.


I don't understand why you think it's hard to tell your CI/CD pipeline to run itself on a schedule. I find this totally consistent with my tooling (I have CI/CD in place for many different reasons, this is one of them) and I don't know how this CI/CD cron makes me "packaging maintainer and infrastructure maintainer".

"With containers, you are responsible for figuring out which ones need which packages and how to re-build them."

Yeah, you need to know how to rebuild your world in ANY CASE. It's not an argument against containers that they require an approach that is the best practice.


The "one package at a time" model assumes you care about per-instance uptime and bandwidth costs. The "reset the world" model assumes rebuilds and reboots are cheap enough that you don't have to.

I don't think either is necessarily wrong; there's not a huge distance between `apt-get install unattended-upgrades` and a daily (weekly, whatever) container rebuild except that you need to cron the latter. Unless you're using a different OS between the two cases, they can use the same package database, the same tooling, and so on.
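For the VM side, that really is a one-time setup. This sketch assumes a Debian/Ubuntu box (the distro isn't specified upthread):

```shell
# One-time setup on a Debian/Ubuntu VM; after this the distro applies
# security updates on its own schedule.
apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades  # enables the periodic apt timer
```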


You need to make the decisions, keep track, and do the work. If you've got the tools made for you, excellent. Otherwise you also have to build and maintain the tools -- which is my point.


There are no more decisions to make one way or the other. If you're keeping a VM image updated with new packages, that's either a) automated with something like unattended-upgrades, or b) a manual scan of DSAs with decisions as to whether they're important enough to apply and which boxes they need to be applied to. If you're rebuilding a container image, that's either a) a 2am cron job to do a blind rebuild that captures the latest package versions anyway, or b) a manual scan of DSAs with decisions etc etc...

The 2am cron job, if you go that way, is, assuming you've got an automated build anyway, a one-line bit of config somewhere. It's not something I'd usually think the words "build and maintain" would apply to.
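Concretely, that one-line bit of config can be a single crontab entry. The image name, registry, and build path here are made-up placeholders:

```shell
# Nightly at 02:00: rebuild with --pull and --no-cache so fresh base
# layers (and the latest distro packages) are picked up, then push.
0 2 * * * docker build --pull --no-cache -t registry.example.com/myapp:nightly /srv/myapp && docker push registry.example.com/myapp:nightly
```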


If you can build on your distro of choice, you can rely on the same security audit and process. Keep track of the packages you link against and include in your image, and just rebuild your images when a new security update comes along. Then you replace the running one.


See that "keep track" bit? It's expensive unless you build a tool to do it for you -- which is what I said previously.


Pulling the list of security updates is trivial. And during development you'll know what you depend on. Comparing the two in a script is trivial.
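A minimal sketch of that comparison. File names and contents here are made-up example data; on a Debian-based image the security list could come from something like `apt-get --just-print upgrade` filtered for the security suite:

```shell
#!/bin/sh
# deps.txt: packages your image installs, recorded during development.
# (Example data, not real output.)
printf 'libc6\nopenssl\nzlib1g\n' > deps.txt

# security_updates.txt: packages with pending security fixes.
printf 'linux-image-amd64\nopenssl\n' > security_updates.txt

sort -u deps.txt > deps.sorted
sort -u security_updates.txt > sec.sorted

# comm -12 prints only the lines common to both sorted files
overlap=$(comm -12 deps.sorted sec.sorted)
if [ -n "$overlap" ]; then
    echo "security fix touches: $overlap"   # time to rebuild the image
fi
```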

But regardless, with containers, it should be a setup where you can rebuild your images daily with all the latest security fixes. Or whenever there is a fix you deem important.


You don't have to keep any more track than you do with your OS; it's just a different button you hit when updating containers.

Or don't keep track at all: Just rebuild and redeploy often enough.


Except the OS is one place, one org. With containers, the number of upstreams you have to trust can grow exponentially.


Supply chain attackers love this.


The easiest analogy is that containers are like binaries. If you find a problem in nginx, you don't go in and patch /usr/bin/nginx by hand; you uninstall that one and replace it with a binary that has the problem fixed.


It depends on where you're getting the new one from.

You could be getting mad cows every time.

I think it would be nicer if you could build containers from scratch that didn't tie into a cloud based website or someone's business model.

Sort of like a Dockerfile, but for the prerequisites. Maybe go even deeper, to Gentoo level.



