For my workload I've struggled to see the advantage containers would give me. Maybe someone here can convince me, rather than just repeating the current justification of 'docker all the things'.
We have servers that handle a lot of traffic. Our service is the only thing running on each machine and consumes all of its resources: it needs all the RAM, and all 16 vCPUs sit at ~90%.
It's running on GCP. To roll out, we have a Jenkins job that builds a tag, creates a package (dpkg), and builds an image.
Another Jenkins job deploys the new image to all regions and starts the update process, autoscaling and all that.
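For concreteness, the build-and-roll-out flow described above usually reduces to a handful of shell steps inside the Jenkins jobs. This is a rough sketch under assumptions, not the poster's actual pipeline; every name in it (`myservice`, the disk, zone, instance group, and template) is a made-up placeholder:

```shell
# Hypothetical sketch of the two Jenkins jobs described above.
# All names (myservice, build-disk, myservice-group, myservice-template)
# are placeholders, not taken from the post.

# Job 1: tag the release, build the .deb, bake a GCE image
git tag "release-$(date +%Y%m%d-%H%M)" && git push --tags
dpkg-deb --build myservice/ myservice.deb
gcloud compute images create "myservice-$(git rev-parse --short HEAD)" \
  --source-disk=build-disk --source-disk-zone=us-central1-a

# Job 2: start a rolling update of the managed instance group
# (repeated per region), letting autoscaling carry on as usual
gcloud compute instance-groups managed rolling-action start-update \
  myservice-group --version=template=myservice-template \
  --region=us-central1
```

The point of spelling it out: a container-based pipeline replaces the image-baking half of this with `docker build` and `docker push`, but the tagging and rollout halves look much the same.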
If you already have all of that working, why would you change? Containers are valuable for a couple of things:
1. Packaging and distribution: it's very easy to set up a known-good filesystem using Docker images and reuse it. There are other solutions; dpkg plus Ansible would be an example.
2. Standardized control: running all apps with 'docker run' instead of a mix of systemd units and shell scripts can simplify things.
3. Lets you tie into higher-level orchestration layers like k8s, where you can view your app instances as a single thing. There are other solutions here as well.
4. You can use the same image on dev machines as in prod instead of maintaining two parallel setup schemes.
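Points 1, 2, and 4 above come down to one artifact and one command. A minimal sketch, assuming a hypothetical service packaged as a .deb (the name `myservice` and port 8080 are invented for illustration):

```shell
# Hypothetical Dockerfile wrapping an existing .deb package;
# 'myservice' and port 8080 are placeholders.
cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
COPY myservice.deb /tmp/
RUN dpkg -i /tmp/myservice.deb && rm /tmp/myservice.deb
EXPOSE 8080
CMD ["/usr/bin/myservice"]
EOF

# One known-good filesystem image (point 1)...
docker build -t myservice:v1 .

# ...started the same way everywhere (point 2), on a dev laptop
# or a prod host alike (point 4).
docker run -d --name myservice -p 8080:8080 myservice:v1
```

Note the Dockerfile can reuse the existing dpkg, so adopting containers doesn't mean throwing away the current packaging step.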
If you're already happy with your infra, certainly don't change it. I think once you know containers they're a convenient solution to those problems, but if everything is already set up, they've missed their shot.
Can containers help me here?