Using k8s to deploy is easy, and setting up a cluster with the 'new' admin command (kubeadm) is also straightforward...
Doing maintenance on the cluster isn't. Neither is debugging routing issues with it, and configuring production-worthy routing in the first place isn't easy either. It's only quick if you deploy weave-net and call it a day.
I would strongly discourage anyone from using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning, and configuration.
Very few people who suggest using Kubernetes are suggesting using kubespray or kubeadm. 99% of companies will want to just pay for a managed Kubernetes cluster, which, for all intents and purposes, is basically AWS ECS with more features and less vendor lock-in.
It should also be noted that all "run your code on machines" platforms (like ECS) have similar issues. I remember using ECS pre-Fargate and dealing with a lot of hardware issues on the instance types we were on. It was a huge time sink.
> it's only quick if you deploy weave-net and call it a day
That's exactly the benefit of kube. If something is a pain, you can walk up to one of the big players and get an off-the-shelf solution that works, and spend very little time integrating it into your deployment. No CloudFormation stacks or other mess. Just send me some YAML and tell me which annotations to set on my existing deployments.
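To sketch what that integration story usually looks like: the vendor hands you manifests to apply, and your only change is an annotation on a Deployment you already have. The annotation key below is made up for illustration; real vendors document their own.

```yaml
# Hypothetical sketch: adopting an off-the-shelf add-on by annotating
# an existing Deployment. The annotation key is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    example-vendor.io/inject: "true"   # made-up key; see your vendor's docs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder image
```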
> I would strongly discourage anyone using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning and configuration
If you have compute requirements at a scale where it makes sense to manage bare metal, it should be pretty easy for you to find budget for 2 to 5 people to manage your fleet across all regions.
Hi. I run my production seven-figure-ARR SaaS platform on Google-hosted k8s. I spend under 10 minutes a week on Kubernetes; basically I give it an hour every few months. Otherwise it is just a super awesome, super stable way for me to run a bunch of bin-packed Docker images. I think it's saved me tons of time and money over Lambda or ECS.
It’s not F500 scale, but it’s over 100 CPU scale. Confident I have a ton of room to scale this.
If you end up making a blog post about how you do your deployments/monitoring and what it's enabled you to do I think it'd be a great contrast to the "kubernetes is complicated" sentiment on HN.
This sounds like fun. Kind of a “how to use Kubernetes simply without drowning”. Though would it just get downvoted for not following the hacker news meme train?
I have heard of people taking "years" to migrate to kube, but only on HN, and only at companies whose timelines for "let's paint the walls of our buildings" stretch into the decades. But even once you move, you get benefits that other systems don't have.
1. Off the shelf software (even from major vendors [0])
2. Hireable skill set: You can now find engineers who have used kubernetes. You can't find people who've already used your custom shell scripts.
3. Best practices for free: zero-downtime deploys can now be a real thing for you.
4. Build controllers/operators for your business concepts: rather than manually managing things, make your software declaratively configured.
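On point 3, a minimal sketch of what "zero-downtime deploys for free" looks like in practice: a rolling-update strategy plus a readiness probe, so old pods are only retired once new ones are actually serving. The image name and probe path below are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: web:2.0          # placeholder image
          readinessProbe:         # traffic shifts only once this passes
            httpGet:
              path: /healthz      # placeholder health endpoint
              port: 8080
```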