
Out of interest, what control plane do you use for a Hetzner/metal setup? Kubernetes ecosystem?

I use Coolify for side projects, but I haven’t investigated whether I’d want to use it for bigger/more important stuff.

A surprising number of solutions can be realized in ways that don't actually need much of a control plane if you introduce a few design constraints.

But if you do need one, Kubernetes is probably the safe bet. Not so much because I think it is better or worse than anything else, but because you can easily find people who know it and it has a big ecosystem around it. I'd probably recommend Kubernetes if I were forced to make a general recommendation.

That being said, this is something I've been playing with a bit over the years, exploring both ends of the spectrum. What I've realized is that we tend to waste a lot of time on this with very little to show for it in terms of improved service reliability.

On one extreme, we built a system with most of the control plane as a layer inside the server application itself. External to that, we monitored performance and had essentially one lever: add or remove capacity. The coordination layer in the service figured out what to do with additional resources, or how to cope with resources disappearing. There was only one binary, and the service would configure itself to take on one of several roles as needed, all the way down to taking on every role if it was the last process running. (Almost nobody cares about the ability to scale all the way down, but it is nice when you can demo your entire system on a portable rack of RPis and then turn them off one by one without the service going down.)
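Not the actual system, but a minimal sketch in Go of what "one binary that picks its own roles" can look like. The role names and the peer-list mechanism are invented for illustration; the real coordination layer would obviously be richer:

    package main

    import (
    	"fmt"
    	"sort"
    )

    // Role names are purely illustrative.
    type Role string

    const (
    	RoleFrontend Role = "frontend"
    	RoleWorker   Role = "worker"
    	RoleStorage  Role = "storage"
    )

    // assignRoles decides which roles this node should run, given the set of live
    // peers (including itself), discovered however you like: gossip, a static
    // list, DNS. Every node reaches the same answer without extra coordination,
    // and a lone survivor takes on every role, which is what lets the system
    // scale all the way down to a single process.
    func assignRoles(self string, livePeers []string) []Role {
    	peers := append([]string(nil), livePeers...)
    	sort.Strings(peers)

    	if len(peers) <= 1 {
    		return []Role{RoleFrontend, RoleWorker, RoleStorage}
    	}

    	// Deterministic spread by position in the sorted peer list.
    	idx := sort.SearchStrings(peers, self)
    	switch idx % 3 {
    	case 0:
    		return []Role{RoleFrontend}
    	case 1:
    		return []Role{RoleWorker}
    	default:
    		return []Role{RoleStorage}
    	}
    }

    func main() {
    	fmt.Println(assignRoles("node-a", []string{"node-a", "node-b", "node-c"})) // [frontend]
    	fmt.Println(assignRoles("node-a", []string{"node-a"}))                     // every role
    }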

On the other extreme is taking a critical look at what you really need and realizing that if the worst case means a couple of hours of downtime a couple of times per year, you can make do with very little. Just systemd, deb packages, and SSH access is sufficient for an awful lot of the more forgiving cases.
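For a sense of scale, the whole "control plane" in that scenario can be a unit file shipped in the deb plus an scp, dpkg -i, and systemctl restart over SSH. The service name and paths below are made up for illustration:

    # /etc/systemd/system/myapp.service -- hypothetical name and paths
    [Unit]
    Description=myapp
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    RestartSec=5
    User=myapp

    [Install]
    WantedBy=multi-user.target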

I also dabbled a bit in running systems by having a smallish piece of Go code remote-manage a bunch of servers running Docker. People tend to laugh at this, but it was easy to set up, easy to understand, and it took care of everything the service needed. The Kubernetes setup that replaced it has had 4-5 times as much downtime. But to be fair, the person who took over the project went a bit overboard and probably wasn't the best qualified to manage Kubernetes to begin with.

It seems silly not to take advantage of Docker having an API that works perfectly well. (I'd look into Podman if I were doing this again.)
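For the curious, here is a minimal sketch of that kind of remote management using a recent version of the official Go SDK (github.com/docker/docker/client). The remote-host setup is an assumption for illustration, not a description of the system above:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/api/types/container"
    	"github.com/docker/docker/client"
    )

    func main() {
    	ctx := context.Background()

    	// FromEnv reads DOCKER_HOST, so pointing it at a remote daemon (e.g. a
    	// tcp:// endpoint behind TLS, or ssh:// via the docker/cli connhelper
    	// package) is enough to manage another machine.
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	// A bare-bones "is everything up" check: list the running containers.
    	containers, err := cli.ContainerList(ctx, container.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range containers {
    		fmt.Printf("%s  %s  %s\n", c.ID[:12], c.Image, c.Status)
    	}
    }

From there it is a small step to pulling a new image and restarting a container with the same client, which is most of what a simple deploy needs.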

I don't understand why more people don't try the simple stuff first when the demands they have to meet easily allow for it.



