Hacker News

(Hi, I’m one of the authors of the article at the root of this thread.)

Considering your hypothetical stateless microservice change in the same root module as stateful services, problems arise when _someone else_ has merged changes that concern the stateful services, leaving you little room to apply your changes individually.

It’s also worth remembering that, even if a stateless service and a stateful service are managed in the same root module, applying changes is absolutely not atomic. Applying tightly coupled changes to two services “at the same time” is likely to result in brief service interruptions, even if everything returns to normal as soon as the whole changeset is applied.



Not who you replied to, but I was wondering if I could get your take on my use case.

I work on the systems test automation side of things. I use TF to deploy EC2 clusters, including the creation of VPC, subnet, SG, LBs, etc. Once done, I tear down the whole thing.

But from what I'm hearing from yourself and others in this thread, it sounds like I could (/should) be breaking those out separately, even though my use case isn't dev/test/prod(/prod2,/prod3) or even multi-regional.

To rephrase, it sounds like it might be useful for me to be able to create some separation, e.g., tear down the EC2 instances in a given VPC/subnet while I ponder what to do next, leaving the other AWS resources intact. Maybe even deploy another subnet to the same VPC, but with a different test intention. I know I can simply specify a different AMI, run tf apply, and get one kind of desired deployment change.

Bigger picture: when I need to run one test, I'll copy a dir from our GH repo, edit the tfvars, and kick things off. Another test (in parallel, even), I'll do the same but to a fresh dir. (Wash, rinse, repeat.) And I suspect you're already cringing :) but I get why. It makes me cringe. Ideally I'd be working with a single source of truth (local GH fork).

There's also the consideration of possibly making edits to the stack while I deploy/destroy, while at the same time wanting stability with another deployment for a day or two. I suppose that would require having 2 copies of the remote GH repo. Which is several copies fewer than I'm working with these days.

Fwiw, I've already got "rewrite the stack" on my todo list, but can't get to it for probably another 2 months. So I'm eagerly collecting any "strive to do X, avoid Y like the plague" tips and recommendations prior to diving into that effort.


Ok I think we're talking about two separate things here - you're referencing a root module and not a "stack", where a stack is a full service/application that uses multiple modules to deploy: your db module, eks module, etc. All independent modules, not combined into one singular module. Say it's sitting in a /terraform/app1/services/db(&)app folders type of scenario.

I think you're talking about putting stateful and stateless objects inside of a single module. So you've got /terraform/modules/mybigapp/main.tf that has your microservice + database inside of it.
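For illustration, that kind of combined module might look something like this (a hypothetical sketch; resource names and arguments are invented for the example):

```hcl
# terraform/modules/mybigapp/main.tf (hypothetical)
# A stateful database and a stateless service living in one module.

resource "aws_db_instance" "app_db" {
  identifier        = "mybigapp-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
}

resource "aws_ecs_service" "app" {
  name            = "mybigapp"
  cluster         = var.cluster_arn          # assumed input variable
  task_definition = var.task_definition_arn  # assumed input variable
  desired_count   = 2
}
```

Any change to this module's state forces both resources through the same plan/apply.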

If I'm right and that's what you mean, that's really interesting - I don't think I've ever seen or done that, but now I'm curious. I'm pretty sure I've never created an "app1" module with all of its resources.

Am I totally off here?


I stuck with my typical term, root module, synonymous with how folks are using “stack” and “state” in various parts of this thread.

A module is any directory with Terraform code in it. A root module is one that has a configuration for how to store and lock Terraform state, providers, and possibly tfvars files. Both modules and root modules may reference other modules. You run `terraform init|plan|apply` in root modules.
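As a minimal sketch of that definition (bucket and table names are invented), the backend and provider blocks are what distinguish a root module from any other directory of Terraform code:

```hcl
# main.tf in a root module (hypothetical): state storage/locking and
# providers are configured here; child modules are pulled in below.

terraform {
  backend "s3" {
    bucket         = "example-tfstate"      # assumed bucket name
    key            = "app1/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"     # state locking
  }
}

provider "aws" {
  region = "us-east-1"
}

module "network" {
  source = "../modules/network"  # a child (non-root) module
}
```

You'd run `terraform init|plan|apply` in this directory, never inside `../modules/network` on its own.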

I think my comment makes sense in that if you mix two services into the same root module (directly or via any amount of modules-instantiating-other-modules) you can end up with changes from two people affecting both services that you can’t easily sever.

Happy to clarify further if I’m still not addressing your original comment.


@rcowley -- I'm going to preface this with: I'm a Staff SRE at an adtech corp that does billions, and I've been a k8s and terraform contributor since 2015 (k8s 1.1; I forget the tf versions). I don't mean this to brag; I just want to set my experience expectations, since I'm a random name on HN who you'd never know.

I think calling a service/stack (or whatever, app, etc.) a "root module" is a very, very confusing thing to do. Terraform has actual micro objects called modules. We work with them every day. I get how you could consider an entire chunk of terraform code that calls various modules a "root module"... but I think this is just going to lead to absolute confusion for anyone not familiar with your terminology. I don't know every TF conversation, but I can't think of a single time I've heard "root module" in that context. Very good chance I've just missed those conversations and am ignorant of them.

I'm currently hiring SRE 2s and 3s, so I've been interviewing lots of terraform writers, and one of my tech questions is to ask what makes someone decide to write a terraform module and what types of modules they've written - it's always ALBs, EKS, dbs, etc., independent components that go into creating a service/stack. I've definitely not heard anyone mention that they write "root modules" in the sense of an entire service/stack.

I don't mean you're right or wrong; maybe more people are aware of that verbiage than I am. I just wanted to mention that in my personal case I think it's confusing, so I would assume that there are a lot of people in my shoes who would also be confused by it.


"Root module" is the official terminology used in HashiCorp's own documentation. That's actually the term I'm most familiar with in my own experience.

https://developer.hashicorp.com/terraform/language/modules/s...

> Every Terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory.


My 2¢ as also a (very minor) terraform core contributor, contributor to numerous providers & modules, and user (also since 2015, I think): I've never heard of 'terraform stacks' before this thread, but 'root module' makes perfect sense to me:

1) without the context of where the state is/its contents, or estimating based on the resources/style/what's variable vs. hardcoded, a 'stack' (if you like) is indistinguishable from a module that's to be used in someone else's 'stack';

2) `path.root`
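To spell out that second point: the term is baked into the language itself, since `path.root` resolves to the root module's directory (a hypothetical snippet; the subdirectory name is invented):

```hcl
# `path.root` is the directory of the root module being applied,
# regardless of how deeply the referencing module is nested.
locals {
  templates_dir = "${path.root}/templates"  # hypothetical subdirectory
}
```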


lol, that's interesting. I feel like all of the TACOS (Spacelift, env0, Atlantis) refer to stacks.

Thanks for your response. It's great to hear corroboration.


https://developer.hashicorp.com/terraform/language/modules#t... is what Terraform calls the module in the current directory, to distinguish it from child modules you might introduce.


Ok, I may absolutely have the dumb today; I appreciate the response. The way this is worded, because of this line - "Modules are the main way to package and reuse resource configurations with Terraform." - reads like: "I have 10 golang apps, they all at a minimum use these same services, this is our 'golang root module'." But some services might have more or fewer modules, i.e. service A uses redis, service B uses kafka without redis.

So in this verbiage, is every single "stack/app" a "root module", and if one of them has a different database/whatever module, it's just using different child modules, and the child modules are the big differentiator?

Just to kind of prove the root-module argument I'm making here, this post in here is confused on calling a "stack" a module as well https://news.ycombinator.com/item?id=37005949


Glad we cleared up our terminology! I agree that “root module” risks ambiguity, just like you point out.

I just realized I never responded to the very last point in your original comment. I don’t have, and I don’t think Terraform has, a complete solution to dependencies between root modules. Fortunately, data sources will at least fail if they don’t find what they’re looking for. For me, these failures never come up in production since I’m never using `terraform destroy` there. It does come up in pre-production testing and that’s an area that seems rich with patterns and products on top of Terraform that are controlling order and dependence between root modules.
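The data-source failure mode can be sketched like this (hypothetical; the tag name and CIDR are invented): if the VPC managed by another root module has been destroyed, the plan fails outright instead of creating orphaned resources.

```hcl
# Hypothetical: this root module depends on a VPC managed elsewhere.
# If no matching VPC exists, `terraform plan` errors here.
data "aws_vpc" "shared" {
  tags = {
    Name = "test-vpc"  # assumed tag set by the other root module
  }
}

resource "aws_subnet" "extra" {
  vpc_id     = data.aws_vpc.shared.id
  cidr_block = "10.0.42.0/24"
}
```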

PS thanks for your work on Terraform and Kubernetes.


Use 'terraform destroy' during CI phase. That is your pre-prod.


Root module: contains resources and sub-modules, including remote module calls.

Stack: a deployable root module with hardcoded tfstate and tfvars configs.


> it's always ALBs, EKS, dbs, etc. components indepedently that go into creating a service/stack

More importantly, modules embody the DRY principle. We host them in our private Terraform registry and share them between teams.
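Consuming such a shared module looks something like this (hypothetical; the registry path and input names are invented for the example):

```hcl
# Pull a shared ALB module from a private registry instead of
# copy-pasting the resource configuration per team.
module "alb" {
  source  = "app.terraform.io/example-org/alb/aws"  # assumed registry path
  version = "~> 2.0"

  name    = "service-a"             # hypothetical module input
  subnets = var.public_subnet_ids   # hypothetical module input
}
```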



