
Are you keeping your terraform configs in one repo or splitting the configs out and keeping them next to the code they deploy?


I've been part of two teams that used AWS Lambda for their APIs. One team decided to use Chalice plus custom bash scripts to deploy parts of the system. (Chalice would handle deploying API Gateway and building out the required IAM policies.) For all other resources they used terraform, but broke it down on a per-resource basis: rather than accepting changes to all the infra in one go, a script ran each resource type independently; buckets, tables, service accounts, elasticsearch....
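A minimal sketch of that first team's per-resource-type workflow (directory names and flags are assumptions, not their actual script): one independent `terraform apply` per resource directory instead of a single apply over everything.

```python
# Hypothetical per-resource deploy script: one terraform run per
# resource-type directory, executed independently.
import subprocess

RESOURCE_DIRS = ["buckets", "tables", "service-accounts", "elasticsearch"]

def terraform_commands(env: str):
    """Build one `terraform apply` invocation per resource type."""
    return [
        (d, ["terraform", "apply", "-auto-approve", f"-var=env={env}"])
        for d in RESOURCE_DIRS
    ]

def deploy(env: str) -> None:
    # Each resource type is applied on its own, so a failure in one
    # directory doesn't roll into the others.
    for cwd, cmd in terraform_commands(env):
        subprocess.run(cmd, cwd=cwd, check=True)
```

The obvious downside, as the thread suggests, is that dependencies between resource types have to be managed by the script ordering rather than by terraform's graph.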

The current team does its deployments in two stages. First it uses Chalice to generate the correct terraform files for the two lambdas being deployed (one the indexer, the other the service API), then it generates all the other terraform JSON files with Python. Once that's complete, the infra is deployed in one step.
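The second stage might look something like this (resource names and the helper are made up for illustration): plain Python emitting terraform-compatible `*.tf.json` files, which terraform picks up alongside any HCL in the same directory.

```python
# Sketch: render supporting infra as terraform JSON from Python,
# then let a single `terraform apply` deploy everything.
import json

def bucket_resource(name: str) -> dict:
    # Terraform's JSON syntax mirrors HCL: resource -> type -> name -> args.
    return {
        "resource": {
            "aws_s3_bucket": {
                name: {"bucket": f"myapp-{name}"}
            }
        }
    }

def write_tf_json(path: str, doc: dict) -> None:
    with open(path, "w") as f:
        json.dump(doc, f, indent=2)

write_tf_json("buckets.tf.json", bucket_resource("artifacts"))
```

Because the output is ordinary terraform config, the whole environment still deploys in one plan/apply, which is where the single-step behaviour comes from.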

The second team hits far fewer errors on deployment, and managing the infra is much easier, especially if you need to nuke it all and rebuild an environment.

I would break out the terraform backend on a per-environment basis rather than clumping every environment into one folder/bucket; that can be dangerous if the bucket gets emptied.


We’ve been using SAM with TF and I’m really not a fan of it. I’d prefer to move everything to TF early next year, with the core infrastructure managed the terragrunt way (folders of resource types: vpcs, dbs, etc.) and each code repo carrying a simple terraform file that instantiates a module and references the core remote states as needed. So far (knock on wood) we rarely have upstream changes to the lambdas’ configuration / parent environments themselves, so then we can dump SAM and everything can run in one spot in Terraform Cloud.

We don’t work in Python so I’ve not used Chalice but I’ll see if it can inspire our Go tooling.


Do you reckon the pain points you hit were due to SAM specifically, or more to how SAM's approach doesn't integrate with TF? Thanks. Just curious, as I've used SAM to reduce CloudFormation boilerplate and we've considered terraform or pulumi, so it would be nice to hear of dragons witnessed first hand :)


We use pyinvoke to orchestrate terraform invocations and feed the outputs to chalice configuration files. We then intercept the chalice deployed resources and feed them back into dependent terraform modules as inputs. It's been incredibly smooth for us and we're able to get deterministic environments that are easy to debug, modify and deploy. It takes a little imperative glue to manage dependencies, but the bulk of the configuration is declaratively defined and I've been incredibly happy with this workflow.
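A rough sketch of that glue (function names, the output key, and the config shape beyond Chalice's documented `stages`/`environment_variables` layout are assumptions): read `terraform output -json`, then merge selected values into `.chalice/config.json` as environment variables for a stage.

```python
# Sketch: feed terraform outputs into a Chalice config file.
import json
import subprocess

def merge_outputs_into_chalice(tf_outputs: dict, chalice_cfg: dict, stage: str) -> dict:
    """Copy each terraform output value into the stage's env vars."""
    env = (
        chalice_cfg.setdefault("stages", {})
        .setdefault(stage, {})
        .setdefault("environment_variables", {})
    )
    for name, out in tf_outputs.items():
        # `terraform output -json` wraps each value as {"value": ..., ...}
        env[name.upper()] = str(out["value"])
    return chalice_cfg

def wire(stage: str = "dev") -> None:
    raw = subprocess.run(
        ["terraform", "output", "-json"],
        capture_output=True, check=True, text=True,
    ).stdout
    with open(".chalice/config.json") as f:
        cfg = json.load(f)
    merge_outputs_into_chalice(json.loads(raw), cfg, stage)
    with open(".chalice/config.json", "w") as f:
        json.dump(cfg, f, indent=2)
```

In the workflow described above this would be wrapped in a pyinvoke task, with a mirror-image step feeding Chalice's deployed resource IDs back into terraform module inputs.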


I’d previously done my terraform work with invoke and jinja2 stamping out HCL templates - never tried going straight to JSON but that makes sense. Thanks for sharing — I’ll have to try Chalice just to see how it works.



