I don't fully understand your use case, but you can zip/package your lambdas using a TF null_resource too?


I tried it, and it doesn't work: the Lambda resource needs a hash sum to compare against the previous deployment to trigger updates of the source bundle, and Terraform needs the file to be present during the plan stage, whereas with null_resource the file is only created after plan, during apply (see the sketch below). To work around this I tried supplying something else as the bundle hash (the hash of the source files rather than of the bundle), but the value TF keeps in state is the one returned from the AWS Lambda API, not the one you supply, so it causes a resource update on every apply, which is not what I wanted.

Your comment made me think of trying to skip bothering with Lambda's hash sums and use custom refresh triggers instead, initiated by null_resource. Will do after the holidays.
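To make the failure mode concrete, here is a rough sketch of the pattern in question (all names are hypothetical, and the IAM role is assumed to exist elsewhere):

    # Hypothetical sketch of the conflict described above.
    resource "null_resource" "bundle" {
      # produces ${path.module}/lambda.zip during apply
      provisioner "local-exec" {
        command = "cd ${path.module}/src && zip -r ../lambda.zip ."
      }
    }

    resource "aws_lambda_function" "fn" {
      function_name = "example"
      role          = aws_iam_role.lambda.arn   # assumed to exist elsewhere
      handler       = "index.handler"
      runtime       = "python3.12"
      filename      = "${path.module}/lambda.zip"

      # filebase64sha256() reads the file at plan time, but the zip only
      # appears during apply; and the hash kept in state is the one the
      # Lambda API reports, so substituting a source-file hash here makes
      # every plan show a diff.
      source_code_hash = filebase64sha256("${path.module}/lambda.zip")

      # even with an explicit dependency, the hash function still runs
      # at plan time
      depends_on = [null_resource.bundle]
    }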


It can work. I wrote a script in Python to build stable zip files. The main thing is to ensure any timestamps are reset to zero and the files are added in a fixed order. It also filters out any unwanted files and directories to minimise the size of the bundle. I have a wrapper around it that uses pip3 (as I use Python for my lambdas) to download the dependencies and build the directory hierarchy for the lambda/layer, runs the stable zip script, generates the hash, and returns the path of the bundle and the hash.

This didn't take long to write, and reduced the amount of churn we had with our deploys. We had massive problems with one particular set of lambdas due to the sheer amount of code (mostly unavoidable dependencies, but shared, so they could go in a layer), and our deployment times plummeted to practically nothing after I knocked this together.

I'm not sure I can share the code as it's something I wrote for work, but it ought to be simple to recreate from the description above.
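Not the original code, but a minimal sketch of such a stable-zip builder along those lines (fixed file order, fixed timestamps since the zip format can't store anything earlier than 1980, assumed filter lists, and the base64 SHA-256 returned in the form source_code_hash expects):

    #!/usr/bin/env python3
    """Sketch of a reproducible zip builder; a recreation from the
    description above, not the original work code."""
    import base64
    import hashlib
    import os
    import zipfile

    EXCLUDE_DIRS = {"__pycache__", ".git"}   # assumed filter lists
    EXCLUDE_SUFFIXES = (".pyc", ".pyo")

    def stable_zip(src_dir, out_path):
        """Zip src_dir deterministically; return the base64 SHA-256 of
        the archive (the format aws_lambda_function.source_code_hash
        takes)."""
        entries = []
        for root, dirs, files in os.walk(src_dir):
            dirs[:] = sorted(d for d in dirs if d not in EXCLUDE_DIRS)
            for name in sorted(files):
                if name.endswith(EXCLUDE_SUFFIXES):
                    continue
                full = os.path.join(root, name)
                entries.append((os.path.relpath(full, src_dir), full))
        entries.sort()  # fixed order, independent of filesystem iteration

        with zipfile.ZipFile(out_path, "w") as zf:
            for arcname, full in entries:
                # 1980-01-01 is the earliest timestamp zip can represent
                info = zipfile.ZipInfo(arcname,
                                       date_time=(1980, 1, 1, 0, 0, 0))
                info.external_attr = 0o644 << 16  # normalise permissions
                info.compress_type = zipfile.ZIP_DEFLATED
                with open(full, "rb") as fh:
                    zf.writestr(info, fh.read())

        with open(out_path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).digest()
        return base64.b64encode(digest).decode()

The returned value can be fed straight to source_code_hash, so it only changes when the inputs actually change.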


Yes. If we use a null_resource that has the hashes of the source code files as a trigger, then in its `local-exec` provisioner we can run the build. The build can also be run remotely (we use Google Cloud Build) so that it is independent of the developer's machine architecture and operating system, which is important for native dependencies. Terraform will not re-run the null_resource provisioner so long as the source code does not change, so there is no need for a reproducible build.
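A rough sketch of that shape (paths and the build command are placeholders; a real setup would submit the remote build there instead):

    # Hypothetical sketch of the trigger-on-source-hash approach above.
    resource "null_resource" "build" {
      triggers = {
        # Only changes when a source file changes, so the provisioner
        # (and hence the build) is skipped on no-op applies.
        src_hash = sha256(join("", [
          for f in sort(fileset("${path.module}/src", "**")) :
          filesha256("${path.module}/src/${f}")
        ]))
      }

      provisioner "local-exec" {
        # Placeholder; could equally submit a remote build (e.g. Cloud
        # Build) so the artifact isn't tied to the developer's machine.
        command = "./build.sh ${path.module}/src"
      }
    }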


For various reasons (mainly auditing, but it also reduces incidental infrastructure churn and makes it easier to verify that a rollback happened as expected), we need reproducibility, so it's a bit more important for us that the artifacts produced are exactly what we expect.


Instead of using source_code_hash, you can push your code to S3 with the hash as the filename and update the lambda to point at the new file.

Terraform can manage uploading objects to S3.
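A minimal sketch of that pattern (bucket name and paths are made up):

    # Rough sketch of the hash-as-key pattern; names are hypothetical.
    resource "aws_s3_object" "bundle" {
      bucket = "my-lambda-artifacts"
      key    = "example-${filesha256("${path.module}/lambda.zip")}.zip"
      source = "${path.module}/lambda.zip"
    }

    resource "aws_lambda_function" "fn" {
      function_name = "example"
      role          = aws_iam_role.lambda.arn   # assumed elsewhere
      handler       = "index.handler"
      runtime       = "python3.12"

      # A new hash means a new key, which changes these attributes and
      # triggers an update; no source_code_hash needed.
      s3_bucket = aws_s3_object.bundle.bucket
      s3_key    = aws_s3_object.bundle.key
    }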

It also seems a bit strange to have Terraform do the packaging. We do that in CI for most of our lambdas to make sure the test suite runs, linting passes, and so on; CI then creates a zip at the end and pushes it to S3.

The only ones Terraform deploys directly are fairly trivial Python API "glue" lambdas.


After reading this comment I decided to write a small blog post on how you could tackle this nevertheless: https://smetj.net/deploying_aws_lambdas_with_terraform.html


Alternatively you could use the "archive_file" [0] data source provided by Terraform. I use it to zip up my lambda source files and then use the hash of the zip file to determine whether my application should be redeployed.

[0] https://registry.terraform.io/providers/hashicorp/archive/la...
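Usage looks roughly like this (paths are placeholders):

    # Rough sketch of the archive_file data source; paths are made up.
    data "archive_file" "lambda" {
      type        = "zip"
      source_dir  = "${path.module}/src"
      output_path = "${path.module}/lambda.zip"
    }

    resource "aws_lambda_function" "fn" {
      function_name    = "example"
      role             = aws_iam_role.lambda.arn   # assumed elsewhere
      handler          = "index.handler"
      runtime          = "python3.12"
      filename         = data.archive_file.lambda.output_path
      # exported by archive_file; changes only when the zip contents do
      source_code_hash = data.archive_file.lambda.output_base64sha256
    }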


This works fine only as long as you do not need a step to build or download dependencies, like `npm install` or `pip install`, as part of your run of terraform apply. Otherwise, a more complex solution is necessary, like talideon's above.



