Packaging Python requirements as an AWS Lambda Layer with Terraform

It can be easier to create a Lambda layer and let Lambda functions consume the layer than to attempt to package libraries up with your code in the zip using Terraform. This makes development easier when multiple Lambdas share the same dependencies. I’ve also found that when packaging the dependencies into the Lambda zip with Terraform, a CI pipeline run will rebuild the Zip and redeploy the Lambda every time, even when there are no changes. This is usually because the null_resource object has a trigger on the Zip of the Lambda code that also contains dependencies, instead of just the Python file.
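One way to avoid those spurious rebuilds for the function code itself is to key the trigger on a hash of just the Python source file. A minimal sketch, assuming a hypothetical src/handler.py and resource names of my own choosing:

```hcl
# Sketch: rebuild the function zip only when the handler source changes,
# not whenever anything inside the previously built zip differs.
resource "null_resource" "build_function_zip" {
  triggers = {
    # filemd5() changes only when handler.py itself changes
    source_hash = filemd5("${path.module}/src/handler.py")
  }

  provisioner "local-exec" {
    command = "cd ${path.module}/src && zip -r ../function.zip handler.py"
  }
}
```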
Code for this post can be found here:
A null_resource is used to install the Python dependencies in a requirements.txt file to a directory called python/ and create a Zip of this directory called layer.zip. This Zip file is rebuilt each time the requirements.txt file is changed, by using the triggers argument. All these files will exist in a folder named lambda_layer that is in the same directory as the following Terraform code. The Zipping is done with the null_resource, because I’ve found that the Terraform archive_file resource does not zip it properly and will overwrite the python directory name with whatever the name of the Zip ends up being, which does not work in the Lambda Layer, as the directory name must be python.
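As a rough sketch of what that null_resource can look like (the resource name and the exact pip/zip commands are my own assumptions, not necessarily the post’s final code):

```hcl
resource "null_resource" "lambda_layer" {
  # Rebuild only when the pinned dependencies change
  triggers = {
    requirements_hash = filemd5("${path.module}/lambda_layer/requirements.txt")
  }

  provisioner "local-exec" {
    # Install into python/ (the directory name Lambda layers require),
    # then zip that directory into layer.zip
    command = <<-EOT
      cd ${path.module}/lambda_layer
      rm -rf python layer.zip
      pip install -r requirements.txt -t python/
      zip -r layer.zip python/
    EOT
  }
}
```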
The Zip is uploaded to an S3 bucket, and a Lambda Layer is created off of this S3 object. New versions of the layer are created when the S3 bucket object is updated with newer Zip files of packaged dependencies.
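Sketched in Terraform, with bucket and key names that are placeholders of mine: the etag forces a re-upload when the Zip changes, and source_code_hash makes the layer publish a new version when the uploaded object changes.

```hcl
resource "aws_s3_object" "layer_zip" {
  bucket = aws_s3_bucket.layers.id   # assumed bucket resource
  key    = "layers/layer.zip"
  source = "${path.module}/lambda_layer/layer.zip"
  # etag changes when the zip contents change, triggering a re-upload
  etag   = filemd5("${path.module}/lambda_layer/layer.zip")
}

resource "aws_lambda_layer_version" "from_s3" {
  layer_name = "lambda_layer_example"
  s3_bucket  = aws_s3_object.layer_zip.bucket
  s3_key     = aws_s3_object.layer_zip.key
  # A new layer version is published when the zip changes
  source_code_hash    = filebase64sha256("${path.module}/lambda_layer/layer.zip")
  compatible_runtimes = ["python3.12"]  # assumed runtime
}
```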
Note: An S3 bucket is not needed; you could upload the Zip directly to the Lambda Layer with the filename argument instead of using s3_bucket and s3_key.
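That direct-upload variant might look like this (a sketch; the layer name, runtime, and source_code_hash wiring are my assumptions):

```hcl
resource "aws_lambda_layer_version" "from_file" {
  layer_name = "lambda_layer_example"
  # Upload the zip directly from disk instead of via S3
  filename   = "${path.module}/lambda_layer/layer.zip"
  # Publish a new layer version whenever the zip contents change
  source_code_hash    = filebase64sha256("${path.module}/lambda_layer/layer.zip")
  compatible_runtimes = ["python3.12"]  # assumed runtime
}
```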
# define variables
locals {
  layer_path        = "lambda_layer"
  layer_zip_name    = "layer.zip"
  layer_name        = "lambda_layer_${var.environment}"
  requirements_name = "requirements.txt"
  requirements_path =…