Using Terraform workspaces with an AWS S3 backend

CJ Hewett
4 min read · Oct 30, 2023

Terraform workspaces simplify managing multiple environments with the same backend. They also improve developer experience by making it easy to create new workspaces and switch between them with CLI commands.

Code for this article can be found here:

Set up S3 backend resources

An S3 bucket will be created to store the Terraform state files for every workspace. We will also enable object versioning on the bucket, so that we can retrieve past versions of the Terraform state if we need to roll back. Server-side encryption is turned on for security.

A DynamoDB table is also created so that the Terraform backend can handle state locking, protecting the state from concurrent plans or applies. This lets a team work on the same configuration safely, and makes CI integration possible. Deletion protection is also enabled so that the table does not get accidentally deleted. Remember to set this to false and run an apply when cleaning everything up afterwards, otherwise Terraform won't be able to delete the table.

The count guard, which only creates the backend resources when the currently selected workspace is default, is only necessary if you're creating other workspaces in the same directory (which we won't be doing this time around). We only need one backend, so it should exist exactly once, in the default workspace.

backend_res.tf

resource "aws_s3_bucket" "backend" {
count = terraform.workspace == "default" ? 1 : 0

bucket_prefix = "terraform-backend"

tags = {
Name = "Terraform Backend"
Environment = terraform.workspace
}
}

output "backend_bucket_name" {
value = aws_s3_bucket.backend.0.id
}

resource "aws_s3_bucket_versioning" "backend" {
count = terraform.workspace == "default" ? 1 : 0

bucket = aws_s3_bucket.backend.0.id
versioning_configuration {
status = "Enabled"
}
}

resource "aws_s3_bucket_server_side_encryption_configuration" "backend" {
count = terraform.workspace == "default" ? 1 : 0

bucket = aws_s3_bucket.backend.0.id

rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}

resource "aws_dynamodb_table" "terraform_lock" {
count = terraform.workspace == "default" ? 1 : 0

name = "terraform_state"
deletion_protection_enabled = true
read_capacity = 5
write_capacity = 5
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
tags = {
Name = "Terraform Backend"
Environment = terraform.workspace
}
}

Run a terraform apply to create the resources. Once they're created, there should be an output section containing the name of the S3 bucket; this will be used in the backend configuration.
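If all goes well, the tail of the apply output should look something like this (the bucket name suffix is generated from bucket_prefix, so yours will differ):

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

backend_bucket_name = "terraform-backend20231013052848766400000001"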

Migrate the state to the S3 remote backend

Replace the bucket value with the output from above, and change region to suit your needs. Unfortunately, variables are not accepted inside the backend block, so we can't use var.region.

backend.tf

terraform {

  # Comment this out when initialising resources.
  backend "s3" {
    region         = "ap-southeast-2"
    bucket         = "terraform-backend20231013052848766400000001"
    key            = "services/tf-workspace-ex/default.tfstate"
    dynamodb_table = "terraform_state"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.11.0"
    }
  }
}

provider "aws" {
  region = var.region
}

variable "region" {
  description = "AWS Region"
  default     = "ap-southeast-2"
  type        = string
}
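A side note on the hardcoded values: Terraform does support partial backend configuration, so one way to avoid baking settings like the bucket name into the file is to leave them out of the backend block and pass them at init time instead:

terraform init \
  -backend-config="region=ap-southeast-2" \
  -backend-config="bucket=terraform-backend20231013052848766400000001"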

Once the above backend.tf file is created, run terraform init. Since the current state for the S3 and DynamoDB resources is in a local terraform.tfstate file, Terraform will ask if you would like to copy the state to the new remote backend; answer yes.
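The prompt will look something like this; type yes to migrate the local state into the bucket:

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend
  to the newly configured "s3" backend. No existing state was found in the
  newly configured "s3" backend. Do you want to copy this state to the new
  "s3" backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes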

✅ The remote backend is finished; we can now start creating environments and Terraform code for our project resources.

Create a project that uses the backend and Terraform Workspaces

Workspaces are a great way to define infrastructure once and create multiple environments from it, without needing to change the configuration in the backend block.

Make a project directory in the same directory as the existing Terraform backend resources.

Note: You don’t need to create a project directory, you could do everything within the current directory. If you go down this route, I recommend not creating resources within the default workspace, and using that only for the backend state resources.

Move into the project directory with cd project and add some new Terraform resources for your project.
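For example, a minimal (and purely hypothetical) main.tf that interpolates the workspace name into a resource, along with a per-environment var file, might look like this; the actual resources are up to you:

main.tf

variable "region" {
  description = "AWS Region"
  type        = string
}

# Hypothetical example resource: interpolating terraform.workspace into the
# name keeps each environment's copy distinct and clearly labelled.
resource "aws_s3_bucket" "example" {
  bucket_prefix = "tf-workspace-ex-${terraform.workspace}-"

  tags = {
    Environment = terraform.workspace
  }
}

vars/dev.tfvars

region = "ap-southeast-2"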

Create another backend.tf file too. The only thing slightly different from the previous one above is the key, which makes sure the state file in the bucket has a different name:

terraform {

  backend "s3" {
    region         = "ap-southeast-2"
    bucket         = "terraform-backend20231013052848766400000001"
    key            = "services/tf-workspace-ex/project.tfstate"
    dynamodb_table = "terraform_state"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.11.0"
    }
  }
}

provider "aws" {
  region = var.region
}

Run terraform init, then create a new workspace for a development environment with terraform workspace new dev.

Apply the Terraform with terraform apply -var-file=./vars/dev.tfvars. Note: I'm using var files for each environment, so leave that flag off the command if you aren't.

Create a new workspace for a test environment with terraform workspace new test.

Apply the Terraform for the test environment with terraform apply -var-file=./vars/test.tfvars.

View workspaces with terraform workspace list.

Select the dev workspace again with terraform workspace select dev.
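Put together, the session looks roughly like this. terraform workspace list marks the selected workspace with an asterisk, and note that the S3 backend stores non-default workspace state under an env:/ prefix in the bucket (e.g. env:/dev/services/tf-workspace-ex/project.tfstate):

$ terraform workspace new dev
Created and switched to workspace "dev"!
$ terraform apply -var-file=./vars/dev.tfvars
$ terraform workspace new test
Created and switched to workspace "test"!
$ terraform apply -var-file=./vars/test.tfvars
$ terraform workspace list
  default
  dev
* test
$ terraform workspace select dev
Switched to workspace "dev".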

You’ve now created resources for a Dev and Test environment, with the Terraform state stored remotely in an S3 bucket.

Clean up

Run terraform destroy -var-file=./vars/dev.tfvars, then switch to the test workspace with terraform workspace select test and run terraform destroy -var-file=./vars/test.tfvars. This cleans up all the resources in the project directory.

Move up one directory to where the backend resources are, and disable deletion protection on the DynamoDB table if need be (set it to false in the backend_res.tf file and run an apply). Manually delete the contents of the S3 bucket from the AWS console, as Terraform cannot destroy non-empty buckets.
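That is, flip the flag in backend_res.tf and apply before destroying:

resource "aws_dynamodb_table" "terraform_lock" {
  # ...
  # Allow Terraform to delete the table during cleanup.
  deletion_protection_enabled = false
  # ...
}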

Run terraform destroy . Everything should now be cleaned up 👍.

I hope you found that useful!
Thanks for reading, cheers 🤙
