Connect an AWS ECS Fargate service to an EFS File System with Pulumi

In this article I will show how to connect an ECS Fargate service to an EFS file system with Pulumi, using a MongoDB container as an example. The Pulumi project is written in TypeScript.

Using EFS with a Fargate service means a container can persist state to the file system, so that if the container stops, a new container can access the same data on EFS.
For example, a MongoDB container running in Fargate can store its data in EFS for future containers to use; otherwise, any data stored in the container would be lost when the container stopped.

The following AWS resources will be created:

  • A VPC with two public and two private subnets. (NOTE: The ECS and EFS instances will use the public subnets for the sake of this example. Do not do this in a production environment, as it means the Mongo service is publicly accessible.)
  • A security group that allows inbound TCP traffic on port 27017 for Mongo and port 2049 for NFS, which EFS requires.
  • An EFS file system with two mount targets (one for each public subnet).
  • A Network Load Balancer with a listener for traffic on port 27017 (the MongoDB default port).
  • A Fargate cluster and service that run two Mongo containers, each with a mount to the EFS file system.

Set up AWS IAM credentials

First, we need some AWS credentials set in the terminal so that Pulumi has permission to create AWS resources.

Make sure you have an IAM user with the necessary permissions to create the resources listed above. Generate an access key for this user and run the commands below in your terminal, replacing the access key ID and secret access key with the ones you just generated.

Linux/MacOS

export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Windows (PowerShell)

$env:AWS_ACCESS_KEY_ID = "AKIAXXXXXXXXXXXXXXXX"
$env:AWS_SECRET_ACCESS_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

Create a Typescript Pulumi project

Assuming you have Pulumi installed, create a folder for your new Pulumi project, run the command to create the project, and accept the defaults:

$ mkdir pulumi-ecs-efs-example && cd pulumi-ecs-efs-example
$ pulumi new aws-typescript
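
The pulumi new template will prompt you for an AWS region. If you need to change it later, you can set it through Pulumi config (the region below is just an example):

$ pulumi config set aws:region us-east-1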

Open the default index.ts file and remove the existing code. Now let’s import the Pulumi packages we need, set up some constants, and create our VPC…

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

const SERVICE_NAME = 'mongo';
const MONGO_PORT = 27017;
const NFS_PORT = 2049;

const vpc = new awsx.ec2.Vpc(`${SERVICE_NAME}-vpc`, {
    cidrBlock: "10.0.0.0/16"
});

// Export a few resulting fields to make them easy to use:
export const vpcId = vpc.id;
export const vpcPrivateSubnetIds = vpc.privateSubnetIds;
export const vpcPublicSubnetIds = vpc.publicSubnetIds;
export const publicSubnet_1 = pulumi.output(vpc.publicSubnetIds)[0];
export const publicSubnet_2 = pulumi.output(vpc.publicSubnetIds)[1];

Next we will create the security group for the EFS file system and the Fargate cluster. This security group allows traffic from anywhere on ports 27017 and 2049, which is not good practice; it should be restricted so that only the Fargate cluster can communicate with the EFS file system. I have done it this way as a quick and easy example (see the sketch after the code below for a tighter rule).

// Allocate a security group and then a series of rules:
const sg = new awsx.ec2.SecurityGroup(`${SERVICE_NAME}-sg`, { vpc });

// inbound NFS traffic on port 2049 from anywhere
sg.createIngressRule("nfs-access", {
    location: new awsx.ec2.AnyIPv4Location(),
    ports: new awsx.ec2.TcpPorts(NFS_PORT),
    description: "allow NFS access for EFS from anywhere",
});

// inbound Mongo traffic on port 27017 from anywhere
sg.createIngressRule("mongo-access", {
    location: new awsx.ec2.AnyIPv4Location(),
    ports: new awsx.ec2.TcpPorts(MONGO_PORT),
    description: "allow Mongo access from anywhere",
});

// outbound TCP traffic on any port to anywhere
sg.createEgressRule("outbound-access", {
    location: new awsx.ec2.AnyIPv4Location(),
    ports: new awsx.ec2.AllTcpPorts(),
    description: "allow outbound access to anywhere",
});
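
As a quick illustration of tightening that up, here is a minimal sketch that scopes the NFS rule to traffic originating inside the VPC instead of the whole internet. This assumes you use it in place of the "nfs-access" rule above, and that the CIDR block matches the one the VPC was created with:

// inbound NFS traffic on port 2049, only from within the VPC
sg.createIngressRule("nfs-access", {
    location: { cidrBlocks: ["10.0.0.0/16"] },
    ports: new awsx.ec2.TcpPorts(NFS_PORT),
    description: "allow NFS access for EFS from within the VPC only",
});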

Create the EFS file system and mount targets, one for each of the public subnets:

const efs = new aws.efs.FileSystem(`${SERVICE_NAME}-efs`, {
    tags: {
        Name: `${SERVICE_NAME}-data`,
    },
});

// Create a mount target in each public subnet
const publicMountTarget_1 = new aws.efs.MountTarget(`${SERVICE_NAME}-publicMountTarget-1`, {
    fileSystemId: efs.id,
    subnetId: publicSubnet_1,
    securityGroups: [sg.id],
});
const publicMountTarget_2 = new aws.efs.MountTarget(`${SERVICE_NAME}-publicMountTarget-2`, {
    fileSystemId: efs.id,
    subnetId: publicSubnet_2,
    securityGroups: [sg.id],
});

Create a Network Load Balancer and listener for the Fargate cluster, and export the load balancer’s DNS name, as we will use it to connect to the Mongo database.

// Create a Network Load Balancer associated with our custom VPC.
const nlb = new awsx.lb.NetworkLoadBalancer(`${SERVICE_NAME}-service`, { vpc });

// Listen for Mongo traffic on port 27017
const mongoListener = nlb.createListener(`${SERVICE_NAME}-lb-listener`, {
    port: MONGO_PORT,
    protocol: "TCP",
});

// Export the load balancer's address so that it's easy to access.
export const url = nlb.loadBalancer.dnsName;

Create the Fargate Cluster and Mongo Service

// Fargate Cluster
const cluster = new awsx.ecs.Cluster(`${SERVICE_NAME}-cluster`, { vpc });

const mongoService = new awsx.ecs.FargateService(SERVICE_NAME, {
    cluster,
    desiredCount: 2,
    securityGroups: [sg.id],
    taskDefinitionArgs: {
        containers: {
            mongo: {
                image: "mongo",
                memory: 128,
                // Passing the listener here registers the containers
                // with the NLB's target group.
                portMappings: [mongoListener],
                mountPoints: [
                    {
                        // Mongo's data directory, mapped to the EFS volume below
                        containerPath: "/data/db",
                        sourceVolume: `${SERVICE_NAME}-volume`,
                    },
                ],
            },
        },
        volumes: [
            {
                name: `${SERVICE_NAME}-volume`,
                efsVolumeConfiguration: {
                    fileSystemId: publicMountTarget_1.fileSystemId,
                    transitEncryption: "ENABLED",
                },
            },
        ],
    },
});
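
Two caveats worth flagging here. EFS volumes are only supported on Fargate platform version 1.4.0 or later, and the tasks should not start before the mount targets exist (referencing publicMountTarget_1.fileSystemId above already gives us a dependency on the first one). The sketch below shows how you could make both explicit; the platformVersion property is an assumption that your version of awsx forwards it to the underlying aws.ecs.Service:

const mongoService = new awsx.ecs.FargateService(SERVICE_NAME, {
    cluster,
    desiredCount: 2,
    securityGroups: [sg.id],
    // EFS volumes require Fargate platform version 1.4.0 or later
    // (assumes awsx forwards this property to aws.ecs.Service).
    platformVersion: "1.4.0",
    taskDefinitionArgs: { /* ...same as above... */ },
}, {
    // Wait for both EFS mount targets before starting tasks.
    dependsOn: [publicMountTarget_1, publicMountTarget_2],
});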

Connecting to the MongoDB instance

Run pulumi up to create all the infrastructure in AWS that we defined above.
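
Once the update completes, the exported url output holds the load balancer’s DNS name, which you can read back with:

$ pulumi stack output url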

You should then be able to connect to the Mongo service with that address and insert some data:

$ mongosh --host mongo-service-xxxxxxxx-xxxxxxxxxxxxxxxx.elb.[REGION].amazonaws.com --port 27017
> use demodb
> db.people.insert({"name":"Bill Palmer"})

Disconnect from the database, kill any running tasks, and wait for a new task to spin up. Connect to the Mongo database again and run show dbs to see that demodb is still there, then use demodb and show collections to confirm that the people collection we created before still exists as well.

Run pulumi destroy in your terminal to remove all the AWS resources once you’re done.

NOTE: You may notice that in the Fargate cluster, only one task manages to stay running while the second continuously spins up and crashes. This is because the first Mongo service has a lock on the /data/db/mongod.lock file in the EFS file system. I thought this was an interesting side effect, and it is definitely something to consider when developing a service that will use EFS.

This was my first time using EFS, not to mention using it with Pulumi, so I hope you found this useful!
Chur🤙
