
Hope this finds you well. I am experimenting with a scenario where I have to tear down my cluster and rebuild it. However, since this is a production setup, I want to retain the EFS and mount it again if it already exists.

This is my CDK code:

import * as cdk from 'aws-cdk-lib'
import { aws_ec2 as ec2, aws_ecs as ecs, aws_efs as efs, aws_iam as iam } from 'aws-cdk-lib'

const config = this.node.tryGetContext('stages')[props.STAGE]

        /*  Cluster  */

        const qlashMainCluster = new ecs.Cluster(this, `qlashMainCluster`, {
            vpc: props.vpc,
            clusterName: `${props.STAGE}_QlashMainCluster`,
            enableFargateCapacityProviders: true,            
            defaultCloudMapNamespace: {
                name: `${props.STAGE}qlashMainCluster`,
                vpc: props.vpc,
                useForServiceConnect: true
            }
        })

        // EFS

        const qmmTasksEfsSecurityGroup = new ec2.SecurityGroup(this, 'qmmTasksEfsSecurityGroup', {
            vpc: props.vpc,
            securityGroupName: 'qmmTasksEfsSecurityGroup'
        })

        let qmmTasksEfs: efs.IFileSystem

        if (!config.QlashMainClusterEFSID) {
            qmmTasksEfs = new efs.FileSystem(this, `${props.STAGE}qmmTasksEfs`, {
                fileSystemName: `${props.STAGE}qmmTasksEfs`,
                vpc: props.vpc,
                removalPolicy: cdk.RemovalPolicy.RETAIN,
                securityGroup: qmmTasksEfsSecurityGroup,
                encrypted: true,
                lifecyclePolicy: efs.LifecyclePolicy.AFTER_30_DAYS,
                enableAutomaticBackups: true
            })

            new cdk.CfnOutput(this, 'QlashMainClusterEFSID', {
                exportName: 'QlashMainClusterEFSID',
                value: qmmTasksEfs.fileSystemId
            })
            
        } else {
            qmmTasksEfs = efs.FileSystem.fromFileSystemAttributes(this, `${props.STAGE}qmmTasksEfs`, {
                securityGroup: qmmTasksEfsSecurityGroup,
                fileSystemId: config.QlashMainClusterEFSID
            })
        }
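
For reference, the branch above is driven by a stages entry in the CDK context (cdk.json), roughly shaped like this; the stage key and file system ID here are placeholders, the real ID being copied from the stack output after the first deployment:

    {
        "context": {
            "stages": {
                "prod": {
                    "QlashMainClusterEFSID": "fs-0123456789abcdef0"
                }
            }
        }
    }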

        const qmmRedisEfsAccessPoint = new efs.AccessPoint(this, `${props.STAGE}qmmRedisAccessPoint`, {
            fileSystem: qmmTasksEfs,
            path: '/redis',
            createAcl: {
                ownerGid: '1001',
                ownerUid: '1001',
                permissions: '750'
            },
            posixUser: {
                uid: '1001',
                gid: '1001'
            }
        })

        const qmmMongoEfsAccessPoint = new efs.AccessPoint(this, `${props.STAGE}qmmMongoAccessPoint`, {
            fileSystem: qmmTasksEfs,
            path: '/mongodb',
            createAcl: {
                ownerGid: '1002',
                ownerUid: '1002',
                permissions: '750'
            },
            posixUser: {
                uid: '1002',
                gid: '1002'
            }
        })

        // Redis

        const qmmRedisServiceSecurityGroup = new ec2.SecurityGroup(this, 'qmmRedisSecurityGroup', {
            vpc: props.vpc,
            securityGroupName: 'qmmRedisSecurityGroup'
        })

        qmmTasksEfsSecurityGroup.addIngressRule(
            ec2.Peer.securityGroupId(qmmRedisServiceSecurityGroup.securityGroupId),
            ec2.Port.tcp(2049),
            'Allow inbound traffic from qmm_redis to qmmTasksEfs'
        )

        if (props.qlashMainInstanceSecurityGroup) {
            qmmRedisServiceSecurityGroup.addIngressRule(
                ec2.Peer.securityGroupId(props.qlashMainInstanceSecurityGroup.securityGroupId),
                ec2.Port.tcp(6379),
                'Allow inbound traffic to qmm_redis from qmmMain instance'
            )
        }

        qmmRedisServiceSecurityGroup.addIngressRule(
            ec2.Peer.ipv4(props.vpc.vpcCidrBlock),
            ec2.Port.tcp(6379),
            'Allow inbound traffic to qmm_redis from resources in qlashMainClusterVpc'
        )

        const qmmRedisTaskDefinition = new ecs.FargateTaskDefinition(this, `${props.STAGE.toLowerCase()}qmmRedisTask`, {
            cpu: 512,
            memoryLimitMiB: 1024,
            volumes: [
                {
                    name: `${props.STAGE.toLowerCase()}_qmm_redis_volume`,
                    efsVolumeConfiguration: {
                        fileSystemId: qmmTasksEfs.fileSystemId,
                        transitEncryption: 'ENABLED',
                        authorizationConfig: {
                            accessPointId: qmmRedisEfsAccessPoint.accessPointId,
                            iam: 'ENABLED'
                        }
                    }
                }
            ]
        })

        qmmRedisTaskDefinition.addToTaskRolePolicy(
            new iam.PolicyStatement({
                actions: [
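                    // temporary while debugging: this wildcard already grants all EFS
                    // actions, so the explicit ones listed below are redundant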
                    'elasticfilesystem:*',
                    'elasticfilesystem:ClientWrite',
                    'elasticfilesystem:ClientMount',
                    'elasticfilesystem:ClientRootAccess',
                    'elasticfilesystem:DescribeMountTargets',
                    'elasticfilesystem:CreateAccessPoint',
                    'elasticfilesystem:DeleteAccessPoint',
                    'elasticfilesystem:DescribeAccessPoints',
                    'elasticfilesystem:DescribeFileSystems'
                ],
                resources: [qmmTasksEfs.fileSystemArn],
            })
        )

        qmmRedisTaskDefinition.addToTaskRolePolicy(
            new iam.PolicyStatement({
                actions: ['ec2:DescribeAvailabilityZones'],
                resources: ['*']
            })
        )

        const qmmRedisContainer = qmmRedisTaskDefinition.addContainer(`${props.STAGE.toLowerCase()}_qmm_redis`, {
            image: ecs.ContainerImage.fromAsset('./resources/cluster-resources/redis'),
            containerName: `${props.STAGE.toLowerCase()}_qmm_redis`,
            portMappings: [{ containerPort: 6379, name: `${props.STAGE.toLowerCase()}_qmm_redis` }],
            healthCheck: {
                command: ["CMD", "redis-cli", "-h", "localhost", "-p", "6379", "ping"],
                interval: cdk.Duration.seconds(20),
                timeout: cdk.Duration.seconds(20),
                retries: 5
            },
            logging: ecs.LogDriver.awsLogs({streamPrefix: `${props.STAGE.toLowerCase()}_qmm_redis`}),
            command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
        })

        qmmRedisContainer.addMountPoints({
            sourceVolume: `${props.STAGE.toLowerCase()}_qmm_redis_volume`,
            containerPath: '/redis/data',
            readOnly: false
        })

        const qmmRedisService = new ecs.FargateService(this, `${props.STAGE}qmmRedisService`, {
            serviceName: `${props.STAGE}_qmmRedisService`,
            cluster: qlashMainCluster,
            desiredCount: 1,
            securityGroups: [qmmRedisServiceSecurityGroup],
            taskDefinition: qmmRedisTaskDefinition,
            enableExecuteCommand: true,
            vpcSubnets: {
                subnetGroupName: props.qmmRedisSubnetGroupName
            },
            serviceConnectConfiguration: {
                services: [{ portMappingName: `${props.STAGE.toLowerCase()}_qmm_redis` }]
            }
        })

What is quirky is that everything works just fine on the first creation of the stack (when the EFS is created), but when I try to mount it again (as I recreate the stack) it returns the error:

ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-022089502ea31e256.efs.eu-central-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID.

As you can see from the code, the mount targets exist, and as far as I can tell they get attached to the old EFS when I relaunch the stack. Isn't that right?
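
For what it's worth, the existing mount targets can be listed from the CLI (using the file system ID and region from the error message):

    aws efs describe-mount-targets --file-system-id fs-022089502ea31e256 --region eu-central-1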

Also, I highly doubt it is an EFS IAM permissions issue, because for now I am launching the stack with the full set of EFS permissions (note the elasticfilesystem:* wildcard in the task role policy above).

Lastly, I've checked other questions here on Stack Overflow which point to making sure that the VPC has 'DNS hostnames' and 'DNS resolution' enabled, which is the case for me:

- AWS ECS (Fargate) is failing to mount EFS file system "Failed to resolve"
- Issue with mounting EFS access point from an AWS ECS Fargate task
- EFS mount in ECS "Failed to resolve"
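
For completeness, this is how those attributes would be set if the VPC were defined directly in CDK (a minimal sketch, assuming a standard ec2.Vpc; my actual VPC comes in through props, so treat this as illustrative):

    const vpc = new ec2.Vpc(this, 'qlashMainClusterVpc', {
        enableDnsHostnames: true, // needed so fs-xxxx.efs.<region>.amazonaws.com resolves
        enableDnsSupport: true    // both default to true for a CDK-created VPC
    })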

If you have any ideas or suggestions, they would be much appreciated; in the meantime I wish you a wonderful day :)
