
I'm working on a use case where I need to copy data from AWS S3 in account1 (region1) to AWS S3 in account2 (region2). To do this, I have created a Lambda function in account1 which writes to the S3 bucket in account2.

So the destination bucket in account2 has the below bucket policy defined -

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::destination-bucket/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                },
                "ForAllValues:StringNotEquals": {
                    "s3:TlsVersion": [
                        "1.2",
                        "1.3"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<account1-id>:role/cross-account-file-share-role"
                ]
            },
            "Action": [
                "s3:Get*",
                "s3:Put*",
                "s3:List*",
                "s3:AbortMultipartUpload",
                "s3:Delete*"
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket",
                "arn:aws:s3:::destination-bucket/*"
            ]
        }
    ]
}

In account1, the cross-account-file-share-role trust relationship looks like this -

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "lambda.amazonaws.com",
                    "ec2.amazonaws.com",
                    "storagegateway.amazonaws.com",
                    "s3.amazonaws.com"
                ],
                "AWS": "arn:aws:iam::<account1-id>:role/FullResourceAccessforEC2"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

I also tried creating an inline policy for my Lambda, in addition to AWSLambdaBasicExecutionRole, as suggested here -

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::account1-source-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::account2-destination-bucket/*"
        }
    ]
}

Below is the lambda I wrote in account1 -

import json
import boto3
import urllib.parse

TARGET_BUCKET = 'account2-destination-bucket'

def lambda_handler(event, context):
    
    # Parse the S3 event wrapped in the SNS message to get the incoming bucket and key
    message = json.loads(event['Records'][0]['Sns']['Message'])
    source_bucket = message['Records'][0]['s3']['bucket']['name']
    source_key = urllib.parse.unquote_plus(message['Records'][0]['s3']['object']['key'])
    
    print("source_bucket :", source_bucket)
    print("source_key :", source_key)
    

    # Copy object to different bucket
    s3_resource = boto3.resource('s3')
    copy_source = {
        'Bucket': source_bucket,
        'Key': source_key
    }
    target_key = source_key 

    s3_resource.Bucket(TARGET_BUCKET).Object(target_key).copy(copy_source, ExtraArgs={'ACL': 'bucket-owner-full-control'})

The Lambda has an SNS trigger which fires whenever a new file is uploaded to account1-source-bucket.
However, I'm getting the below error -

{
  "errorMessage": "An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied",
  "errorType": "ClientError",
  "requestId": "d37ad461-8c9a-409b-bd53-0bc11a5c2263",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 25, in lambda_handler\n    s3_resource.Bucket(TARGET_BUCKET).Object(target_key).copy(copy_source, ExtraArgs={'ACL': 'bucket-owner-full-control'})\n",
    "  File \"/var/runtime/boto3/s3/inject.py\", line 565, in object_copy\n    return self.meta.client.copy(\n",
    "  File \"/var/runtime/boto3/s3/inject.py\", line 444, in copy\n    return future.result()\n",
    "  File \"/var/runtime/s3transfer/futures.py\", line 103, in result\n    return self._coordinator.result()\n",
    "  File \"/var/runtime/s3transfer/futures.py\", line 266, in result\n    raise self._exception\n",
    "  File \"/var/runtime/s3transfer/tasks.py\", line 139, in __call__\n    return self._execute_main(kwargs)\n",
    "  File \"/var/runtime/s3transfer/tasks.py\", line 162, in _execute_main\n    return_value = self._main(**kwargs)\n",
    "  File \"/var/runtime/s3transfer/tasks.py\", line 348, in _main\n    response = client.create_multipart_upload(\n",
    "  File \"/var/runtime/botocore/client.py\", line 530, in _api_call\n    return self._make_api_call(operation_name, kwargs)\n",
    "  File \"/var/runtime/botocore/client.py\", line 960, in _make_api_call\n    raise error_class(parsed_response, operation_name)\n"
  ]
}

Please help me figure out how to overcome this situation.

djm
  • According to [CreateMultipartUpload operation - AWS policy items needed?](https://stackoverflow.com/a/45733919/174777), your IAM Role on the Lambda function should also give the `PutObject` permissions on the bucket itself (without the slash at the end), just like you have done on the Bucket Policy. – John Rotenstein Jul 08 '23 at 10:58
  • But I have `PutObject` policy in the inline policy for lambda. Do you mean in the `cross-account-file-share-role` trust relationship I need to define that? – djm Jul 08 '23 at 11:04

1 Answer


Since you are performing a cross-account operation, you will require one set of credentials that are permitted to both Read from the source bucket and Write to the destination bucket.

The Lambda function should have an IAM Role (let's call it Lambda-Role) assigned to it, with a policy like this (in addition to the AWSLambdaBasicExecutionRole):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::account2-destination-bucket",
                "arn:aws:s3:::account2-destination-bucket/*"
            ]
        }
    ]
}

Note the addition of the Resource that simply refers to the bucket, as well as the contents of the bucket. (I haven't tested this, but it was suggested by CreateMultipartUpload operation - AWS policy items needed? - Stack Overflow.)

This IAM role on the Lambda function only needs a default trust policy that trusts the AWS Lambda service:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Then, the destination bucket requires a Bucket Policy that permits Lambda-Role to access the bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account1-id>:role/lambda-role"
            },
            "Action": [
                "s3:Get*",
                "s3:Put*",
            ],
            "Resource": [
                "arn:aws:s3:::destination-bucket",
                "arn:aws:s3:::destination-bucket/*"
            ]
        }
    ]
}
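
With the Lambda-Role policy and this bucket policy in place, the function can keep using its execution role's credentials directly, so no assume_role() call is needed. A minimal sketch of the copy call under that assumption (the object key below is a placeholder):

import boto3

# boto3 automatically picks up the temporary credentials of the Lambda execution role.
s3 = boto3.client('s3')

# One set of credentials reads from the source bucket and writes to the destination
# bucket; 'bucket-owner-full-control' hands object ownership to account2.
s3.copy(
    CopySource={'Bucket': 'account1-source-bucket', 'Key': 'some/key'},
    Bucket='account2-destination-bucket',
    Key='some/key',
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)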
John Rotenstein
  • Thanks for the response. Can't this be done via `AssumeRole` ? Since , my destination bucket policy already has this implemented. – djm Jul 08 '23 at 12:27
  • If you are going to use `AssumeRole`, then your code will actually need to call [`assume_role()`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts/client/assume_role.html). In this case, it should assume a role with the first policy shown above, and that role name should be the same role referenced in the Bucket Policy. When calling `assume_role()`, you will be returned a set of credentials that you should use to create another S3 client. That's what happens with the IAM Role assigned to the Lambda function, so it's somewhat simpler. – John Rotenstein Jul 08 '23 at 12:35
  • Yes I used `assume_role()` in the lambda but after getting the credentials like `AccessKeyId`, `SecretAccessKey` and `SessionToken` I'm unable to do further operations. So for instance if I want to `list_buckets` it simply returns me Timed out message. Not sure why. – djm Jul 08 '23 at 12:40
  • The code that you provided does not include a call to `assume_role()`. To see examples of how to use it, see: [AWS: Boto3: AssumeRole example which includes role usage](https://stackoverflow.com/q/44171849/174777) Frankly, I would recommend simply using the Lambda Role (as described in my answer) rather than going through the extra hassle of calling `assume_role()`. – John Rotenstein Jul 08 '23 at 12:46
  • The fact that you are receiving a "Timed out message" suggests that either the object being copied took longer than the default 3 seconds (in which case you should simply increase the Timeout duration on the Lambda function), or it means that the Lambda function is unable to connect to S3. Did you connect the Lambda function to a VPC? If so, is there a reason you did this? If you do _NOT_ connect it to a VPC, it will automatically receive access to the Internet. If you _DO_ connect it to a VPC then you will also need to provision a NAT Gateway (extra charges apply). – John Rotenstein Jul 08 '23 at 12:47
  • No I don't use VPC. Everything is handled via the IAM role permissions itself. I also tried using `config = BotoConfig(connect_timeout=10, retries={"mode": "standard"})` – djm Jul 08 '23 at 12:59
  • Can you help me tweak the lambda according to `assume_role()`? – djm Jul 08 '23 at 13:09
  • See the linked examples for code that uses AssumeRole. But, again, I recommend that you don't do it that way unless you have a particular reason. – John Rotenstein Jul 08 '23 at 13:34
  • Actually I tried with all the examples but everytime it throws task timed out. That's why seeking your help for solving my use case. – djm Jul 09 '23 at 05:50
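
For reference, here is a minimal sketch of the assume_role() flow discussed in the comments above, assuming the role being assumed is the cross-account-file-share-role from the question (the object key is a placeholder):

import boto3

def copy_with_assumed_role():
    # Ask STS for temporary credentials for the role referenced in the bucket policy.
    sts = boto3.client('sts')
    assumed = sts.assume_role(
        RoleArn='arn:aws:iam::<account1-id>:role/cross-account-file-share-role',
        RoleSessionName='cross-account-copy'
    )
    creds = assumed['Credentials']

    # Build a second S3 client from the returned credentials and use it for the copy.
    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken']
    )
    s3.copy(
        CopySource={'Bucket': 'account1-source-bucket', 'Key': 'some/key'},
        Bucket='account2-destination-bucket',
        Key='some/key',
        ExtraArgs={'ACL': 'bucket-owner-full-control'}
    )

Note that this only works if the Lambda's execution role is itself allowed to call sts:AssumeRole on cross-account-file-share-role (both through its own permissions and through that role's trust policy); as noted in the comments, attaching the S3 permissions directly to the execution role is the simpler option.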