
Context: I have an EC2 instance and an S3 bucket. The EC2 instance is running a web server that provides a pre-signed URL through a GET request. The client that requests the pre-signed URL from the web server is then supposed to upload the file to the S3 bucket. Based on the tutorials online, I created an IAM role with full S3 access and attached it to my EC2 instance. With the AWS CLI I am able to list the bucket.

Problem: The external client can reach this REST endpoint and get the pre-signed URL and the fields associated with it, but it is not able to upload the file.

Here is the server side code:

import json
import boto3
import pydantic
import requests
from botocore.exceptions import ClientError
from botocore.client import Config
import datetime


URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/my-iam-role-s3"
access_key = ""
secret_key = ""
bucket_name = "my-bucket-name-s3"

def get_aws_creds():
    global access_key
    global secret_key
    try:
        response = requests.get(URL)
        response.raise_for_status()
        json_response = json.loads(response.text)
        access_key = json_response["AccessKeyId"]
        secret_key = json_response["SecretAccessKey"]
    except requests.exceptions.HTTPError as err:
        print(err)
        raise SystemExit(err)
    return access_key, secret_key


def get_pre_signed_upload_URL(uuid: str):
    global access_key
    global secret_key

    if (access_key == "" or secret_key == ""):
        access_key, secret_key = get_aws_creds()
    s3_client = boto3.client(
        's3',
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name="us-west-1",
        config=Config(signature_version='s3v4'))
    try:
        response = s3_client.generate_presigned_post(
            Bucket=bucket_name,
            Key=uuid,
            ExpiresIn=50
        )
    except ClientError as e:
        return None
    return response

The client calling the REST endpoint on EC2 is able to receive the following (information redacted):

{
    'url': 'https://bucket-name.s3.amazonaws.com/',
    'fields': {
        'key': 'some-uuid',
        'x-amz-algorithm': 'AWS4-HMAC-SHA256',
        'x-amz-credential': 'ABCDEFGH/20230628/us-west-1/s3/aws4_request',
        'x-amz-date': '20230628T202210Z',
        'policy': 'POLICY-BLOB',
        'x-amz-signature': 'signature-hash'
    }
}

I use the following to build and run a curl command on my computer - I've already received the URL and fields in upload_object:

import subprocess

curl_upload_command = f'curl -X POST {upload_object["url"]}'
for key, value in upload_object['fields'].items():
    curl_upload_command += f' -F "{key}={value}"'
curl_upload_command += f' -F "file=@{post_body["filename"]}"'

# Upload the file using curl
print(curl_upload_command)
upload_result = subprocess.run(curl_upload_command, shell=True, capture_output=True, text=True)

print(upload_result.stdout)

The response I get is as follows:

<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>ASIA****************</AWSAccessKeyId><RequestId>W8GW0J0PE0BWK5RZ</RequestId><HostId>HOST-ID******</HostId></Error>

Any guidance here will be greatly appreciated.

Note: the EC2 instance with the IAM role is generating the pre-signed URL, and a public client is trying to POST a file with it.

  • Assuming that the IAM role that your EC2 instance was launched with allows the upload to S3 then you shouldn't need any code dealing with AWS credentials. Just create a client using `boto3.client('s3')` (add the region and s3v4 options if necessary) and then use that to create the pre-signed URL. The boto3 SDK gets the credentials from the metadata service without you having to do it explicitly. – jarmod Jun 28 '23 at 22:38
  • I have S3 Full access on the IAM role and it is added to the EC2 which is generating the pre-signed URL. – Prateek Khatri Jun 28 '23 at 22:49
  • OK, so do as I suggested earlier. There is no need for you to retrieve credentials from the metadata service explicitly or to provide them to the client constructor (incorrectly, as it happens, which is the cause of your original error). – jarmod Jun 29 '23 at 01:45
  • Yes, thank you, I've changed to use the `sts` client to get the credentials by calling `sts_client.assume_role(...)`. Ref: https://stackoverflow.com/questions/60985001/boto3-invalidaccesskeyid-in-generate-presigned-post Now I just need to figure out the policy configuration for the IAM role. – Prateek Khatri Jun 29 '23 at 16:19
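
The approach suggested in these comments, creating the S3 client without explicit keys so that boto3 resolves the instance-profile credentials (including the session token) from the metadata service on its own, would look roughly like the sketch below; the bucket name, region and expiry mirror the question's code, and this is only a sketch, not a verified drop-in fix.

import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

bucket_name = "my-bucket-name-s3"

def get_pre_signed_upload_URL(uuid: str):
    # No explicit credentials: boto3 picks up the instance-profile
    # credentials (access key, secret key and session token) automatically.
    s3_client = boto3.client(
        's3',
        region_name="us-west-1",
        config=Config(signature_version='s3v4'))
    try:
        return s3_client.generate_presigned_post(
            Bucket=bucket_name,
            Key=uuid,
            ExpiresIn=50)
    except ClientError:
        return None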

1 Answer


An IAM Role does not have a permanent Access Key + Secret Key, so whatever you are passing to that function is invalid.

From GetAccessKeyInfo - AWS Security Token Service: "Access key IDs beginning with ASIA are temporary credentials that are created using AWS STS operations."

Instead, your program should call AssumeRole() while passing the IAM Role ARN. This will return a temporary Access Key, Secret Key and Security Token. These values can then be used to generate the pre-signed URL.

Note that the pre-signed URL will only be valid for the duration that the Assumed Role is valid (which defaults to 60 minutes).

Also, the program will need to use a set of AWS credentials to call AssumeRole(). These can be the automatic credentials generated by the IAM Role assigned to the EC2 instance.
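
A minimal sketch of this flow, assuming a placeholder role ARN and session name; the bucket, region and expiry mirror the question's code:

import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

bucket_name = "my-bucket-name-s3"
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"  # placeholder ARN

def get_pre_signed_upload_URL(uuid: str):
    # The EC2 instance-profile credentials are used automatically to call AssumeRole().
    sts_client = boto3.client('sts')
    assumed = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName="presigned-upload")
    creds = assumed['Credentials']

    # Sign the POST with the temporary credentials, including the session token.
    s3_client = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
        region_name="us-west-1",
        config=Config(signature_version='s3v4'))
    try:
        return s3_client.generate_presigned_post(
            Bucket=bucket_name,
            Key=uuid,
            ExpiresIn=50)
    except ClientError:
        return None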

  • Thanks John, I added code to assume the role; however, I am running into this error: `AssumeRole operation: User: arn:aws:sts::123456789:assumed-role/ROLE_NAME/i-0570ae4aa2b9d547d is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::123456789:role/ROLE_NAME` Any ideas about this? – Prateek Khatri Jun 28 '23 at 22:47
  • As mentioned, the program will need to use a set of AWS credentials to call `AssumeRole()`. It would appear that it is using the IAM Role assigned to the Amazon EC2 instance. You will need to add permissions to that role to permit it to call `AssumeRole()` on the IAM Role that you are wanting to use. – John Rotenstein Jun 28 '23 at 23:10
  • Hi John, I've tried many iterations of policies on the IAM Role that I am trying to assume on my EC2 - however, I am stumped with the `AccessDenied` message. Do you know if there is an example policy I could use for making successful call to `assume_role` ? – Prateek Khatri Jun 29 '23 at 21:30
  • Assuming that you want to assign an IAM Role (`Role-A`) to the EC2 instance and then software on the instance wants to use that `Role-A` to assume `Role-B`, then `Role-A` should have `sts:AssumeRole` permission to assume `Role-B` _AND_ the Trust Policy on `Role-B` should reference `Role-A`. See: [AWS IAM: Allowing a Role to Assume Another Role - Nelson Figueroa](https://nelson.cloud/aws-iam-allowing-a-role-to-assume-another-role/) – John Rotenstein Jun 29 '23 at 22:47
  • Thanks John, the trust policy to assume was missing. { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::ACCOUNTID:role/ROLE-NAME" }, "Action": "sts:AssumeRole" } It worked!!! Thanks all. – Prateek Khatri Jun 29 '23 at 23:12
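
For reference, the two policies described in these comments, written out in full; the account ID and role names are placeholders. Role-A (attached to the EC2 instance) needs permission to assume Role-B, and Role-B's trust policy (the one quoted in the last comment) must name Role-A as a principal.

Permission policy on Role-A:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::ACCOUNT_ID:role/ROLE-B"
        }
    ]
}

Trust policy on Role-B:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_ID:role/ROLE-A"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}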