
Several questions and answers on SO and elsewhere outline possible solutions for the SignatureDoesNotMatch error thrown when calling generate_presigned_url from the boto3 SDK. Few of them are specific to boto3, and most suggest getting new credentials to resolve the exception. You can see more (though it is in PHP) here.

But these do not work for me: I am using correct credentials, the correct bucket name, and the correct key path.

Originally, I created my client like this and then called generate_presigned_url:

import boto3
from botocore.client import Config

client_s3 = boto3.client(
    's3',
    # Hard-coded strings as credentials, not recommended.
    aws_access_key_id='XXX',
    aws_secret_access_key='XXX',
    region_name='us-east-2',
    # EDIT: I previously used signature_version='v4' here, but as a user
    # pointed out below, that might not work. Regardless, I tried 's3v4'
    # before trying 'v4', and neither worked for me.
    config=Config(signature_version='s3v4')
)

url = client_s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'BUCKET_NAME',
        'Key': 'CORRECT_KEY'
    }
)

What could cause this error when all of the parameters used are seemingly correct, and how do I resolve it?

Joshua Wolff

3 Answers


It is clearly mentioned in the boto3 documentation that the option should look like config=Config(signature_version='s3v4'); 'v4' won't work.

This is the example from the boto3 documentation:

import boto3
from botocore.client import Config

# Get the service client with sigv4 configured
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

# Generate the URL to get 'key-name' from 'bucket-name'
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    }
)

Btw, us-east-2 only allows Signature Version 4, so you don't need to specify it. See this.
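So for a bucket in us-east-2, a plain client with no signature override should already sign with SigV4. A minimal sketch, with placeholder bucket and key names:

import boto3

# us-east-2 supports only Signature Version 4, and boto3 defaults to it,
# so no signature_version override should be needed.
s3 = boto3.client('s3', region_name='us-east-2')
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'bucket-name', 'Key': 'key-name'}
)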

Lamanus
  • 12,898
  • 4
  • 21
  • 47
  • Hi, I used that as well, and later changed it to 'v4' after I saw another user had tried 'v4' instead. Both did not work for me. Thank you for your answer. – Joshua Wolff Sep 16 '19 at 05:28
  • Your original region doesn't need the signature version specified, because v4 is the default and only option there. – Lamanus Sep 16 '19 at 05:52
  • Yup, it does not work without specification of the signature version either. – Joshua Wolff Sep 16 '19 at 05:57
  • Then, please check your bucket location by doing `client_s3.get_bucket_location(Bucket='string')`. – Lamanus Sep 16 '19 at 06:03
  • Furthermore, try the config `Config(s3={'addressing_style': 'path'})` (see the sketch after these comments). What version of boto3 are you using? – Lamanus Sep 16 '19 at 06:05
  • The bucket is definitely in us-east-2, and I'm on the most recent version of boto3. I just don't want to keep debugging the problem because it has been solved, but thank you and +1 for your help. – Joshua Wolff Sep 18 '19 at 07:03
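Putting the two debugging suggestions from the comments together, a minimal sketch (the bucket name is a placeholder; path-style addressing is only worth trying if the default virtual-hosted style misbehaves):

import boto3
from botocore.client import Config

s3 = boto3.client('s3', config=Config(signature_version='s3v4',
                                      s3={'addressing_style': 'path'}))

# Confirm which region the bucket actually lives in; a client/bucket
# region mismatch is a common cause of SignatureDoesNotMatch.
# Note: get_bucket_location returns None for buckets in us-east-1.
location = s3.get_bucket_location(Bucket='bucket-name')['LocationConstraint']
print(location or 'us-east-1')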

I know this is a bit late to answer, but I solved my problem by following the method specified in this GitHub link.

import base64
import hashlib

import boto3
import requests

s3_client = boto3.client('s3')
bucket_name = 'bucket-name'    # your bucket
key = 'key-name'               # your object key
local_path = '/path/to/file'   # local file to upload

# Content-MD5 must be the base64-encoded MD5 digest of the file body.
with open(local_path, 'rb') as f:
    md5 = base64.b64encode(hashlib.md5(f.read()).digest()).decode()

parts = s3_client.generate_presigned_post(Bucket=bucket_name,
                                          Key=key,
                                          Fields={
                                              'acl': 'public-read',
                                              'Content-MD5': md5,
                                              'Content-Type': 'binary/octet-stream'
                                          },
                                          Conditions=[
                                              {"acl": "public-read"},
                                              ["starts-with", "$Content-Type", ""],
                                              ["starts-with", "$Content-MD5", ""]
                                          ])

url = parts['url']
data = parts['fields']
with open(local_path, 'rb') as f:
    # The form field name must be 'file'.
    response = requests.post(url, data=data, files={'file': f})
Utkarsh Sharma

After seeing this AWS forum thread, I figured something fishy might be going on, but I really just wanted to resolve the issue with a secure solution.

My resolution is definitely not optimal for everyone, but it worked for me.

I copied everything from my bucket in us-east-2 into a new bucket in us-east-1, and I was able to access the new bucket correctly with the exact same access/secret keys and bucket/key paths. I simply used:

import boto3

client_s3 = boto3.client(
    's3',
    # Hard-coded strings as credentials, not recommended.
    aws_access_key_id='XXX',
    aws_secret_access_key='XXX'
)
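For reference, the copy itself can also be scripted. A minimal sketch, assuming hypothetical bucket names and that the destination bucket already exists:

import boto3

s3 = boto3.resource('s3')
source = s3.Bucket('my-bucket-us-east-2')   # hypothetical source bucket
dest_name = 'my-bucket-us-east-1'           # hypothetical destination bucket

# Server-side copy of every object into the new bucket.
for obj in source.objects.all():
    s3.meta.client.copy({'Bucket': source.name, 'Key': obj.key},
                        dest_name, obj.key)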

If you're like me and don't want to spend hours trying to decipher AWS's poor docs, just do this if you can. If you have a real solution, please add it here.

I am still unsure what causes this, but it likely has something to do with the 'v4' signing method, which is region-dependent.

Joshua Wolff
  • To the two downvoters: it's easy to downvote, but realize that I posted the question and answer before any other answers existed, for the purpose of helping. My answer, while suboptimal, was the *only* answer at the time. If you have feedback, post it here. – Joshua Wolff Feb 03 '22 at 21:08