I am trying to use boto3 in Python 3 to copy a file from one S3 bucket to another. I have seen the following:
- Move files between two AWS S3 buckets using boto3
- boto3 Docs
- aws s3 boto3 copy()
- How to write a file or data to an S3 object using boto3
My code is as follows.
import boto3
bucket_old = "bold" # old bucket name
key_old = "/k/old" # old file key
bucket_new = "bnew" # new bucket name
key_new = "/k/new" # new file key
s3 = boto3.resource('s3')
copy_source = {
'Bucket': bucket_old,
'Key': key_old
}
print(copy_source)
print(bucket_new+key_new)
response = s3.meta.client.copy(CopySource=copy_source, Bucket=bucket_new, Key=key_new)
print(response)
print("done")
As I understand it, this is exactly what the docs suggest. I have tried it with and without the argument names in the copy call.
Both the print(bucket_new+key_new) and print("done") lines execute as expected, and the program completes successfully. However, the file does not appear in the new location, so the copy itself seems to be silently failing. Additionally, response is None. What could be causing this?
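One thing I am unsure about is the leading slash in my keys: as I understand it, S3 keys are plain strings rather than filesystem paths, so "/k/old" names an object whose key literally starts with "/", which is a different object from "k/old". A small sketch of the normalization I am considering (my own helper, not part of boto3):

```python
def normalized_key(key):
    # S3 keys are plain strings, not paths; a leading "/" becomes part
    # of the key itself. Stripping it avoids addressing a different
    # object ("/k/old" vs "k/old").
    return key.lstrip("/")

print(normalized_key("/k/old"))  # k/old
```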
I can successfully list all buckets with
for bucket in s3.buckets.all():
print(bucket.name)
This suggests that I have successfully authenticated to the AWS account.
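For completeness, this is roughly how I am checking whether the object actually landed in the new bucket (a sketch with a hypothetical helper; head_object raises an error when the key is absent, and I inject the client so the helper stays testable):

```python
def object_exists(s3_client, bucket, key):
    # head_object succeeds only if the object is present; otherwise it
    # raises (a 404 ClientError with a real boto3 client). The broad
    # except keeps this sketch dependency-free.
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
        return True
    except Exception:
        return False
```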
What am I missing here? Is s3.meta.client.copy the right approach, or should I use copy_object? What is the difference between these?
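From what I can tell from the docs (so, hedged): client.copy is a managed transfer that can split large objects into a multipart copy and returns None, while copy_object is the low-level single-request API (source objects up to 5 GB) that returns the service response dict. The variant I would try is roughly this (the helper name is mine, and I inject the client):

```python
def copy_with_response(s3_client, bucket_old, key_old, bucket_new, key_new):
    # copy_object issues a single CopyObject request (source up to 5 GB)
    # and returns the service response dict, whereas s3.meta.client.copy
    # is a managed transfer (multipart for large objects) that returns
    # None.
    return s3_client.copy_object(
        CopySource={"Bucket": bucket_old, "Key": key_old},
        Bucket=bucket_new,
        Key=key_new,
    )
```

With a real client this would be called as copy_with_response(boto3.client("s3"), bucket_old, key_old, bucket_new, key_new), and I could at least inspect response["ResponseMetadata"] instead of getting None back.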
Thanks!