
Here is my code for uploading the image to AWS S3:

@app.post("/post_ads")
async def create_upload_files(files: list[UploadFile] = File(description="Multiple files as UploadFile")):
    main_image_list = []
    for file in files:
        s3 = boto3.resource(
            's3',
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key
        )
        bucket = s3.Bucket(aws_bucket_name)
        bucket.upload_fileobj(file.file, file.filename, ExtraArgs={"ACL": "public-read"})

Is there any way to compress the image size and upload the image to a specific folder using boto3? I have this function for compressing the image, but I don't know how to integrate it into boto3.

    for file in files:
        im = Image.open(file.file)
        im = im.convert("RGB")
        im_io = BytesIO()
        im.save(im_io, 'JPEG', quality=50)

        s3 = boto3.resource(
            's3',
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key
        )
        bucket = s3.Bucket(aws_bucket_name)
        bucket.upload_fileobj(file.file, file.filename, ExtraArgs={"ACL": "public-read"})

Update #1

After following Chris's recommendation, my problem was resolved. Here is Chris's solution:

im_io.seek(0)
bucket.upload_fileobj(im_io, file.filename, ExtraArgs={"ACL": "public-read"})
  • Chris, I updated my question and fixed the corrupted image issue. Right now I just want to know how I can compress the image before uploading it to the AWS S3 bucket. I saw the answer but didn't understand it properly. – hawaj Sep 22 '22 at 11:13
  • @Chris can you please explain `bucket.upload_fileobj(im_io,...`? Should it be `bucket.upload_fileobj(im,...`? – hawaj Sep 22 '22 at 11:15
  • Chris, I tried `bucket.upload_fileobj(im_io,..` but my image gets corrupted after uploading. I faced a similar issue before. If I remove my image-compressing code, the original image uploads without any issue. – hawaj Sep 22 '22 at 11:21
  • @Chris yes, exactly, it's zero. Please see the full line `bucket.upload_fileobj(im_io, file.filename, ExtraArgs={"ACL": "public-read"})` – hawaj Sep 22 '22 at 11:35
  • @Chris I tried, but the size is still zero and I can't view the image from the URL. I also updated my question. – hawaj Sep 22 '22 at 11:41
  • @Chris now the image is uploading and also compressing, but I can't view the image from the URL. See the screenshot: https://drive.google.com/file/d/1yNNSWrBsYUALjaeamfFEjsmUYaplUVux/view?usp=sharing – hawaj Sep 22 '22 at 12:21
  • @Chris after adding `im_io.seek(0)`, the image is uploading and also compressing, but I can't view the image from the URL. – hawaj Sep 22 '22 at 12:25

2 Answers


You seem to be saving the image bytes to a BytesIO stream, which is then never used, as you upload the original file object to the S3 bucket instead, as shown in this line of your code:

bucket.upload_fileobj(file.file, file.filename, ExtraArgs={"ACL": "public-read"})

Hence, you need to pass the BytesIO object to upload_fileobj(), and make sure to call .seek(0) before that, in order to rewind the cursor (or "file pointer") to the start of the buffer. The reason for calling .seek(0) is that im.save() uses the cursor to iterate through the buffer and, when it reaches the end, does not reset the cursor to the beginning; hence, any subsequent read operations would start at the end of the buffer. The same applies to reading from the original file, as described in this answer: you would need to call file.file.seek(0) if the file contents had already been read and you needed to read them again.
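The cursor behaviour can be seen in isolation with a minimal, standard-library-only sketch (no S3 involved): writing to a BytesIO stream leaves the cursor at the end, so a read returns nothing until the stream is rewound.

```python
from io import BytesIO

buf = BytesIO()
buf.write(b"compressed image bytes")  # the cursor is now at the end of the buffer

print(buf.read())  # b'' - reading starts at the cursor, i.e. at the end

buf.seek(0)        # rewind the cursor to the start of the buffer
print(buf.read())  # b'compressed image bytes'
```

This is exactly why an upload right after im.save() sends zero bytes: upload_fileobj() reads from the current cursor position.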

An example of how to load the image into a BytesIO stream and use it to upload the file/image can be seen below. Please remember to properly close the UploadFile, Image and BytesIO objects, in order to release their memory (see this related answer as well).

from fastapi import HTTPException
from PIL import Image
import io

# ...

buf = None
im = None
try:
    im = Image.open(file.file)
    if im.mode in ("RGBA", "P"):
        im = im.convert("RGB")
    buf = io.BytesIO()
    im.save(buf, 'JPEG', quality=50)
    buf.seek(0)  # rewind the buffer before uploading it
    bucket.upload_fileobj(buf, 'out.jpg', ExtraArgs={"ACL": "public-read"})
except Exception:
    raise HTTPException(status_code=500, detail='Something went wrong')
finally:
    file.file.close()
    if buf is not None:
        buf.close()
    if im is not None:
        im.close()

As for the URL, using ExtraArgs={"ACL":"public-read"} should work as expected and make your resource (file) publicly accessible. Hence, please make sure you are accessing the correct URL.
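As for uploading to a specific folder: S3 has no real directories, only object keys, so a "folder" is simply a prefix in the key you pass to upload_fileobj(). A minimal sketch (the `ads/` prefix and the `make_key` helper are illustrative names, not part of boto3's API):

```python
def make_key(folder: str, filename: str) -> str:
    # An S3 "folder" is just a key prefix; the console renders it as a directory.
    return f"{folder.strip('/')}/{filename}"

key = make_key("ads", "out.jpg")
print(key)  # ads/out.jpg

# Then upload under that key (same call as before, only the key changes):
# bucket.upload_fileobj(buf, key, ExtraArgs={"ACL": "public-read"})
```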


    aws s3 sync s3://your-pics .
    for file in $(find . -name "*.jpg"); do gzip "$file"; echo "$file"; done
    aws s3 sync . s3://your-pics --content-encoding gzip --dryrun

This will download all files in the S3 bucket to the machine (or EC2 instance), compress the image files, and upload them back to the S3 bucket.

This should help you.
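A note of caution on this approach: gzip is lossless, and JPEG data is already compressed, so the size gains are usually small; moreover, the client must honour the Content-Encoding: gzip header and decompress the object on download. A standard-library sketch of the round trip involved (the byte string is a stand-in for real image data):

```python
import gzip

data = b"\xff\xd8\xff\xe0" + b"JPEG payload " * 100  # stand-in for image bytes

compressed = gzip.compress(data)        # what would be stored in S3
restored = gzip.decompress(compressed)  # what the client must do on download

assert restored == data
print(len(data), len(compressed))
```

If smaller images are the actual goal, re-encoding with Pillow at a lower JPEG quality (as in the accepted answer) is the lossy but far more effective route.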