This is my code (its purpose is to resize uploaded images without changing their bucket):

import boto3
import os
import pathlib
from io import BytesIO
from PIL import Image


s3 = boto3.resource('s3')

def delete_this_bucket(name):
    bucket = s3.Bucket(name)
    try:
        # A bucket must be empty before it can be deleted,
        # so delete every object first, then the bucket itself.
        for key in bucket.objects.all():
            key.delete()
        bucket.delete()
    except Exception as e:
        print(e)

def create_this_bucket(name, location):
    try:
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={
                'LocationConstraint': location
            }
        )
    except Exception as e:
        print(e)

def upload_test_images(name):
    for each in os.listdir('./testimage'):
        try:
            # Join with the directory; os.path.abspath(each) alone would
            # resolve against the current working directory, not ./testimage.
            file = os.path.join(os.path.abspath('./testimage'), each)
            s3.Bucket(name).upload_file(file, each)
        except Exception as e:
            print(e)

def copy_to_other_bucket(src, des, key):
    try:
        copy_source = {
            'Bucket': src,
            'Key': key
        }
        bucket = s3.Bucket(des)
        bucket.copy(copy_source, key)
    except Exception as e:
        print(e)


def resize_image(src_bucket, des_bucket):
    size = 600, 400
    bucket = s3.Bucket(src_bucket)
    client = boto3.client('s3')

    for obj in bucket.objects.all():
        file_byte_string = client.get_object(Bucket=src_bucket, Key=obj.key)['Body'].read()
        im = Image.open(BytesIO(file_byte_string))

        im.thumbnail(size, Image.ANTIALIAS)
        # ISSUE : https://stackoverflow.com/questions/4228530/pil-thumbnail-is-rotating-my-image

        # Use a fresh buffer per object; reusing one BytesIO across
        # iterations can leave stale bytes from a larger previous image.
        in_mem_file = BytesIO()
        im.save(in_mem_file, format=im.format)
        in_mem_file.seek(0)

        response = client.put_object(
            Body=in_mem_file,
            Bucket=des_bucket,
            Key='resized_' + obj.key
        )
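For reference, `Image.thumbnail` shrinks in place while preserving aspect ratio, so `size = 600, 400` is a bounding box, not an exact output size. A minimal sketch of that calculation (the helper name is mine, not part of Pillow):

```python
def thumbnail_size(width, height, max_w, max_h):
    """Approximate how Pillow's Image.thumbnail picks an output size:
    scale down (never up) so both dimensions fit the bounding box."""
    scale = min(max_w / width, max_h / height, 1.0)  # 1.0 caps it: never enlarge
    return round(width * scale), round(height * scale)

# A 3000x2000 photo bounded by (600, 400) keeps its 3:2 ratio.
print(thumbnail_size(3000, 2000, 600, 400))  # (600, 400)
print(thumbnail_size(3000, 1000, 600, 400))  # (600, 200)
print(thumbnail_size(300, 200, 600, 400))    # (300, 200) -- already fits
```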

def lambda_handler(event, context):
    bucket = s3.Bucket('bucketname')

    for obj in bucket.objects.all():
        # Pass the bucket name (a string), not the Bucket object itself.
        copy_to_other_bucket(bucket.name, 'bucketname', obj.key)

    resize_image(bucket.name, 'bucketname')


    print(bucket)
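One caveat with keeping the output in the same bucket: if this function is ever wired to an S3 upload trigger, it would pick up its own `resized_` copies and resize them again. A small guard that skips already-processed keys avoids that loop (a sketch, matching the `'resized_' + obj.key` naming above):

```python
RESIZED_PREFIX = 'resized_'

def should_process(key):
    """Skip objects this function already produced, so an S3 trigger on
    the same bucket cannot loop over its own output."""
    return not key.startswith(RESIZED_PREFIX)

print(should_process('photo.jpg'))          # True
print(should_process('resized_photo.jpg'))  # False
```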

I have uploaded it as a zip file (13 MB) following this guide: [Document][1]

This is the log output I get when I test it:

START RequestId: ec37dbf0-f2d4-4e31-9119-aef3ea895bf8 Version: $LATEST
END RequestId: ec37dbf0-f2d4-4e31-9119-aef3ea895bf8
REPORT RequestId: ec37dbf0-f2d4-4e31-9119-aef3ea895bf8  Duration: 3003.81 ms    Billed Duration: 3000 ms    Memory Size: 128 MB Max Memory Used: 39 MB  
2020-10-22T19:03:16.518Z ec37dbf0-f2d4-4e31-9119-aef3ea895bf8 Task timed out after 3.00 seconds

I tried changing the timeout settings, but it still didn't help. Does anyone have any idea why this error occurs?
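For context, the default Lambda timeout is 3 seconds, which matches the `Task timed out after 3.00 seconds` line in the log. Besides the console, the timeout and memory can be raised from the AWS CLI; `resize-images` below is a placeholder function name:

```shell
# Raise the timeout to 5 minutes and give the function more memory.
# --timeout is in seconds (max 900); --memory-size is in MB.
aws lambda update-function-configuration \
  --function-name resize-images \
  --timeout 300 \
  --memory-size 512
```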

UPDATE

I fixed that, but now the problem is with SSL verification. The whole output is:

"errorMessage": "SSL validation failed for https://Website1.s3.eu-west-3.amazonaws.com/?encoding-type=url [Errno 2] No such file or directory",
  "errorType": "SSLError",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 74, in lambda_handler\n    for obj in bucket.objects.all():\n",
    "  File \"/var/task/boto3/resources/collection.py\", line 83, in __iter__\n    for page in self.pages():\n",
    "  File \"/var/task/boto3/resources/collection.py\", line 166, in pages\n    for page in pages:\n",
    "  File \"/var/task/botocore/paginate.py\", line 255, in __iter__\n    response = self._make_request(current_kwargs)\n",

Can you help me with this?

  [1]: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python

  • Is this Lambda function deployed into a VPC? – jarmod Oct 22 '20 at 19:49
  • No it is not, could that be a factor? – E.K Oct 22 '20 at 19:52
  • Try increasing the execution timeout of the Lambda to 5 minutes. – Avinash Dalvi Oct 22 '20 at 20:00
  • It could have been, if the Lambda were in a VPC - it's a common mistake to provide no network route in this case. But if you're not in a VPC then that's not your problem. You should limit your code to something very basic (e.g. list an S3 bucket that has just a few objects). Does that work? If yes, then you have network connectivity and you can successfully make AWS API calls (and you have AWS credentials). You should also add logging to your Lambda code. I'm assuming you're simply exceeding your timeout. Copying objects and resizing images takes time. Increase the Lambda RAM size and timeout. – jarmod Oct 22 '20 at 20:01
  • Guess I should have set it to 5 minutes right away. It's fixed now. Thank you for your support. – E.K Oct 22 '20 at 20:12
  • You can set up an SNS listener to get an email notification whenever a timeout happens, so you can scale the timeout whenever it's required. – Avinash Dalvi Oct 22 '20 at 20:16
