
I have to connect my AWS Lambda function to an AWS S3 bucket in order to get a custom pickle object. I was able to do that by following this AWS resource, and it works fine.

However, I also need to put the S3 bucket URL explicitly inside the Lambda in order to download the large Python package xgboost into the temp directory of the AWS Lambda container (like this and this). I was able to make this work, but to do so I had to grant my S3 bucket public access.
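For context, this is roughly what that step looks like inside my handler (the bucket URL and key below are placeholders, not my real names); it only works while the object is publicly readable:

import io
import sys
import zipfile
import urllib.request

# Placeholder public URL of the packaged xgboost zip in my bucket
XGBOOST_ZIP_URL = 'https://my-bucket.s3.amazonaws.com/xgboost.zip'

def load_xgboost():
    # Download the zip and unpack it into /tmp, the only writable path in Lambda
    with urllib.request.urlopen(XGBOOST_ZIP_URL) as response:
        with zipfile.ZipFile(io.BytesIO(response.read())) as archive:
            archive.extractall('/tmp')
    # Make the unpacked package importable
    sys.path.insert(0, '/tmp')
    import xgboost
    return xgboost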

My question is: is it possible to reference an S3 object URL explicitly within AWS Lambda without allowing public access to the S3 bucket?

  • How about packaging the Lambda as a container image? [New for AWS Lambda – Container Image Support](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/). [Here is an example](https://github.com/alexeybutyrev/aws_lambda_xgboost). – samtoddler Mar 20 '21 at 15:21
  • You seem to be getting one object from S3 successfully (the pickle object) but not the other (the xgboost package). What is the difference in how you are trying to get these objects? – jarmod Mar 20 '21 at 15:37
  • @jarmod The difference is that the accessible object is a `pickle` file which I load using `pickle.load` and `boto3`'s `get_object` method. The inaccessible file is the Linux `xgboost` `.zip` library, which I want to access through an explicit URL. – Makaroni Mar 20 '21 at 15:49

1 Answer


I found the answer: define a presigned URL using boto3 (more details here):

import boto3

s3_client = boto3.client('s3')

BUCKET = 'my-bucket'
OBJECT = 'xgboost.zip'

# Generate a temporary, signed GET URL for the object (valid for 300 seconds)
url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': BUCKET, 'Key': OBJECT},
    ExpiresIn=300)

Using this, I can block all public access to my S3 bucket.
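For completeness, a sketch of how the presigned url can then replace the public URL in the download-and-extract step (same placeholder key as above):

import io
import sys
import zipfile
import urllib.request

# The presigned URL behaves like any other HTTPS URL, but only for the
# configured 300 seconds and without the bucket being publicly accessible.
with urllib.request.urlopen(url) as response:
    with zipfile.ZipFile(io.BytesIO(response.read())) as archive:
        archive.extractall('/tmp')

sys.path.insert(0, '/tmp')
import xgboost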
