2 answers to your questions:
- How to read content of a file from a folder in S3 bucket using python?
- How to check if the report is present and return a boolean value?
Get S3-object
S3-object as bytes
s3_client = boto3.client('s3')
response = s3_client.get_object(Bucket=S3_BUCKET_NAME, Key=KEY)  # get_object takes no Prefix parameter
body = response['Body'].read()  # returns bytes

NOTE: read() returns bytes. So if you want to get a string out of it, you must use .decode(charset) on it:

python_object = json.loads(body.decode('utf-8'))
S3-object as string
See Open S3 object as a string with Boto3.
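A minimal sketch of reading the object directly into a string (assuming the same S3_BUCKET_NAME and KEY as in the full example below):

import boto3

s3_client = boto3.client('s3')
response = s3_client.get_object(Bucket=S3_BUCKET_NAME, Key=KEY)
content = response['Body'].read().decode('utf-8')  # bytes -> str
print(content)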
Check if S3-object is present
For example, to check the availability of the report, load the object's metadata (a HEAD request) and catch the ClientError that boto3 raises when the key does not exist:
import boto3
import json
from botocore.exceptions import ClientError

S3_BUCKET_NAME = ''
KEY = 'fee_summary/fee_summary_report.json'


def send_fee_summary_notification():
    fee_summary_report = get_fee_summary_report()
    print(fee_summary_report)


def get_fee_summary_report():
    s3_client = boto3.client('s3')
    response = s3_client.get_object(Bucket=S3_BUCKET_NAME, Key=KEY)
    data = response['Body'].read()  # bytes; json.loads accepts bytes in Python 3.6+
    fee_summary_report = json.loads(data)
    return fee_summary_report


def has_fee_summary_report():
    s3 = boto3.resource('s3')
    try:
        s3.Object(S3_BUCKET_NAME, KEY).load()  # HEAD request for the report's metadata
        return True
    except ClientError as e:
        if e.response['Error']['Code'] == '404':
            return False  # report not found
        raise  # other error, e.g. missing permissions
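A possible way to wire the two helpers together; the guard itself is my assumption, not part of your code:

if has_fee_summary_report():
    send_fee_summary_notification()
else:
    print('fee summary report not available yet')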
Use paging to scan all objects (for debugging)
You can also iterate over all objects in your bucket via paging and test whether the desired report (with the specified KEY) exists, e.g. as an alternative implementation of has_fee_summary_report:

def has_fee_summary_report():
    s3 = boto3.resource('s3')
    for page in s3.Bucket(S3_BUCKET_NAME).objects.pages():
        for obj in page:
            print(obj.key)  # debug print
            if obj.key == KEY:
                return True
    return False
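If you prefer the low-level client, the same scan can be done with a paginator over list_objects_v2. This is just a sketch, assuming the same S3_BUCKET_NAME and KEY as above; the function name is made up:

import boto3

def has_fee_summary_report_via_listing():
    s3_client = boto3.client('s3')
    paginator = s3_client.get_paginator('list_objects_v2')
    # Prefix narrows the listing to the report's folder
    for page in paginator.paginate(Bucket=S3_BUCKET_NAME, Prefix='fee_summary/'):
        for obj in page.get('Contents', []):  # 'Contents' is absent on empty pages
            if obj['Key'] == KEY:
                return True
    return False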