I have an AWS Lambda function that queries an API and creates a pandas DataFrame. I want to write that DataFrame to an S3 bucket as a CSV file. I am using:
import pandas as pd
import s3fs
df.to_csv('s3.console.aws.amazon.com/s3/buckets/info/test.csv', index=False)
I am getting an error:
No such file or directory: 's3.console.aws.amazon.com/s3/buckets/info/test.csv'
But that directory exists, because I am reading files from there. What is the problem here?
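My suspicion is that the console URL is not a valid path for pandas to write to. Since I have s3fs installed, I was under the impression pandas could write straight to an s3:// style URI; is something like this sketch the correct form (bucket name 'info' taken from my download code below)?

import pandas as pd

# Stand-in for the DataFrame built from the API response
df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

# Untested sketch: with s3fs installed, pandas should accept an s3:// URI
df.to_csv('s3://info/test.csv', index=False)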
I've been reading the existing files from the bucket like this:
import boto3

s3_client = boto3.client('s3')
s3_client.download_file('info', 'secrets.json', '/tmp/secrets.json')
How can I upload the whole DataFrame to the S3 bucket?
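The only workaround I can think of mirrors my download code: write the CSV to Lambda's /tmp directory and upload it with boto3. A minimal sketch, assuming the bucket is 'info' and the target key is 'test.csv':

import boto3
import pandas as pd

# Stand-in for the DataFrame built from the API response
df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

# /tmp is the only writable path inside a Lambda execution environment
df.to_csv('/tmp/test.csv', index=False)

# Upload the temporary file to the bucket
s3_client = boto3.client('s3')
s3_client.upload_file('/tmp/test.csv', 'info', 'test.csv')

I'd prefer to avoid the temporary file if there is a direct way to write the DataFrame to the bucket.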