I am trying to set up an ETL pipeline where
- The source is a SQL Server table column stored as a binary stream
- The destination (sink) is an S3 bucket
My requirements are:
- To read binary stream column from SQL Server table
- Process the binary stream data row by row
- Upload a file to an S3 bucket for each binary stream row
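For reference, outside of ADF's built-in connectors, the row-by-row transfer I need could be sketched like this (a minimal sketch; `fetch_rows` and `put_object` are hypothetical stand-ins for the actual pyodbc query and boto3 `put_object` calls, and all names are assumptions):

```python
# Sketch: export each binary-stream row from SQL Server as its own S3 object.
# put_object is injected so the upload backend (e.g. boto3) can be swapped in.

def object_key(prefix: str, row_id: int) -> str:
    """Build the S3 object key for one row."""
    return f"{prefix}/row-{row_id}.bin"

def export_rows(rows, put_object, prefix="exports"):
    """Upload each (row_id, blob) pair as a separate object; return the keys written.

    rows       -- iterable of (row_id, bytes), e.g. fetched via pyodbc
    put_object -- callable(key, body), e.g. a wrapper around
                  s3.put_object(Bucket=..., Key=key, Body=body)
    """
    keys = []
    for row_id, blob in rows:
        key = object_key(prefix, row_id)
        put_object(key, blob)
        keys.append(key)
    return keys

if __name__ == "__main__":
    # Demo with an in-memory "bucket" instead of real S3.
    bucket = {}
    rows = [(1, b"\x00\x01"), (2, b"\x02")]
    written = export_rows(rows, lambda k, b: bucket.__setitem__(k, b))
    print(written)  # ['exports/row-1.bin', 'exports/row-2.bin']
```

In a real run, `rows` would come from a pyodbc cursor over the binary column and `put_object` would wrap a boto3 S3 client; the question is whether ADF itself can do this step.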
I have tried Data Flow, the Copy activity, and the AWS connectors in Azure Data Factory, but none of them offer an option to set an S3 bucket as the destination (sink).
Is there any other approach in Azure Data Factory that meets these requirements?