I'm triggering an Azure Function with a blob trigger. A container samples-workitems already holds a file base.csv and receives a new file new.csv. I read new.csv from the trigger's InputStream (myblob) and base.csv from a blob input binding on the same container (base).
import logging
from io import BytesIO

import azure.functions as func
import pandas as pd


def main(myblob: func.InputStream, base: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
    logging.info(f"Base file info \n"
                 f"Name: {base.name}\n"
                 f"Blob Size: {base.length} bytes")

    # new.csv arrives via the trigger binding, base.csv via the blob input binding
    df_base = pd.read_csv(BytesIO(base.read()))
    df_new = pd.read_csv(BytesIO(myblob.read()))

    print(df_new.head())
    print("printing base dataframe")
    print(df_base.head())
Output:
Python blob trigger function processed blob
Name: samples-workitems/new.csv
Blob Size: None bytes
Base file info
Name: samples-workitems/new.csv
Blob Size: None bytes
first 5 rows of df_new (cannot show data here)
printing base dataframe
first 5 rows of df_base (cannot show data here)
Even though both files show their own content when printed, myblob.name and base.name have the same value, samples-workitems/new.csv, which is unexpected. I would expect myblob.name to be new.csv while base.name should be base.csv.
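If it helps narrow this down, below is a minimal diagnostic sketch (not verified against this exact setup) that logs the uri property of azure.functions.InputStream alongside name, to check whether the two bindings actually resolve to different blobs even though name is the same:

# Diagnostic sketch: compare uri as well as name for both streams.
import logging

import azure.functions as func


def main(myblob: func.InputStream, base: func.InputStream):
    # uri is the blob's primary location; if the input binding really resolved
    # to base.csv, the two URIs should differ even when name does not.
    logging.info("trigger blob: name=%s uri=%s", myblob.name, myblob.uri)
    logging.info("input blob:   name=%s uri=%s", base.name, base.uri)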
function.json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "my_storage"
    },
    {
      "type": "blob",
      "name": "base",
      "path": "samples-workitems/base.csv",
      "connection": "my_storage",
      "direction": "in"
    }
  ]
}
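For completeness, here is a sketch of reading base.csv directly with the azure-storage-blob SDK, bypassing the input binding, to rule out a path problem in the binding above. Treating the my_storage app setting as a plain connection string is an assumption on my part; the binding only names the setting:

# Sketch: read base.csv directly, without the blob input binding.
# Assumes the my_storage app setting contains a storage connection string.
import os
from io import BytesIO

import pandas as pd
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["my_storage"])
blob_client = service.get_blob_client(container="samples-workitems", blob="base.csv")

# download_blob().readall() returns the blob contents as bytes
df_base = pd.read_csv(BytesIO(blob_client.download_blob().readall()))
print(df_base.head())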