
I am new to ADF. I have a requirement to load data from 15 CSV files into 15 Azure SQL Database tables. The pipeline has a trigger that runs it every time a blob is created.

I would like to make this pipeline dynamic. My CSV file name contains the table name. For example, Input_202005 is the CSV file and the table name is Input.

Similarly, I have 14 other files/tables whose metadata is different.

Because I want to run the pipeline every time a blob is created, I do not need a Get Metadata or a ForEach activity. I want the pipelines to run in parallel, one per blob. Is there a way to know which blob/file triggered the pipeline and to get the name of that file without defining parameters in the trigger? I do not want to use 15 trigger parameters.

Or is there a better solution for my requirement? Any suggestions are appreciated.

MBK

1 Answer


Add a parameter to your pipeline, say, triggeringFile.

When you create the trigger, a form pops out on the right side. After you submit the first page, a second page appears asking for a value for the pipeline parameter triggeringFile. In that box, put @trigger().outputs.body.fileName
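If you export the trigger as JSON, the parameter mapping described above ends up in the trigger's pipelines section. A minimal sketch, assuming a blob event trigger; the trigger, container, and pipeline names here are illustrative, not from the question:

```json
{
  "name": "BlobCreatedTrigger",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/input-container/blobs/",
      "events": [ "Microsoft.Storage.BlobCreated" ]
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "LoadCsvPipeline",
          "type": "PipelineReference"
        },
        "parameters": {
          "triggeringFile": "@trigger().outputs.body.fileName"
        }
      }
    ]
  }
}
```

A single trigger defined this way fires once per created blob, so each of the 15 files gets its own parallel pipeline run with its own triggeringFile value.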

If the format you gave is the standard one, then your table name is just @{split(pipeline().parameters.triggeringFile,'_')[0]}
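To see what that expression resolves to, here is a small local sketch of the same logic in Python, assuming file names shaped like Input_202005.csv (the helper function name is mine, not part of ADF):

```python
def table_name_from_file(triggering_file: str) -> str:
    """Mirror of the ADF expression split(pipeline().parameters.triggeringFile, '_')[0]:
    everything before the first underscore is treated as the table name."""
    return triggering_file.split('_')[0]

print(table_name_from_file("Input_202005.csv"))  # → Input
```

Any file that follows the Tablename_yyyyMM convention will resolve this way, so one pipeline with a dynamic sink table name covers all 15 files.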

Jason Welch