
I have an Azure Data Factory copy activity that loads parquet files into Azure Synapse. The sink is configured as shown below:

[Screenshot: copy activity sink configuration]

After the data load completed, I had a staging table structure like this:

[Screenshot: staging table structure with nvarchar(4000) columns]

Then I create a temp table based on the staging one, and this had been working fine until today, when newly created tables suddenly received the nvarchar(max) type instead of nvarchar(4000):

[Screenshot: auto-created table definition with nvarchar(max) columns]

Temp table creation now fails with the obvious error: Column 'currency_abbreviation' has a data type that cannot participate in a columnstore index.
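
For context, the failing pattern looks roughly like this (table and column names are placeholders): in a Synapse dedicated SQL pool, CTAS defaults to a clustered columnstore index when no index option is given, and that index type rejects nvarchar(max) columns.

```sql
-- Hypothetical temp-table creation from the auto-created staging table.
-- With no index option, CTAS defaults to CLUSTERED COLUMNSTORE INDEX.
CREATE TABLE #currency_tmp
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT currency_abbreviation   -- now auto-created as nvarchar(max)
FROM   stg.currency;
-- Fails with: Column 'currency_abbreviation' has a data type that
-- cannot participate in a columnstore index.
```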

Why has the auto-create table definition changed, and how can I return to the "normal" behavior without nvarchar(max) columns?

Alex

1 Answer


I've got exactly the same problem! I'm using a Data Factory to read CSV files into my Azure data warehouse, and this used to result in nvarchar(4000) columns, but now they are all nvarchar(max). I also get the error Column 'xxx' has a data type that cannot participate in a columnstore index.

My solution for now is to change my SQL code and use a CAST to change the formats, but there must be a setting in the Data Factory to get the former results back...
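
A minimal sketch of that CAST workaround, reusing the hypothetical staging table from the question: casting each affected column back down to nvarchar(4000) lets CTAS build its default clustered columnstore index again.

```sql
-- Cast the nvarchar(max) column back to nvarchar(4000) so the
-- columnstore index can be created (names are placeholders).
CREATE TABLE #currency_tmp
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT CAST(currency_abbreviation AS nvarchar(4000)) AS currency_abbreviation
FROM   stg.currency;
```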

Lalage
  • In the end I settled on the following solution: 1) create an empty copy of the auto-created table, but with nvarchar(4000) fields; 2) copy the data from the "with max" table into the "with 4000" one; 3) rename the "with max" table to some _old name and the "with 4000" table to the original name; 4) drop the _old table. It works fine; the one drawback is the initial run, which takes much longer because it has to copy all the tables, but subsequent runs use the newly created tables with the correct structure. – Alex Feb 17 '21 at 12:06
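
A sketch of the rename approach from the comment above, with hypothetical object names; RENAME OBJECT is the dedicated SQL pool syntax for renaming tables.

```sql
-- 1) Create an empty copy with nvarchar(4000) columns
--    (WHERE 1 = 0 copies the shape, not the rows).
CREATE TABLE dbo.currency_4000
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT CAST(currency_abbreviation AS nvarchar(4000)) AS currency_abbreviation
FROM   dbo.currency
WHERE  1 = 0;

-- 2) Copy the data from the "with max" table into the "with 4000" one.
INSERT INTO dbo.currency_4000
SELECT CAST(currency_abbreviation AS nvarchar(4000))
FROM   dbo.currency;

-- 3) Swap the names, then 4) drop the old table.
RENAME OBJECT dbo.currency TO currency_old;
RENAME OBJECT dbo.currency_4000 TO currency;
DROP TABLE dbo.currency_old;
```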