I have a small app that uploads data from a CSV file to the cloud. The app runs on several servers, and each of these servers produces such a file.
My app incrementally imports data from the CSV file into a DataTable and bulk loads it into Azure SQL using SqlBulkCopy.
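For context, the import side is roughly like the sketch below (simplified; csvPath and lastLoadedID are placeholder names, not the actual code):
' Simplified sketch of the incremental CSV import
' (needs System.Data, System.IO and System.Linq)
Dim dt As New DataTable()
dt.Columns.Add("LogID", GetType(Long))
dt.Columns.Add("LogDate", GetType(DateTime))
dt.Columns.Add("EventName", GetType(String))
dt.Columns.Add("EventType", GetType(String))

For Each csvLine As String In File.ReadLines(csvPath).Skip(1) ' skip the header row
    Dim fields = csvLine.Split(","c)
    Dim logID = Long.Parse(fields(0))
    If logID > lastLoadedID Then ' only pick up rows not uploaded yet
        dt.Rows.Add(logID, DateTime.Parse(fields(1)), fields(2), fields(3))
    End If
Next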
Currently the table in Azure SQL matches the CSV file layout.
Azure SQL Table
CREATE TABLE stg.EventLog(
    EventLogID BIGINT PRIMARY KEY,
    EventLogDate DATETIME NOT NULL,
    EventName VARCHAR(255) NOT NULL,
    EventType VARCHAR(50) NOT NULL
)
However, one server needs to modify its CSV file to include an additional field, so I have to add the corresponding column to the table and make sure it accepts NULL values.
The update to the table will then be:
ALTER TABLE stg.EventLog ADD Comments VARCHAR(1024) NULL
My app's original sync code looks as follows:
Using bc As New SqlBulkCopy(cnn)
    bc.DestinationTableName = "stg.EventLog"
    bc.BatchSize = 10000
    ' Map the DataTable columns to the destination table columns
    bc.ColumnMappings.Add("LogID", "EventLogID")
    bc.ColumnMappings.Add("LogDate", "EventLogDate")
    bc.ColumnMappings.Add("EventName", "EventName")
    bc.ColumnMappings.Add("EventType", "EventType")
    bc.WriteToServer(dt)
End Using
Now here is the tricky part. I can't update the app on the other servers until the end of February 2021 due to corporate red tape, but I have to update the one server whose CSV file structure is changing ASAP. This means I need to add the additional column mapping to the app, then build and deploy it to that server.
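The updated code for that one server would look roughly like this (just a sketch; it assumes the DataTable also gets a matching Comments column read from the new CSV field):
Using bc As New SqlBulkCopy(cnn)
    bc.DestinationTableName = "stg.EventLog"
    bc.BatchSize = 10000
    bc.ColumnMappings.Add("LogID", "EventLogID")
    bc.ColumnMappings.Add("LogDate", "EventLogDate")
    bc.ColumnMappings.Add("EventName", "EventName")
    bc.ColumnMappings.Add("EventType", "EventType")
    bc.ColumnMappings.Add("Comments", "Comments") ' new, nullable column
    bc.WriteToServer(dt)
End Using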
So my question is: based on the .NET code above, will the sync still work on the other servers if the destination table contains a nullable column that is not mapped in SqlBulkCopy?
Please let me know if more info is needed.