Suppose I have the following table in my SQL Server (2012) database:
MyTable:
Col1:    Col2:    Col3:    Col4:    VersionNum:
------------------------------------------------
Val11    Val21    Val31    Val41    1
Val12    Val22    Val32    Val42    1
Val13    Val23    Val33    Val43    1
...
And I have the following data (say about 20000 records) that I'd like to merge with my current table:
New Data:
Col1:    Col2:    Col3:    Col4:
------------------------------------
Val11    Val21    Val31    Val41
Val12a   Val22    Val32    Val42
So the first new row is an exact match to the first existing row, whereas the second new row differs in at least one column (Col1).
What I'd like my table to end up looking like is:
MyTable:
Col1:    Col2:    Col3:    Col4:    VersionNum:
------------------------------------------------
Val11    Val21    Val31    Val41    1
Val12    Val22    Val32    Val42    1
Val13    Val23    Val33    Val43    1
Val12a   Val22    Val32    Val42    2
Or, putting it into words:
- If all data elements match, then don't add a new row
- If any one or more data elements differ, then add a new row with an updated version number (roughly what the sketch below attempts)
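To make the logic concrete, here is a rough sketch of the kind of statement I've been picturing. The names are just placeholders (the new data is assumed to already be loaded into a staging table I'm calling #NewData), and I have no idea whether this is a sensible approach for ~20,000 incoming rows:

-- Placeholder names: dbo.MyTable is the existing table,
-- #NewData is a staging table holding the ~20,000 incoming rows (Col1..Col4)
DECLARE @NewVersion int;

-- Version number to stamp on anything inserted by this load
SELECT @NewVersion = ISNULL(MAX(VersionNum), 0) + 1
FROM dbo.MyTable;

-- Insert only the incoming rows that don't already exist with identical values
INSERT INTO dbo.MyTable (Col1, Col2, Col3, Col4, VersionNum)
SELECT n.Col1, n.Col2, n.Col3, n.Col4, @NewVersion
FROM #NewData AS n
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.MyTable AS t
    WHERE t.Col1 = n.Col1
      AND t.Col2 = n.Col2
      AND t.Col3 = n.Col3
      AND t.Col4 = n.Col4
);

I assume a MERGE with a WHEN NOT MATCHED clause could do the same thing, but I don't know which of these (or something else entirely) scales better.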
I found this question, which seems to deal with a similar issue, but only for a primary key difference. What I'm wondering is: what would be the best / most efficient way to do this, given such a large dataset to merge? If there are any better patterns (an archive table or something similar), please do share - this is all very new to me and I'd like to do things as cleanly and efficiently as possible.