I am excited to make the leap to DAC Fx and declarative database development. The major obstacle for me is handling complex data migrations across several different versions of the schema. In the old world, we simply ran all of our upgrade scripts in order, which guaranteed that the schema was in the correct state at the time of each data migration. How does this work when the upgrade path is dynamic?
For example, suppose there are several versions of my schema (DACPAC1-4) on existing instances:
- DACPAC1: tableA exists and has valuable client data
- DACPAC2: tableA is deprecated and replaced by tableB and normalized tableC; adds new tableD
- post-deploy script: move data from tableA to new tableB and tableC; drop tableA
- DACPAC3: tableC has a new, nullable columnX
- post-deploy script: populates nullable column based on tableD
- DACPAC4: tableC.columnX is non-nullable
If I need to support upgrading DACPAC1-3 servers to the latest DACPAC4, I now have to write my pre- and post-deployment scripts cleverly enough to detect which DACPAC is currently on the target and run the data migration steps in the correct order. Furthermore, I cannot simply reuse the naive post-deployment scripts I originally wrote, since they depend on intermediate versions of the schema.
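The closest I have come so far is making the migration script state-based and idempotent: instead of detecting which DACPAC is installed, each step tests for the schema artifact it depends on. A rough sketch using the tables above (column names like clientId and lookupValue are invented placeholders, not my real schema):

```sql
-- Step 1 (DACPAC1 -> 2+): runs only if the deprecated tableA still exists,
-- so it is a no-op on targets that already migrated.
IF OBJECT_ID(N'dbo.tableA', N'U') IS NOT NULL
BEGIN
    INSERT INTO dbo.tableB (clientId, clientName)
    SELECT clientId, clientName FROM dbo.tableA;

    INSERT INTO dbo.tableC (clientId, detailValue)
    SELECT clientId, detailValue FROM dbo.tableA;

    DROP TABLE dbo.tableA;
END

-- Step 2 (DACPAC2 -> 3+): backfill columnX only where it is still NULL,
-- so re-running is harmless on already-upgraded targets.
UPDATE c
SET    c.columnX = d.lookupValue
FROM   dbo.tableC AS c
JOIN   dbo.tableD AS d ON d.clientId = c.clientId
WHERE  c.columnX IS NULL;
```

Even this feels fragile, though: the columnX backfill presumably has to move into the pre-deployment script once DACPAC4 exists, since the ALTER to non-nullable would fail while NULLs remain, and the tableA migration has to run before deployment drops the table. Is this state-sniffing approach really the intended pattern?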
Thanks in advance for any advice!