
I work on a piece of software that has many tables, views, and stored procedures. Currently, to make it easy for developers to run all of the latest updates on their local databases, and for ease of deployment, we have one large Update.sql file. It creates any tables and stored procedures that don't exist and adds/updates/removes data that needs to change. It is designed to be run over and over again without messing up someone's database, applying only the changes that are needed. This is very convenient for the developers and for deployment.
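
To give a rough idea of the pattern (a simplified sketch; the table and procedure names are placeholders, not our real schema), Update.sql is full of guarded blocks like this:

    -- Illustrative only: the idempotent pattern used throughout Update.sql.
    IF OBJECT_ID(N'dbo.Contacts', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.Contacts
        (
            ContactId INT IDENTITY(1,1) PRIMARY KEY,
            FullName  NVARCHAR(200) NOT NULL
        );
    END;
    GO

    -- Stored procedures are dropped and recreated so the latest definition always wins.
    IF OBJECT_ID(N'dbo.GetContacts', N'P') IS NOT NULL
        DROP PROCEDURE dbo.GetContacts;
    GO
    CREATE PROCEDURE dbo.GetContacts
    AS
        SELECT ContactId, FullName FROM dbo.Contacts;
    GO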

However, I would really love to be able to split all of the database objects (tables, functions, stored procedures, back-fills/data updates) into separate scripts in source control. This would allow us to track changes to individual database objects instead of just one large SQL file.

Is there a good way to get the best of both worlds? Perhaps a free tool that can run all SQL files in a folder and all of its sub-folders? Or some batch script that can merge all of the individual files together into a single file after every check-in?
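
For illustration, this is the kind of thing I'm imagining (purely a sketch; the folder and file names are made up): a thin master script that uses SQLCMD's :r include directive, so the individual files stay separate in source control but still run as one script:

    -- Deploy.sql (sketch only): run in SQLCMD mode, e.g. sqlcmd -i Deploy.sql
    -- Each :r pulls in one object script that lives as its own file in source control.
    :r .\Tables\Contacts.sql
    :r .\StoredProcedures\GetContacts.sql
    :r .\Data\Backfill_ContactNames.sql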

EDIT 10/27/2017: After reviewing some of the links shared in the answers, I think this question comes down to finding a way to combine the best parts of state-based and migration-based database update management. Here is an article that I think breaks down the differences and pros/cons pretty well, but I'll summarize the parts I am focused on below.

STATE BASED: This is what is used by Visual Studio SQL Server Projects. It is a snapshot of what the database should look like at the current version. Updates to servers are created by comparing the database to this snapshot and auto-generating scripts that will alter tables/views/SPs/etc. to be what they need to be.
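
For illustration, the auto-generated upgrade script for a simple column addition might look roughly like this (hypothetical output, not taken from any particular tool):

    -- Hypothetical state-based diff output: the snapshot has a MiddleName column
    -- that the target database lacks, so the tool generates the ALTER for you.
    ALTER TABLE dbo.Contacts
        ADD MiddleName NVARCHAR(100) NULL;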

Pros:

  • Version Control: Each database object (table, stored procedure, etc.) is a separate script file. This makes tracking the changes made to those objects over time very manageable, because you can just view the source control history.
  • Compilation: If you are using Visual Studio SQL Server Projects, you can actually compile them and the build will tell you if your references are all good. For instance, if you drop a column from a table and a stored procedure references that column, the build will tell you that the SP references a column that no longer exists, so you can fix it.
  • Simple Deployment: A project with hundreds of individual database object scripts can still update a database in one step, either from Visual Studio using Publish or by compiling it and deploying the resulting DacPac to SQL Server. So even though there are a bunch of individual files, after compiling you end up working with just one file.

Cons:

  • Updating data: In the real world, state-based updates often aren't viable on their own. For example, let's say your Contacts table used to have a Full Name column. In version 2, you decide to split this into First Name and Last Name and drop the Full Name column. Normally you would write scripts to add the new columns, convert the data, and then drop the old column. However, state-based doesn't work that way: it will just drop the old column and add the new ones, without doing anything to convert the data.

MIGRATION BASED: This is pretty much what we are currently doing, except in one really big file instead of several small files. You start with a baseline (which might be an empty database), and then you write one or more scripts that alter that baseline to get it to the current version. For instance, Version1.sql might create the Contacts table with the Full Name column, then Version2.sql could create the First Name/Last Name columns, move the data, and then drop the old column. You can either use tools that only run each script once, in the right order, or you can do what we've been doing: one big script with logic in it that knows which changes have already been applied and only does what still needs to be done.
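
To make that concrete, a hand-written Version2.sql might look roughly like this (a sketch only; the column sizes and the name-splitting logic are placeholders):

    -- Version2.sql (sketch): split FullName into FirstName/LastName, then drop it.
    -- Guards make the script safe to re-run, in the spirit of our Update.sql.
    IF COL_LENGTH('dbo.Contacts', 'FirstName') IS NULL
    BEGIN
        ALTER TABLE dbo.Contacts ADD FirstName NVARCHAR(100) NULL,
                                     LastName  NVARCHAR(100) NULL;
    END;
    GO

    IF COL_LENGTH('dbo.Contacts', 'FullName') IS NOT NULL
    BEGIN
        -- Dynamic SQL so this batch still compiles once FullName no longer exists.
        EXEC(N'
            UPDATE dbo.Contacts
            SET FirstName = LEFT(FullName, CHARINDEX('' '', FullName + '' '') - 1),
                LastName  = LTRIM(STUFF(FullName, 1, CHARINDEX('' '', FullName + '' ''), ''''));
            ALTER TABLE dbo.Contacts DROP COLUMN FullName;');
    END;
    GO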

Pros and Cons: This is basically the reverse of state-based. It gives you a lot of flexibility in how you create your scripts and the power to use real-world logic to update your database the way it needs to be, instead of letting a tool automatically generate drop/alter/insert/etc. scripts for you. Much like state-based, as long as you have the right tools, it is easy to deploy. However, it usually isn't very easy to track the changes made to database objects over time. If I want to see the full history of changes to a particular table, who made them, and when, there's not really an easy way to do this, because there is no single file representing that database object with a source control history. Also, I haven't seen any tools that can take a migration-based strategy and compile it to show you whether the changes introduce any reference issues.

SO, MY QUESTION IS: How can I keep the power, flexibility, and ease of use of the migration-based approach we are currently using, but also get the best parts of state-based (version control and compilation to check dependencies)? I'm up for a hybrid solution as long as it doesn't mean my developers have to manage two things (like writing a migration script but also having to remember to update the SQL project so we can track the history). If I could automate a SQL project to update the database object scripts based on the migration, that would be cool, but it would need to know who made the changes that caused the update and preferably what changeset it happened in.

Thoughts?

Nosfera2

2 Answers


With SQL Server Management Studio you can generate scripts to recreate the database - you could do that and put those scripts in your source control system.

Use "Tasks" , "Generate Scripts" and click through the options. You can use single objects to file.

As for Data ... I think there is some kind of checkbox to export the data as well - not sure though.

e.g. here: Want to create a script to export Data and tables and views to a sql script

Patrick Artner
  • True, but my problem is not generating the scripts. I can easily take our large Update.sql file and chop it up into smaller scripts so I can manage them in Source Control as separate files. My problem is, once I do that, it is a monstrous pain for my developers (and when deploying the software) to run those scripts on a database. When they get latest, which scripts should they run? Right now, they just need to run Update.sql and they are good to go, and there are far too many individual SQL files for them to run by hand. – Nosfera2 Oct 25 '17 at 20:55
  • SSDT does the scripting and creates SSDT projects in your solution. Have a look here: http://thedatabaseavenger.com/2016/08/visual-studio-basics-for-use-with-readyroll/ for creating migration scripts. Disclaimer: I did not use ReadyRoll, only SSDT for scripting on 2008 R2 and Redgate SQL Compare to migrate schemas. – Patrick Artner Oct 25 '17 at 21:41

I'm not sure of a free tool, but the solution in the link below seems interesting...

Run all SQL files in a directory

What I WILL say about that is there are no transactions, so if one of your .sql scripts breaks, it is not going to roll back all of your creations. Other than that though, this should work fine.
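
If that matters, one option (just a sketch, not something from the linked answer) is to wrap the combined script in a single transaction with XACT_ABORT on, so any error aborts the run and rolls everything back:

    -- Sketch only: wrap the combined script so a failure rolls everything back.
    SET XACT_ABORT ON;   -- any run-time error aborts and rolls back the transaction
    BEGIN TRANSACTION;

    -- ... contents of the individual .sql files go here ...

    COMMIT TRANSACTION;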

calamities