
Recently, we faced a major production issue because one of our team members wrote incorrect SQL join code.

We have 50 databases in SQL Server.

Can we create a server-level trigger so that whenever records in any database table are changed (UPDATE, DELETE), we receive an alert email?

Note: I am talking about a server-level trigger, not an individual table-level trigger.

  • This is called a DDL trigger. You can look it up in the documentation. – Gordon Linoff Apr 02 '18 at 10:29
  • A DDL trigger does not help identify the records. For example, if someone deletes 1,000 records, I can't get that kind of information from a DDL trigger. – Apr 02 '18 at 11:15
  • Well, you kind of go back and forth here. Are you trying to track data changes or structure changes? Maybe you want both? You can look into CDC to track data changes (see the sketch after these comments). One downside is you don't get any data about who made those changes, just details about the data that changed. Regardless of the method, if you need to track data changes it is done at the table level. – Sean Lange Apr 02 '18 at 13:36
  • Your question is too vague. What, specifically, are you trying to track? Are you trying to track when changes are made, or are you trying to keep a record of historical data that has been changed? – J Weezy Apr 02 '18 at 14:14
  • I don't want to track the history of data (DB size growth). I just want to track how many rows are affected in each table over an entire day. For example, if an UPDATE statement updates more than 10,000 rows, I want an alert so I can take corrective action. – Apr 03 '18 at 04:20
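As suggested in the comments, Change Data Capture (CDC) can record data changes per table. Below is a minimal sketch of enabling it; the database name MyDatabase and the table dbo.Orders are purely illustrative, and SQL Server Agent must be running for the capture job to collect changes.

```sql
USE MyDatabase;   -- hypothetical database name
GO
-- Enable CDC for the database, then for each table you want tracked.
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;   -- NULL = no gating role; restrict this in production
GO
-- Changes are then read from the generated change-table function, e.g.:
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');
```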

1 Answer


(It seems like the OP is asking for advice on which solution to implement regarding failures in production releases - sounds like there was a spectacular blow up.)

The answer is, it depends. Stack Overflow is better geared towards solving specific problems, rather than providing advice and training (i.e., provide us code that is giving you a problem). However, I will at least try to give you a few things to consider in order to help you on your way. Perhaps, these answers will allow you to better articulate what problem you are trying to solve in the future.

Solution 1: Create UPDATE/DELETE triggers to preserve changes to the database so that corrupted data can be easily restored from a sister table (rather than doing a DB restore). For this, see How to Create Trigger to Keep Track of Last Changed Data. I don't like this idea because you are duplicating data, and it can result in a data maintenance nightmare.
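To make Solution 1 concrete, here is a minimal sketch of such a trigger. The table dbo.Orders, the sister table dbo.Orders_Audit, and their columns are hypothetical names chosen only for illustration:

```sql
-- Hypothetical sister table holding the pre-change image of each affected row.
CREATE TABLE dbo.Orders_Audit
(
    OrderId     INT           NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL,
    AuditAction VARCHAR(10)   NOT NULL,   -- 'UPDATE' or 'DELETE'
    AuditDate   DATETIME2     NOT NULL,
    AuditUser   SYSNAME       NOT NULL
);
GO

CREATE TRIGGER dbo.trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- "deleted" contains the old version of every row touched by the statement.
    INSERT INTO dbo.Orders_Audit (OrderId, Amount, AuditAction, AuditDate, AuditUser)
    SELECT d.OrderId,
           d.Amount,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'UPDATE' ELSE 'DELETE' END,
           SYSUTCDATETIME(),
           SUSER_SNAME()
    FROM deleted AS d;
END;
```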

Solution 2: Implement auditing throughout the entire design process by adding Create (for INSERTs) and Update audit columns to all of your tables and databases: two for each, storing a timestamp and a username (four columns total). This seems unlikely because your firm already has 50 databases with an unknown number of tables and jobs (note: the jobs would need to be updated to reflect this change and re-deployed).
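As a sketch, the four audit columns for Solution 2 might look like this on a hypothetical dbo.Orders table (the column names are an assumption, not a standard):

```sql
ALTER TABLE dbo.Orders
    ADD CreateDate DATETIME2 NOT NULL
            CONSTRAINT DF_Orders_CreateDate DEFAULT SYSUTCDATETIME(),
        CreateUser SYSNAME   NOT NULL
            CONSTRAINT DF_Orders_CreateUser DEFAULT SUSER_SNAME(),
        UpdateDate DATETIME2 NULL,
        UpdateUser SYSNAME   NULL;

-- Every UPDATE (whether in application code or via an AFTER UPDATE trigger)
-- is then responsible for setting UpdateDate = SYSUTCDATETIME() and
-- UpdateUser = SUSER_SNAME().
```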

However, it is a good principle to follow because you will be able to track when, and by whom, changes were made to the system without having to duplicate your data (solution 1). This makes tracking down problems and identifying bad data a simple task. Once the bug is fixed, you can re-run the job for the time period that covers all timestamps with bad data so that the data can be fixed. This also serves as a good (perfect?) test case to validate the bug fix. You can take this a step further by adding a separate auditing table that all jobs report to with respect to job name, run times, process dates, file names, folder paths, record counts (INSERT, UPDATE, DELETE, ERROR, BYPASS), etc.

Then, implement foreign keys to your fact tables for easy cross-reference. Keep in mind that you can normalize this task with dimension tables. This is a lot of work from a design and development standpoint, but it will save both the DBA and the developer time and sanity when trying to track what the system does, by providing a clear and concise table-driven record of the who, what, where, when, and why - as opposed to digging through the logs or, even worse, failing to identify the problem within the current data model.
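A rough sketch of such a job-auditing table and the fact-table foreign key follows; every name here (dbo.JobAudit, dbo.FactSales) is hypothetical:

```sql
-- Central table that every job reports its runs and record counts to.
CREATE TABLE dbo.JobAudit
(
    JobAuditId  INT IDENTITY(1,1) PRIMARY KEY,
    JobName     SYSNAME       NOT NULL,
    ProcessDate DATE          NOT NULL,
    StartTime   DATETIME2     NOT NULL,
    EndTime     DATETIME2     NULL,
    FileName    NVARCHAR(260) NULL,
    FolderPath  NVARCHAR(260) NULL,
    InsertCount INT           NOT NULL DEFAULT 0,
    UpdateCount INT           NOT NULL DEFAULT 0,
    DeleteCount INT           NOT NULL DEFAULT 0,
    ErrorCount  INT           NOT NULL DEFAULT 0,
    BypassCount INT           NOT NULL DEFAULT 0
);

-- Each fact row records which job run produced it, giving the cross-reference
-- described above.
ALTER TABLE dbo.FactSales
    ADD JobAuditId INT NULL
        CONSTRAINT FK_FactSales_JobAudit REFERENCES dbo.JobAudit (JobAuditId);
```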

There are probably more solutions that could follow this, but the above two encapsulate the entire realm of solutions that I can think of, which is this: there is no optimal solution for fixing poor design, coding, and regression testing. You can either put a band-aid on it with triggers, or implement a design methodology that can provide some auditing on your tables.

J Weezy
  • Solution #2 only retains the last change. Solution #1 creates a complete log of all changes, thus #1 is far more useful (if more difficult to get information back out of). Solution #3 would be to use temporal tables (unfortunately n/a on SQL 2008), which effectively implement solution #1 for you (see the sketch after these comments). Solution #1 is _not_ duplicate data, but rather a log of all changes over the life of the record. This is how standard audit systems work. – jleach Apr 02 '18 at 14:40
  • Regarding solution 1, perhaps, but it comes very close to turning that data into a slowly changing dimension (SCD). I shied away from acknowledging this because SCDs contain VALID historical data whereas side-audit tables aim to preserve a valid historical record to fall back on when the current record is INVALID, which really should be handled by database backups. So, solution 1 straddles the gray area that exists between database backups and SCDs. I would not implement solution 1, even if my life depended on it. I only discussed it b/c the OP brought it up. – J Weezy Apr 02 '18 at 14:49
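For reference, a minimal sketch of the temporal-table approach mentioned as Solution #3 in the comment above (available from SQL Server 2016 onward; table and column names are hypothetical):

```sql
CREATE TABLE dbo.Orders
(
    OrderId   INT IDENTITY(1,1) PRIMARY KEY,
    Amount    DECIMAL(18,2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersHistory));

-- Prior versions of every row can then be queried with, e.g.:
-- SELECT * FROM dbo.Orders FOR SYSTEM_TIME ALL WHERE OrderId = 42;
```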