This is going to be one of those worst-of-the-worst situations where you're going to want to suggest something entirely different. Don't worry, I know, and unfortunately there's nothing I can do about it, so please try to limit your answers to ones that explain the smallest change possible that will make the biggest impact.

With that said, here's my situation: there is a process that replicates data from a Sybase server (15.5) to a SQL Server (2008 R2) every 30 minutes during business hours. On the SQL server, there is a linked server to the Sybase database from which we are copying tables. Here's the scary part: the copying is done by DELETING all of the existing rows and INSERTING the new ones from Sybase, in this fashion (within a stored procedure on the SQL server, triggered by enterprise scheduling software):

-- Table 1
DELETE FROM abc1;
INSERT INTO abc1 (col1, col2, col3)
    SELECT col1, col2, col3 FROM LINKED.SERVER.dbo.abc1;
GO

-- Table 2
DELETE FROM abc2;
INSERT INTO abc2 (col1, col2, col3)
    SELECT col1, col2, col3 FROM LINKED.SERVER.dbo.abc2;
GO

-- ... and so on for hundreds of tables

There are no indices, primary keys, foreign keys, or data integrity whatsoever on the SQL server copy, but we can control the table schemas; we cannot, however, touch the Sybase server. The MSSQL database is basically a read-only environment for reporting and lookups. Performance isn't critical. It's been this way for years and so it lives on...

If you haven't vomited yet, you might have already thought of the problem: every once in a while, for a couple of seconds, our applications and reports cannot find (and sometimes cannot access) the data in the tables while they are being deleted from and inserted to.

I'm no DBA, so things like table locks, hold locks, exclusive locks, etc. are foreign to me, but I gather that perhaps they could be of some help in this situation. After reading through MSDN's article on table hints, my brain is telling me that I should do something with transactions and table locks. I have no idea what, say, the following would do, because I am also unsure how to test it:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO

-- Table 1
BEGIN TRANSACTION UpdateAbc1
    DELETE FROM abc1 WITH (TABLOCK, HOLDLOCK);
    INSERT INTO abc1 WITH (TABLOCK, HOLDLOCK) (col1, col2, col3)
        SELECT col1, col2, col3 FROM LINKED.SERVER.dbo.abc1;
COMMIT TRANSACTION UpdateAbc1

-- Table 2
BEGIN TRANSACTION UpdateAbc2
    DELETE FROM abc2 WITH (TABLOCK, HOLDLOCK);
    INSERT INTO abc2 WITH (TABLOCK, HOLDLOCK) (col1, col2, col3)
        SELECT col1, col2, col3 FROM LINKED.SERVER.dbo.abc2;
COMMIT TRANSACTION UpdateAbc2

So my questions are:

  • Would modifying the stored procedure to incorporate transactions and locking allow requests to tables that are about to be modified to carry on without coming back empty and/or causing deadlocks?
  • If not, and if you could change something (on the SQL server), what would it be?

Sybase and Microsoft SQL Server are at the point where normal database replication is possible, and it has been considered, but the task is too unwieldy for our group and our budget.

Cᴏʀʏ
1 Answer

An idea... shuffle tables so the one being queried isn't the one being loaded.

You can use ALTER TABLE .. SWITCH. With SWITCH you can move data between two identically structured tables on the same filegroup. Note that the target table must be empty, so it may not suit your setup as-is.
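A minimal sketch of the SWITCH approach, assuming a staging table abc1_stage with a schema identical to abc1 on the same filegroup (the staging table name is illustrative):

```sql
-- Load the staging copy while readers keep using abc1.
TRUNCATE TABLE dbo.abc1_stage;
INSERT INTO dbo.abc1_stage (col1, col2, col3)
    SELECT col1, col2, col3 FROM LINKED.SERVER.dbo.abc1;

-- Swap: SWITCH is a metadata-only operation, so readers are
-- blocked only for the instant of the transaction below,
-- not for the duration of the linked-server load.
BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.abc1;                   -- SWITCH target must be empty
    ALTER TABLE dbo.abc1_stage SWITCH TO dbo.abc1;
COMMIT TRANSACTION;
```

The slow part (pulling rows across the linked server) happens entirely on the staging table, so the window where abc1 is unreadable shrinks from the length of the load to the length of the swap.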

In that case, consider hiding the tables with synonyms instead.
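A sketch of the synonym variant, assuming two physical copies (abc1_a and abc1_b are illustrative names) and a synonym abc1 that all reports query through:

```sql
-- One-time setup (readers query the synonym, never the base tables):
--   CREATE SYNONYM dbo.abc1 FOR dbo.abc1_a;

-- Each refresh cycle: reload the idle copy, then repoint the synonym.
TRUNCATE TABLE dbo.abc1_b;
INSERT INTO dbo.abc1_b (col1, col2, col3)
    SELECT col1, col2, col3 FROM LINKED.SERVER.dbo.abc1;

-- Repointing is DDL and can be wrapped in a transaction, so readers
-- see either the old copy or the new one, never an empty table.
BEGIN TRANSACTION;
    DROP SYNONYM dbo.abc1;
    CREATE SYNONYM dbo.abc1 FOR dbo.abc1_b;
COMMIT TRANSACTION;
```

On the next cycle the roles reverse (reload abc1_a, repoint back), so the copy being loaded is never the copy being queried.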

gbn
  • Along the lines of the synonym idea, what about the process that SSMS takes when changing the structure of a table: create a temporary table, populate it, drop the original and then rename the temp table? Certainly that would be better than the current process, but I'm not sure how the overhead compares WRT transaction logs...our hourly logs are sometimes into the gigabytes. – Cᴏʀʏ Jan 20 '12 at 21:14
  • @Cory I wouldn't physically delete and create tables like that in production. Better to have 2 tables and shuffle. All writes would be logged anyway, of course. – gbn Jan 20 '12 at 21:33