
I was wondering if it is possible to add/update/delete rows in a SQL Server database table and an Informix database table at the same time.

Both databases will have the same table (data and all), so the query would only change based on which database it is going to. For some reason, we need the data in both databases kept up to date in real time.

Is it possible to do this with a SQL Trigger or maybe a SProc?

Any insight into how to do this, or a push in the right direction, would be very much appreciated.

leety

3 Answers


Doing a synchronous update, i.e. a distributed transaction using a linked server, is technically possible, even from a trigger, but I would definitely advise against it. Aaron raises the issue of how reliable XA is in general, but my point is different: availability. Your update in SQL Server will fail if it cannot connect to and update Informix. Downtime (patching, maintenance, not to mention disasters) of the Informix site will imply downtime of the SQL Server site, driving your five 9's toward nine 5's quite fast... This is why I strongly advocate decoupling the application of updates. Transactional Replication is such an example of decoupling, and it supports heterogeneous environments (i.e. an Informix client downstream to accept the changes).
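For illustration, here is a minimal sketch of the trigger-over-a-linked-server approach being advised against. The linked server name INFORMIX_LNK and the table/column names are all made up, and a real Informix linked server may require OPENQUERY rather than four-part names:

```sql
-- Hypothetical sketch of the approach being advised against: a trigger that pushes
-- every insert synchronously to Informix through a linked server (INFORMIX_LNK is made up).
-- The cross-server INSERT enlists MSDTC, so the local insert fails whenever Informix is unreachable.
CREATE TRIGGER trg_Customer_PushToInformix
ON dbo.Customer
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Four-part naming is assumed to work for this provider; OPENQUERY may be needed instead.
    INSERT INTO INFORMIX_LNK.stores_demo.informix.customer (customer_id, name, updated_at)
    SELECT customer_id, name, updated_at
    FROM inserted;
END;
```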

You will have a delay in update visibility (the state in SQL Server will be reflected in Informix after a delay that can be milliseconds, seconds, minutes, even hours on a bad day). And the updates are one way; nothing flows back from Informix to SQL Server. But doing master-master replication in a heterogeneous environment is something that not even Chuck Norris would attempt, just saying.

Remus Rusanu

Maintaining two different DBMSs within a single transaction requires a transaction monitor, such as an XA system, to coordinate the transactions. There are such systems. The XA specification is typically the underlying standard. Both Microsoft's SQL Server and IBM's Informix work with such systems, and it is possible to have SQL Server and Informix controlled by the same transaction monitor. I have fewer qualms about the technical competency of such systems than the others who've answered; I share their concerns about whether it is appropriate for you.

Such systems are very heavyweight. If you want consistency, all transactions that modify the single table described in the question will need to use the same XA services (plural; likely one for insert, one for update, one for delete) to do so. Further, if the same transactions need to manage any other tables too, then you need to add and use services for those tables as well. It is this aspect that tends to make such systems difficult to manage.

Using a replication system with the potential for delay before the sites are consistent is probably better than trying for absolute synchronicity, unless there are cogent demands for such synchronicity.

If there really is a demand for absolute synchronicity, then use a transaction monitor.

  • Do not roll your own.

They are hard to get right. Handling all the special cases is tricky. And (under the hypothesis that you need absolute synchronicity) getting it wrong is easy and costly.

Jonathan Leffler

That depends on your definition of "possible". Technically, you can use a technique called "two-phase commit."

The idea is that you send the data to both databases and then send a "prepare commit" command, which does everything necessary to commit the data except actually committing it. If the prepare fails on either side, the commit fails too. If both prepares succeed, then the commit must succeed.
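On the SQL Server side, this is what a distributed transaction coordinated by MSDTC does under the covers. A hedged sketch, assuming a linked server named INFORMIX_LNK and made-up table names:

```sql
-- Sketch only: MSDTC coordinates the two-phase commit between SQL Server and the linked server.
BEGIN DISTRIBUTED TRANSACTION;

INSERT INTO dbo.Customer (customer_id, name)
VALUES (42, 'Acme Corp');

INSERT INTO INFORMIX_LNK.stores_demo.informix.customer (customer_id, name)
VALUES (42, 'Acme Corp');

-- COMMIT drives phase 1 (prepare) on both resource managers, then phase 2 (commit).
COMMIT TRANSACTION;
```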

Brilliant idea, doesn't work in practice. A common case: you send the commit to both databases and one of the commit messages gets lost on the way (network outage). It happens rarely, but when it does you have an inconsistent state and, since this step must not fail, no good way to clean up.

So my solution works like this:

  1. You load the data into a new table which has two extra columns (one per target server) where you can record "server X has seen this record".

  2. You add a job which copies all pending records for server X to server X and updates the respective column. Write the job in such a way that it can be aborted and restarted at any time (i.e. it must be able to cope with cases where data already exists on the target side).

That way, you can distribute the data to any number of servers in a consistent, fault tolerant way.
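A minimal sketch of that staging-table approach on the SQL Server side, assuming made-up table and column names and an Informix linked server called INFORMIX_LNK for the transfer step:

```sql
-- Staging table with one flag column per target server ("server X has seen this record").
CREATE TABLE dbo.CustomerOutbox (
    customer_id      INT           NOT NULL PRIMARY KEY,
    name             NVARCHAR(100) NOT NULL,
    sent_to_sqlsrv   BIT           NOT NULL DEFAULT 0,
    sent_to_informix BIT           NOT NULL DEFAULT 0
);

-- Job step for the Informix target: copy rows not yet sent, skipping rows that already
-- exist on the target, so the job can be aborted and restarted at any time.
INSERT INTO INFORMIX_LNK.stores_demo.informix.customer (customer_id, name)
SELECT o.customer_id, o.name
FROM dbo.CustomerOutbox AS o
WHERE o.sent_to_informix = 0
  AND NOT EXISTS (SELECT 1
                  FROM INFORMIX_LNK.stores_demo.informix.customer AS t
                  WHERE t.customer_id = o.customer_id);

-- Mark only the rows that are now confirmed to exist on the target.
UPDATE o
SET o.sent_to_informix = 1
FROM dbo.CustomerOutbox AS o
WHERE o.sent_to_informix = 0
  AND EXISTS (SELECT 1
              FROM INFORMIX_LNK.stores_demo.informix.customer AS t
              WHERE t.customer_id = o.customer_id);
```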

Aaron Digulla
  • Interesting comments ..."Brilliant idea, doesn't work in practice" and ..."Write the job in such a way that it can be aborted and restarted at any time". I'm guessing the OP has to make a call: either trust an XA-compliant framework/product and use two-phase commit, or trust his/her own roll-your-own coding skills – Brad Aug 22 '12 at 15:47
  • Actually all commercial implementations of resource managers (SQL Server and Informix being such) and the commonly used transaction coordinators like MSDTC know how to handle outages like this. See [in-doubt xact resolution Server Configuration Option](http://msdn.microsoft.com/en-us/library/ms179586.aspx). Not that I'm advocating DTC, but your explanation of the lost 'commit' scenario is inaccurate. – Remus Rusanu Aug 22 '12 at 16:25
  • @RemusRusanu: The XA community has failed to deliver for decades - I have yet to see a product which can recover gracefully from all conditions without manual intervention. And it's overkill. The solution using an intermediate table where you can note which rows have been transferred is simple to implement and unbreakable. When something goes wrong, an average person can fix it. – Aaron Digulla Aug 22 '12 at 16:28
  • @RemusRusanu: Also note that this site isn't meant for complex explanations. The partial-commit scenario is the one that is easiest to explain in a few sentences. – Aaron Digulla Aug 22 '12 at 16:29
  • It is hard to disagree with that; I'm definitely not a fan of distributed transactions myself. But the scenarios which require manual intervention are imho more complex than losing a commit message. – Remus Rusanu Aug 22 '12 at 16:31
  • Just to be sure: I'm talking about restoring consistency when one resource has committed/finished the transaction and one hasn't. – Aaron Digulla Aug 22 '12 at 16:34
  • I understand. Since both must have acknowledged the prepare (otherwise there wouldn't be a commit), both must guarantee that they are capable of committing *even in the presence of a process shutdown*. Also, the coordinator must guarantee that it can give the result ('committed') to the participants later, when asked, even if it suffers a restart. So a resource manager that lost the commit message can ask the DTC about the result once the outage is over, get the correct answer, and take the appropriate action (i.e. commit). So, at least in theory, this scenario is covered. – Remus Rusanu Aug 22 '12 at 16:41