6

I'm using a SQL Server 2008 R2 database as a queuing mechanism. I add items to the table, and an external service reads and processes these items. This works great, but is missing one thing - I need a mechanism whereby I can attempt to select a single row from the table and, if there isn't one, block until there is (preferably for a specific period of time).

Can anyone advise on how I might achieve this?

Martin Smith
Barguast
  • question has been asked many times on SO; please search SO – Mitch Wheat Jan 14 '11 at 00:02
    Good link on general area. http://rusanu.com/2010/03/26/using-tables-as-queues/ Edit: Not sure that it answers your specific question about blocking until items are added to the queue though. Also I'm not sure I've seen that specific question before on SO. – Martin Smith Jan 14 '11 at 00:04
    You know that SQL Server has *actual* queues, right? http://msdn.microsoft.com/en-us/library/ms345108(v=sql.90).aspx –  Jan 14 '11 at 00:10
    @Will See "Why not use built-in Queues?" in the link above. – Martin Smith Jan 14 '11 at 00:12
  • @Mitch. Using SQL Server as a queue, perhaps, but I don't think my specific question - about waiting for data during a retrieval operation - has been asked. Certainly not frequently. – Barguast Jan 14 '11 at 01:12
  • I've looked at the service broker, but it appears to be a little overkill. That, and I have some requirements (distributing single items to multiple queues in specific circumstances being one) which felt a little easier to achieve with a custom solution. – Barguast Jan 14 '11 at 01:14

5 Answers

9

The only way to achieve a non-polling blocking dequeue is WAITFOR (RECEIVE), which implies Service Broker queues, with all the added overhead.
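
For reference, a minimal sketch of that blocking receive, assuming a Service Broker queue named dbo.WorkItemQueue (the queue name and the five-second timeout are illustrative):

    -- Blocks until a message arrives on the Service Broker queue, or returns an
    -- empty result set after 5 seconds (TIMEOUT is in milliseconds).
    WAITFOR (
        RECEIVE TOP (1)
            conversation_handle,
            message_type_name,
            message_body
        FROM dbo.WorkItemQueue      -- illustrative queue name
    ), TIMEOUT 5000;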

If you're using ordinary tables as queues you will not be able to achieve non-polling blocking. You must poll the queue by asking for a dequeue operation, and if it returns nothing, sleep and try again later.

I'm afraid I'm going to disagree with Andomar here: while his answer works for the generic question 'are there any rows in the table?', when it comes to queueing, due to the busy nature of overlapping enqueue/dequeue, checking for rows like this is an (almost) guaranteed deadlock under load. When it comes to using tables as queues, one must always stick to the basic enqueue/dequeue operations and not try fancy stuff.

Remus Rusanu
  • We use regular SQL tables as queues, under heavy loads (200K+/hr), and have never experienced a deadlock. The very nature of a queue structure supports this: you insert at one end (i.e. the end of the table) and dequeue from the other (i.e. the top of the table via SELECT TOP 1). In real-world usage it's more complicated when things are transactional, but this is the gist of it. – Didaxis Aug 22 '14 at 15:06
7

"since SQL Server 2005 introduced the OUTPUT clause, using tables as queues is no longer a hard problem". A great post on how to do this.

http://rusanu.com/2010/03/26/using-tables-as-queues/
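
For a flavour of the pattern the article describes, here is a minimal dequeue sketch; the table and column names (dbo.QueueTable, Id, Payload) are made up, and note the comment below about possibly needing UPDLOCK as well:

    -- Atomically claims and removes the oldest row; READPAST lets concurrent
    -- readers skip rows that are already locked by another dequeue.
    WITH q AS (
        SELECT TOP (1) Id, Payload
        FROM dbo.QueueTable WITH (ROWLOCK, READPAST)
        ORDER BY Id
    )
    DELETE FROM q
    OUTPUT deleted.Id, deleted.Payload;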

mcintyre321
    That's a great article, although according to http://stackoverflow.com/a/940001/8479 the locking hints aren't quite correct - e.g. need UPDLOCK – Rory Jul 25 '13 at 10:41
2

I need a mechanism whereby I can attempt to select a single row from the table and, if there isn't one, block until there is (preferably for a specific period of time).

You can loop and check for new rows every second:

while not exists (select * from QueueTable)
begin
    waitfor delay '00:00:01'
end

Disclaimer: this is not code I would use for a production system, but it does what you ask.

Andomar
  • Thanks. This is essentially the workaround I'm using at the moment but as you suggest it's not really the best solution. Specifically, I'd prefer something that doesn't involve looping - some kind of wait operation which either times out, or is kicked into action by a new row added to the table (via a trigger?) – Barguast Jan 14 '11 at 01:17
1

The previous commenter who suggested using Service Broker likely had the best answer. Service Broker allows you to essentially block while waiting for more input.

If Service Broker is overkill, you should consider a different approach to your problem. Can you provide more details of what you're trying to do?

RMD
  • Essentially, I'm looking for the best way of passing work items between distributed services (currently on the same machine but, in future, they may be on different machines on the same LAN). In some cases I need the same work item to be sent to two different queues, hence the reason I originally chose to attempt my own implementation. Any advice would be welcomed. At the moment I'm considering a combination of service broker, along with a set of stored procedures that'll handle the distribution to multiple queues. – Barguast Jan 14 '11 at 09:52
  • If the nature of these work items is asynchronous, you should consider something like MSMQ. MSMQ can provide reliable message delivery across service/machine boundaries, including things like transmission retries. But if you want to stay "inside" SQL Server, Service Broker is exactly what you're looking for. Each service/machine would host a copy of SQL Server Express (or better) which would act as the local queue. It would then manage the process of transmitting your messages to remote services. It even manages the notifications on your end of message receipt. It supports multicasting too. – RMD Jan 14 '11 at 16:30
0

Let me share my experiences with you in this area; you may find them helpful.

My team first used MSMQ transactional queues to feed our asynchronous services (be they IIS-hosted or WAS). The biggest problem we encountered was MS DTC issues under heavy load, around 100+ messages/second; all it took was one slow database operation somewhere to start causing timeout exceptions, and MS DTC would bring the house down, so to speak (transactions would actually become lost if things got bad enough). Although we're not 100% certain of the root cause to this day, we do suspect MS DTC in a clustered environment has some serious issues.

Because of this, we started looking into different solutions. Service Bus for Windows Server (the on-premise version of Azure Service Bus) looked promising, but it was non-transactional, so it didn't suit our requirements.

We finally decided on the roll-your-own approach, an approach suggested to us by the guys who built the Azure Service Bus, because of our transactional requirements. Essentially, we followed the Azure Worker Role model: a worker role that is fed via some queue, in a polling-blocking model.

Honestly, this has been far better for us than anything else we've used. The pseudocode for such a service is:

hasMsg = true

while(true)

    if(!hasMsg)
        sleep

    msg = GetNextMessage

    if(msg == null)
        hasMsg = false
    else
        hasMsg = true
        Process(msg)

We've found that CPU usage is significantly lower this way (lower than traditional WCF services).

The tricky part, of course, is handling transactions. If you'd like to have multiple instances of your service read from the queue, you'll need to employ READPAST/UPDLOCK hints in your SQL, and also have your .NET service enlist in the transaction in a way that will roll back should the service fail. In this case, you'll want to go with retry/poison queues as tables in addition to your regular queues.
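
Purely for illustration, the dequeue side of that tends to look something like the sketch below; the table and column names (dbo.WorkQueue, Id, Payload) are made up, and the transaction enlistment, retry and poison-queue handling described above live in the .NET client and SQL CLR pieces rather than in this snippet:

    -- Inside a transaction, claim one row so other workers skip it (READPAST)
    -- and cannot claim it at the same time (UPDLOCK).
    BEGIN TRANSACTION;

    DECLARE @Id INT, @Payload NVARCHAR(MAX);

    SELECT TOP (1) @Id = Id, @Payload = Payload
    FROM dbo.WorkQueue WITH (UPDLOCK, READPAST, ROWLOCK)
    ORDER BY Id;

    -- The service processes @Payload here, enlisted in the same transaction.

    DELETE FROM dbo.WorkQueue WHERE Id = @Id;
    COMMIT TRANSACTION;
    -- On failure: ROLLBACK, then move the message to a retry/poison table.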

Didaxis
  • Is this pseudo code for your app layer? How do you implement GetNextMessage so multiple workers can work on separate messages at the same time? Using row locks? Any more detail would be appreciated. – David Roussel Aug 15 '14 at 15:45
  • @David, my last paragraph in my example explains how this is accomplished. It is a little more involved however, as the actual queue operations need to be handled specially (i.e., if the transaction should be rolled back, we still need the message to move to the retry/poison queues as applicable). GetNextMessage is trivial, the work is getting SQL to do what we want here. In a nutshell, a queue client class was used that handles enlisting in the transaction, and a SQL CLR was used on the SQL side that allows enlisting in the .NET transaction. – Didaxis Aug 22 '14 at 14:51