I am putting together a reference architecture for a distributed event-based system in which events are stored in a SQL Azure database as plain old tables (no SQL Server Service Broker).
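To make the question concrete, this is roughly what I mean by a plain-old-table queue. The table and column names are illustrative only, not a final design:

```sql
-- Minimal event-queue table (illustrative names)
CREATE TABLE dbo.EventQueue
(
    EventId    BIGINT IDENTITY(1,1) NOT NULL,
    EnqueuedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Payload    NVARCHAR(MAX) NOT NULL,  -- serialized event message
    CONSTRAINT PK_EventQueue PRIMARY KEY CLUSTERED (EventId)
);
```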
Events will be processed by Worker Roles that poll the queue for new event messages.
In my research, I see a number of solutions that allow multiple processors to pull messages from the queue. The problem I have with many of these patterns is the added complexity of managing locking, etc., when multiple processes are competing for access to a single message queue.
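For reference, this is the kind of locking plumbing I mean: the usual competing-consumer dequeue against the single table sketched above, where every processor needs ROWLOCK/READPAST hints to claim the next row without blocking or double-processing. This is just a sketch of the common pattern, not code I've committed to:

```sql
-- Destructive dequeue: atomically claim and remove the oldest available event.
-- READPAST skips rows currently locked by other processors;
-- ROWLOCK keeps the lock footprint narrow.
WITH NextEvent AS
(
    SELECT TOP (1) EventId, Payload
    FROM dbo.EventQueue WITH (ROWLOCK, READPAST)
    ORDER BY EventId
)
DELETE FROM NextEvent
OUTPUT deleted.EventId, deleted.Payload;
```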
I understand that the traditional queue pattern is to have multiple processors pulling from a single queue. However, assuming that event messages can be processed in any order, is there any reason not to create a one-to-one relationship between a queue and its processor and simply load-balance across the queues?
queue_1 => processor_1
queue_2 => processor_2
This implementation avoids all of the plumbing necessary to manage concurrent access to the queue across multiple processors. The event publisher can use any load-balancing algorithm to decide which queue to publish messages to.
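Here is a rough sketch of what I'm proposing. I've used a QueueId column rather than physically separate tables, but separate tables per queue would work the same way; all names and parameter values are illustrative:

```sql
CREATE TABLE dbo.EventQueuePartitioned
(
    QueueId    INT           NOT NULL,   -- which processor owns this row
    EventId    BIGINT IDENTITY(1,1) NOT NULL,
    Payload    NVARCHAR(MAX) NOT NULL,
    CONSTRAINT PK_EventQueuePartitioned PRIMARY KEY CLUSTERED (QueueId, EventId)
);

-- Illustrative values; in practice these come from publisher/worker configuration
DECLARE @ProcessorCount INT = 2;
DECLARE @MessageKey NVARCHAR(100) = N'order-12345';
DECLARE @Payload NVARCHAR(MAX) = N'{"eventType":"OrderCreated"}';
DECLARE @MyQueueId INT = 0;

-- Publisher: pick a queue with a simple hash/modulo (any load-balancing scheme would do)
INSERT INTO dbo.EventQueuePartitioned (QueueId, Payload)
VALUES (ABS(CHECKSUM(@MessageKey)) % @ProcessorCount, @Payload);

-- Processor N: reads only its own queue, so no READPAST-style concurrency hints are needed
WITH NextEvent AS
(
    SELECT TOP (1) EventId, Payload
    FROM dbo.EventQueuePartitioned
    WHERE QueueId = @MyQueueId
    ORDER BY EventId
)
DELETE FROM NextEvent
OUTPUT deleted.EventId, deleted.Payload;
```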
The fact that I don't see this sort of implementation anywhere in my searches makes me think I'm overlooking a major deficiency in this design.
Edit
This post has triggered a debate over using database tables as queues vs. MSMQ, Azure Queues, etc. I understand that there are a number of native queuing options available to me, including Durable Message Buffers in Azure AppFabric. I've evaluated my options and determined that SQL Azure tables will be sufficient. The intent of my question is to compare multiple processors working against a single queue vs. one processor per queue.