
I've just read this article, and I'm confused.

Let's imagine a webapp and a distinct application acting as a "worker", both sharing the same database.

Oh, I said "sharing"... but what does the article warn about?

Fourthly, sharing a database between applications (or services) is a bad thing. It’s just too tempting to put amorphous shared state in there and before you know it you’ll have a hugely coupled monster.

=> I disagree. There are cases where distinct applications are still part of the same unit, and in those cases the notion of a "coupling issue" makes no sense.

Let's continue: the webapp handles client HTTP requests and may, at any time, update some aggregates (a DDD term), generating the corresponding domain events.
The goal of the worker would be to handle those domain events by running the required jobs.

The point is:

How should event data be passed to the worker?

The first solution, the one the article promotes, would be to use RabbitMQ, a great message-oriented middleware.

The workflow would be simple:

Any time the web dyno generates an event, it publishes it through RabbitMQ, which feeds the worker.
The drawback is that nothing guarantees immediate consistency between the commit of the aggregate update and the publishing of the event, not to mention the potential sending failures or hardware issues; that is another main issue.

Example: an event could be published even though the aggregate update failed, resulting in an event that misrepresents the domain model.
You could argue that global XA (two-phase commit) exists, but it's not a solution that fits all databases or middleware.
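To make this failure mode concrete, here is a minimal sketch (my own illustration, not from the article): an in-memory `broker` list stands in for RabbitMQ, and the table names are invented. The event is published first, then the database transaction fails before committing:

```python
import sqlite3

broker = []  # stands in for RabbitMQ: anything appended here reaches the worker

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE aggregate (id INTEGER PRIMARY KEY, state TEXT)")
db.execute("INSERT INTO aggregate VALUES (1, 'initial')")
db.commit()

try:
    db.execute("UPDATE aggregate SET state = 'updated' WHERE id = 1")
    broker.append({"event": "AggregateUpdated", "id": 1})  # published first...
    raise RuntimeError("crash before commit")              # ...then the commit never happens
except RuntimeError:
    db.rollback()

# The worker now holds an event describing a change that was never committed.
state = db.execute("SELECT state FROM aggregate WHERE id = 1").fetchone()[0]
print(state, len(broker))  # initial 1
```

The database still says `initial`, yet the broker has already delivered an `AggregateUpdated` event: exactly the false representation described above.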

So what could be a good solution to ensure this immediate consistency?
IMO: store the event in the database, in the same local transaction as the aggregate update.
A simple asynchronous scheduler would then be responsible for querying the database for unpublished events and sending them to RabbitMQ, which in turn feeds the worker.
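This is what is commonly called the transactional-outbox pattern. A minimal sketch (table and column names are my own, with SQLite standing in for the shared database):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE aggregate (id INTEGER PRIMARY KEY, state TEXT)")
db.execute("""CREATE TABLE outbox (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    payload TEXT NOT NULL,
    published INTEGER NOT NULL DEFAULT 0)""")
db.execute("INSERT INTO aggregate VALUES (1, 'initial')")
db.commit()

# The aggregate update and the event insert share one local transaction:
# either both are committed or neither is.
with db:
    db.execute("UPDATE aggregate SET state = 'updated' WHERE id = 1")
    db.execute("INSERT INTO outbox (payload) VALUES (?)",
               (json.dumps({"event": "AggregateUpdated", "id": 1}),))

# A scheduler (or the worker itself) later polls for unpublished events:
pending = db.execute("SELECT payload FROM outbox WHERE published = 0").fetchall()
print(len(pending))  # 1
```

Because the event row only exists if the aggregate update committed, the consumer can never see an event that misrepresents the domain model.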

But why would we need an extra scheduler on the webapp side, and for that matter, why would we need RabbitMQ at all in this case?

With this solution, RabbitMQ appears unnecessary, especially since the database is shared.
Indeed, in either case, we saw that immediate consistency involves polling the database.
So why shouldn't the worker be responsible for this polling directly?
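If the worker owns the polling, its loop could look like this sketch (reusing the hypothetical `outbox` table from above; the `handle` callback and the demo payloads are placeholders):

```python
import sqlite3

def poll_once(db, handle):
    """Fetch unprocessed outbox events, process them, mark them done."""
    rows = db.execute(
        "SELECT id, payload FROM outbox WHERE published = 0 ORDER BY id"
    ).fetchall()
    for event_id, payload in rows:
        handle(payload)  # process the domain event (run the job)
        # Mark done only after successful processing: at-least-once delivery.
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (event_id,))
    db.commit()
    return len(rows)

# Demo setup
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, "
           "payload TEXT, published INTEGER DEFAULT 0)")
db.executemany("INSERT INTO outbox (payload) VALUES (?)", [("e1",), ("e2",)])
db.commit()

handled = []
print(poll_once(db, handled.append))  # 2
print(poll_once(db, handled.append))  # 0  (nothing left to process)
```

In a real worker, `poll_once` would run in a loop with a sleep interval, which is exactly the polling cost the article objects to.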

Therefore, I wonder why so many articles on the web harshly criticize database queuing while promoting message-oriented middleware.

An excerpt from the article:

Simple, use the right tool for the job: this scenario is crying out for a messaging system. It solves all the problems described above; no more polling, efficient message delivery, no need to clear completed messages from queues, and no shared state.

And what about immediate consistency? Is it simply ignored?

To sum up, it really seems that in either case, shared database or not, we need database polling.

Did I miss some critical notions?

Thanks

  • Polling is sort of a red herring, because almost all of the major databases have some mechanism for asynchronously notifying some other process that it's time to pull some work out of a table.
    – Blrfl
    Commented Mar 6, 2014 at 0:51
  • meta.programmers.stackexchange.com/questions/6417/…
    – gnat
    Commented Mar 6, 2014 at 5:41

1 Answer


If you are building a simple application with low traffic, there is something to be said for keeping another component out of your system. Not using a message bus may very well be the right answer for you. However, I would suggest building your system in a way that lets you swap out the database-backed queue for a middleware solution later. I agree with the article: a database is not the right tool for a queue-based system, but it may be good enough for you.

Queue-based systems like RabbitMQ are built for massive scale on moderate hardware. Their architecture achieves this by avoiding the processes that make ACID-compliant database systems slow by nature. Since a message bus only needs to ensure that a message is stored and successfully processed, it doesn't need to bother with locking and writing transaction logs. Both of those are absolutely required for an ACID system, but they are often a cause of contention.

Performance-wise, it comes down to this: you have an SQL table with lots of reads and lots of writes. Both require some sort of locking to update rows, pages, and indexes. Your polling mechanism is constantly locking an index to do lookups on it. That prevents writes from happening; at best, they get queued. The code doing the processing also takes locks to update the status of queue entries as they complete or fail. Yes, you can pile query optimization on top of query optimization to make this work, or you can use a system specifically designed for the workload you are asking for. RabbitMQ eats up this type of workload without breaking a sweat; on top of that, you save your database from the load, giving it more room to scale for other things.
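To make the contention point concrete, here is a sketch of one common mitigation (my own illustration, not from this answer): claiming a job with a single atomic UPDATE instead of a SELECT followed by an UPDATE, so two workers cannot grab the same row. Table and column names are invented, and SQLite stands in for the database:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, "
           "payload TEXT, status TEXT DEFAULT 'pending')")
db.execute("INSERT INTO queue (payload) VALUES ('job-1')")
db.commit()

def claim(db, worker_id):
    """Atomically claim one pending job; True only if this worker won it."""
    cur = db.execute(
        "UPDATE queue SET status = ? WHERE id = "
        "(SELECT id FROM queue WHERE status = 'pending' LIMIT 1) "
        "AND status = 'pending'",
        (worker_id,))
    db.commit()
    return cur.rowcount == 1

a = claim(db, "worker-a")
b = claim(db, "worker-b")
print(a, b)  # True False
```

Databases like PostgreSQL offer `SELECT ... FOR UPDATE SKIP LOCKED` for the same purpose. Either way, every claim is still a locking write against an indexed table, which is exactly the contention a dedicated broker avoids.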

Another thing to consider: most queue systems do not use a polling technique (some allow HTTP, but recommend avoiding it on the receive side). RabbitMQ uses network protocols specifically designed for message buses, such as AMQP.

Edit: adding a use case.

The way I have used Rabbit: I had an API endpoint that accepts a change to a heavily used database table. This table is under constant contention and at times cannot save a change from the API in a timely fashion. What I do instead is write the change request to a queue and have a service that handles those messages as it is able. If database contention occurs, the queue simply grows and message processing is delayed. Processing time is typically down in the 14 ms range, but in times of high contention it goes up to 2-3 seconds.

  • How could you handle immediate consistency in this case? If the message is published but, right after, the transaction responsible for updating the domain model rolls back, the middleware would be totally unaware and would process the event anyway.
    – Mik378
    Commented Mar 6, 2014 at 0:05
  • You wrote: "it doesn't need to bother with locking". But surely there is some kind of locking involved to ensure the ascending (time) order of events routed toward the worker, no?
    – Mik378
    Commented Mar 6, 2014 at 0:10
  • @Mik378 Take a look at this article on message idempotency. Yes, technically you lose some promise of consistency, but I bet you'll find that what you gain in application uptime and performance is well worth it. It is also fairly easy to change the way you process messages to make the losses pretty painless. Commented Mar 6, 2014 at 0:13
  • Yes, you would need locking to guarantee ordering. Some queue systems can provide this at the price of performance. If you can accept that operations will sometimes happen out of order and figure out a way to handle that on the processor side, you will gain exponentially from a performance standpoint. Commented Mar 6, 2014 at 0:18
  • @Mik378 I added a use case to my answer. I hope it helps! Commented Mar 6, 2014 at 0:27
