It’s a little-known fact that Santa Claus was an early queue innovator. Faced with the problem of delivering a planet full of presents in one night, Santa, in his hacker’s workshop, created a Present Distribution System using thousands of region-based priority present queues for continuous delivery by the Rudolphs. Rudolphs? You didn’t think there was only one Rudolph, did you? Presents are delivered in parallel by a cluster of sleighs, each with redundant reindeer in a master-master configuration. Each Rudolph is a cluster leader, and they coordinate work using an early and more magical version of the ZooKeeper protocol.
Programmers have followed Santa’s lead, and you can find a message queue in nearly every major architecture profile on HighScalability. Historically, queues were often introduced when a first-generation architecture needed to scale up from a two-tier system into something a little more capable (asynchronicity, work dispatch, load buffering, database offloading, etc.). If there’s anything like a standard structural component for software, like an arch or beam in architecture, it’s the message queue.
- Decoupling. Producers and consumers are independent and can evolve and innovate separately at their own rate.
- Redundancy. Queues can persist messages until they are fully processed.
- Scalability. Scaling is achieved simply by adding more queue processors.
- Elasticity & Spikability. Queues soak up load until more resources can be brought online.
- Resiliency. Decoupling implies that failures are not linked. Messages can still be queued even if there’s a problem on the consumer side.
- Delivery Guarantees. Queues make sure a message will be consumed eventually, and can even implement higher-level properties like at-most-once delivery.
- Ordering Guarantees. Coupled with publish-and-subscribe mechanisms, queues can be used to provide message ordering guarantees to consumers.
- Buffering. A queue acts as a buffer between writers and readers. Writers can write faster than readers can read, which helps control the flow of processing through the entire system.
- Understanding Data Flow. By looking at the rate at which messages are processed you can identify areas where performance may be improved.
- Asynchronous Communication. Writers and readers are independent of each other, so writers can just fire and forget while readers process work at their own leisure.
Here’s a bonus use from praptak in a Hacker News thread:
- Punch through. The ability to route through very restrictive network setups (“galvanically isolated” networks). MQs, being latency insensitive, can even go over protocols like e-mail.
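Several of the properties above (decoupling, buffering, ordering, asynchronous communication) can be seen in miniature with an in-process producer/consumer pair. This is just a sketch using Python's standard `queue` module as a stand-in for a real message broker; the `run_pipeline` function and its upper-casing "work" are illustrative inventions, not any particular product's API.

```python
import queue
import threading

def run_pipeline(messages, maxsize=4):
    # A bounded queue acts as the buffer between writer and reader.
    q = queue.Queue(maxsize=maxsize)
    results = []

    def producer():
        for msg in messages:
            q.put(msg)    # fire and forget; blocks only if the buffer is full
        q.put(None)       # sentinel: signals there is no more work

    def consumer():
        while True:
            msg = q.get()  # the reader drains work at its own pace
            if msg is None:
                break
            results.append(msg.upper())  # stand-in for real processing

    # Producer and consumer run independently of each other (decoupling).
    t_prod = threading.Thread(target=producer)
    t_cons = threading.Thread(target=consumer)
    t_prod.start(); t_cons.start()
    t_prod.join(); t_cons.join()
    return results

print(run_pipeline(["order-1", "order-2", "order-3"]))
# → ['ORDER-1', 'ORDER-2', 'ORDER-3']
```

With a single consumer draining a FIFO queue, processing order matches publish order, which is the simplest form of the ordering guarantee described above; scaling out means adding more consumer threads (or processes) against the same queue.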