Messaging patterns help connect services in a loosely coupled manner, meaning that services never talk to each other directly. Instead, a service generates a message and sends it to a broker (generally a queue), and any other service interested in that message can pick it up and process it. We can use a bus or a queue to implement the messaging pattern.
Messaging systems typically guarantee at-least-once delivery, which means a message may be delivered more than once. Our consumers should therefore be designed to handle duplicate deliveries safely.
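A common way to cope with at-least-once delivery is to make the consumer idempotent, for example by tracking the IDs of messages it has already processed. A minimal sketch (the message shape and handler are illustrative, not taken from any particular broker SDK):

```python
# Idempotent consumer sketch: duplicates are detected via a set of
# already-seen message IDs, so a redelivered message has no extra effect.
processed_ids = set()
results = []

def handle(message: dict) -> None:
    if message["id"] in processed_ids:
        return  # duplicate delivery, safely ignored
    processed_ids.add(message["id"])
    results.append(message["body"])

# The broker may deliver the same message twice:
handle({"id": "m1", "body": "create-invoice"})
handle({"id": "m1", "body": "create-invoice"})  # duplicate, ignored
handle({"id": "m2", "body": "send-email"})
```

In a real system the set of processed IDs would live in durable storage shared by all consumer instances, not in process memory.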
A queue is based on the FIFO model: items are ordered, and each item is removed after being dequeued. A bus is based on the pub/sub model: there is no guarantee of ordering, and items are not deleted after consumption, since every subscriber of the bus receives the message.
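The difference can be sketched with two toy classes: a FIFO queue where consuming removes the item, and a bus that hands every published message to all subscribers (illustrative code, not a real broker API):

```python
from collections import deque

class Queue:
    """FIFO: items are ordered and removed when dequeued."""
    def __init__(self):
        self._items = deque()
    def enqueue(self, item):
        self._items.append(item)
    def dequeue(self):
        return self._items.popleft()

class Bus:
    """Pub/sub: every subscriber receives every published message."""
    def __init__(self):
        self._subscribers = []
    def subscribe(self, callback):
        self._subscribers.append(callback)
    def publish(self, message):
        for callback in self._subscribers:
            callback(message)

q = Queue()
q.enqueue("a")
q.enqueue("b")
first = q.dequeue()  # FIFO order, and the item is gone from the queue

received_1, received_2 = [], []
bus = Bus()
bus.subscribe(received_1.append)
bus.subscribe(received_2.append)
bus.publish("hello")  # both subscribers get the message
```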
Azure Service Bus and AWS SQS each combine features of queues and buses, so the labels are not exact: we cannot strictly call Azure Service Bus a "bus", since it supports the queue model as well, and we cannot strictly call AWS SQS a "queue", since its standard queues do not guarantee FIFO ordering.
In a pure queue, if two consumers want to use the same message from a producer, we need a separate queue for each consumer, since a message is deleted after being consumed. Adding a new consumer to the system therefore means creating a new queue and updating the publisher code to send the message to it as well, which violates the Open/Closed Principle. In short, with message queues there is a 1:1 relationship between consumers and queues.
In message buses, unlike queues, messages are published to the bus, and any application that has subscribed to that bus will receive them. This approach lets the system follow the Open/Closed Principle: it is open for extension, since new subscribers can be added, while remaining closed for modification, since the publisher's code does not change.
If we want to design a resilient system in which entities are decoupled and work independently, even when one of them is offline, we can combine queues and a bus using the fan-out model. In this model the producer publishes messages to a bus, and each consuming application creates a queue and subscribes it to that bus. With this separation, the consumer is not bound directly to the producer and can process messages at its own pace. At the same time, we are not limited to one consumer, since each new consumer simply subscribes to the bus.
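The fan-out model can be sketched as a bus that copies each message into one queue per subscriber; each consumer then drains its own queue at its own pace (a toy model, not a specific broker's API):

```python
from collections import deque

class FanOutBus:
    """Bus that fans each published message out to all bound queues."""
    def __init__(self):
        self._queues = {}

    def create_queue(self, name):
        # Each consumer creates its own queue and binds it to the bus.
        self._queues[name] = deque()
        return self._queues[name]

    def publish(self, message):
        # Fan-out: every bound queue receives a copy of the message.
        for q in self._queues.values():
            q.append(message)

bus = FanOutBus()
email_queue = bus.create_queue("email-service")
report_queue = bus.create_queue("report-service")

bus.publish("order-created")
bus.publish("order-paid")

# Each consumer drains its queue independently; one consumer being
# slow or offline does not affect the other's backlog.
first_email_job = email_queue.popleft()
```

Adding a third consumer only requires one more `create_queue` call; the publisher code is untouched, which is exactly the Open/Closed property discussed above.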
Queue-based load levelling pattern
There are times when the load on an application cannot be predicted. Although demand is consistent most of the time, occasional spikes can overload the service, leading to failure, reduced performance, or unavailability. The queue-based load levelling pattern helps in such scenarios: the queue acts as highly available, durable temporary storage that delivers messages to the service at a controlled rate, reducing disruption at the service end. The pattern also avoids unnecessarily scaling up or out by provisioning more instances to meet peak demand, which has a direct impact on cost thanks to predictable resource usage.
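The idea can be illustrated with a burst of requests absorbed by a queue and a worker that drains it at a fixed rate (the rate and names here are illustrative):

```python
from collections import deque

# A burst of 10 requests arrives at once; the service can only
# handle 3 per tick. The queue absorbs the spike.
queue = deque(f"request-{i}" for i in range(10))

SERVICE_RATE = 3  # max requests the service processes per tick
processed_per_tick = []

while queue:
    # Drain at most SERVICE_RATE requests per tick.
    batch = [queue.popleft() for _ in range(min(SERVICE_RATE, len(queue)))]
    processed_per_tick.append(len(batch))

# processed_per_tick == [3, 3, 3, 1]: the burst of 10 is levelled
# into steady work the service can handle without extra instances.
```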
Messages in RabbitMQ are not retained after consumption: a message is stored only until a receiving application acknowledges it, at which point it is removed from the queue. (RabbitMQ can persist messages to disk so they survive a broker restart, but even persisted messages are deleted once acknowledged.)
RabbitMQ natively supports the AMQP protocol, which in principle allows you to replace your RabbitMQ broker with any other AMQP-based broker.
Messages are not published directly to a queue. Instead, the producer sends messages to an exchange. Exchanges are message routing agents responsible for routing messages to different queues with the help of header attributes, bindings, and routing keys. A binding is a link that you set up to bind a queue to an exchange. The routing key is a message attribute the exchange looks at when deciding how to route the message to queues. For example, suppose you create a binding between an exchange and a queue using the binding key pdf_create. If the exchange then receives a message whose routing key is pdf_create, it knows the message should be sent to the queue bound with pdf_create. There are other exchange models, such as the "Topic Exchange", which uses wildcard routes to send messages to queues, and the "Fan-out Exchange", which simply ignores the routing key and publishes messages to all bound queues.
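Direct-exchange routing can be modelled as a mapping from binding keys to queues; the exchange forwards each message to the queues whose binding key matches the message's routing key (a simplified model of the behaviour, not the real RabbitMQ client API):

```python
from collections import defaultdict, deque

class DirectExchange:
    """Routes each message to the queues bound with a matching key."""
    def __init__(self):
        self._bindings = defaultdict(list)  # binding key -> list of queues

    def bind(self, queue, binding_key):
        self._bindings[binding_key].append(queue)

    def publish(self, routing_key, message):
        # Deliver to every queue whose binding key equals the routing
        # key; messages with no matching binding are dropped, as in a
        # real direct exchange.
        for queue in self._bindings[routing_key]:
            queue.append(message)

exchange = DirectExchange()
pdf_queue = deque()
exchange.bind(pdf_queue, "pdf_create")

exchange.publish("pdf_create", {"file": "report.pdf"})
exchange.publish("image_resize", {"file": "logo.png"})  # no binding: dropped
```

A topic exchange would match the routing key against wildcard patterns instead of requiring exact equality, and a fan-out exchange would ignore the key entirely and deliver to every bound queue.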
The message log in Kafka is persistent. Messages stay in the log until the retention period has passed or the log reaches its configured size limit. This means a message is not removed once it is consumed; instead, it can be replayed or consumed multiple times, governed by configurable retention settings.
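Kafka's retention model can be sketched as an append-only log where each consumer keeps its own offset; reading does not remove anything, and rewinding the offset replays old messages (a toy model, not the real Kafka client API):

```python
class Log:
    """Append-only log: messages persist regardless of consumption."""
    def __init__(self):
        self._messages = []

    def append(self, message):
        self._messages.append(message)

    def read(self, offset):
        # Reading is non-destructive: it just returns everything
        # from the given offset onward.
        return self._messages[offset:]

log = Log()
for m in ("m0", "m1", "m2"):
    log.append(m)

consumer_offset = 0
first_read = log.read(consumer_offset)  # all three messages
consumer_offset = 3                     # consumer has caught up

# Rewinding the offset replays old messages; nothing was deleted.
replayed = log.read(0)
```

In real Kafka, each consumer group's offsets are tracked per partition, and a background process deletes old log segments once the retention period or size limit is exceeded.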
Kafka uses its own binary protocol on top of TCP for communication between applications and the cluster, so a Kafka broker cannot simply be swapped for a broker that speaks a different protocol such as AMQP; clients are tied to the Kafka protocol.