Introduction
RabbitMQ 4 has officially been released, and with it, High Availability (HA) via mirroring for classic queues is no longer supported. That leaves us with plain classic queues, while the release notes strongly promote quorum queues as the future-proof alternative. We'll walk through why this matters and how to adapt.
The Era of Classic/Ephemeral Queues
Why We Needed Them
Classic queues with mirroring were the go-to solution for:
- Temporary workloads (RPC patterns, short-lived tasks)
- Simple HA through queue mirroring
- Automatic cleanup via exclusive/auto-delete flags
To better illustrate this, consider the following scenario:
You have dynamic consumers that frequently come and go, connecting to and disconnecting from the RabbitMQ broker. Each consumer needs its own dedicated queue, created automatically when the consumer starts and cleaned up when it stops. This tight coupling between queue and consumer lifecycles is a common pattern for temporary, consumer-specific workloads.
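This pattern boils down to declaring the queue with the auto-delete and exclusive flags set. A minimal sketch of that flag combination (the struct and function names are mine, for illustration; a real declare would pass these values to the client library's `QueueDeclare`):

```go
package main

import "fmt"

// DeclareFlags mirrors the durable/autoDelete/exclusive arguments of an
// AMQP queue declare call. The type is illustrative, not a library type.
type DeclareFlags struct {
	Durable    bool
	AutoDelete bool // queue is deleted when its last consumer unsubscribes
	Exclusive  bool // queue belongs to one connection and dies with it
}

// classicConsumerFlags returns the combination that couples a classic
// queue's lifetime to its consumer.
func classicConsumerFlags() DeclareFlags {
	return DeclareFlags{Durable: false, AutoDelete: true, Exclusive: true}
}

func main() {
	f := classicConsumerFlags()
	fmt.Printf("durable=%v autoDelete=%v exclusive=%v\n",
		f.Durable, f.AutoDelete, f.Exclusive)
}
```

With mirroring gone, it is exactly this auto-delete/exclusive lifecycle that has no direct equivalent on quorum queues, as we'll see below.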
The RabbitMQ 4 Reality Check
With mirrored classic queues deprecated:
- No HA guarantees for existing implementations
- Message loss risk during node failures
- Complex mirror management becomes unsupported
Switch to Quorum Queues
While quorum queues require more system resources than classic queues, they provide significantly better message safety and reliability guarantees. Whether you migrate to quorum queues or do something different, like switching to streams, depends on your use case.
The Catch: Queue Lifecycle Management
Let's assume you have concluded that quorum queues are the best fit and you want to migrate your classic ones. Then there is an important consideration: quorum queues don't support the auto-delete or exclusive flags, so queue lifecycles must be managed explicitly.
Migration Strategy
- No auto-delete - must use an expire policy
- DLX required for message recovery (optional, depending on how tolerant your system is to data loss)
- Dynamic and random queue names
No Auto-Delete → Expire Policy
Quorum queues don't support auto-delete the way classic queues do, so queues would linger forever after consumers disconnect.
The solution is to set an expire policy when declaring the queue.
args := amqp.Table{
	"x-expires":    3600000, // 1 hour in milliseconds
	"x-queue-type": "quorum",
}
Workflow:
- Consumer starts → creates queue item_1692547325_abc123
- Consumer disconnects → queue stays alive for 1 hour
- If no new consumer connects within 1h → queue auto-deletes
- All messages move to the DLX (items_dlx)
DLX for Message Recovery (optional)
DLX stands for Dead Letter Exchange. When a queue is deleted (e.g., due to TTL expiry), its messages would normally be lost. A DLX acts as a safety net by catching these messages.
// Declare your DLX (a durable direct exchange)
channel.ExchangeDeclare("items_dlx", "direct", true, false, false, false, nil)
// Configure the queue to dead-letter into it
args := amqp.Table{
	"x-dead-letter-exchange":    "items_dlx",
	"x-dead-letter-routing-key": "item.event",
}
// Finally, bind new queues to the DLX
channel.QueueBind(newQueueName, "item.event", "items_dlx", false, nil)
Message Flow Example:
- Queue item_1692547325_abc123 expires with 5 messages
- Messages get published to items_dlx with routing key item.event
- New queue item_1692548901_def456 binds to the same routing key
- Messages automatically route to the new queue
Dynamic Queue Names
Traditional static queue names can cause conflicts for various reasons, for example two consumers racing to declare the same queue with incompatible arguments. Generating a unique name per consumer avoids this.
Routing Magic:
All queues use the same routing key pattern. The DLX doesn't care about queue names, only routing keys, so new queues automatically receive messages via their bindings.
Side note: You can also assign unique routing keys per consumer. This way, when a consumer's queue expires, its messages will automatically route to that consumer's new queue through the DLX.
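That per-consumer variant could derive the routing key from a consumer ID, for example by suffixing the item.event key used above (the function name and key pattern are my assumption, not prescribed by RabbitMQ):

```go
package main

import "fmt"

// consumerRoutingKey derives a routing key unique to one consumer,
// e.g. item.event.abc123. Each consumer binds its current queue to
// this key on the DLX, so messages from its expired queue are routed
// only to that consumer's replacement queue.
func consumerRoutingKey(consumerID string) string {
	return fmt.Sprintf("item.event.%s", consumerID)
}

func main() {
	fmt.Println(consumerRoutingKey("abc123"))
}
```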
Wrap up
Classic queues can't guarantee HA anymore. Migrating to quorum queues could be the right option for you; if so, there are a few key adjustments to make along the way.