What Is New in RabbitMQ 4.0
RabbitMQ 4.0 is a major release focused on modernizing the core architecture and improving performance and reliability. The key changes are summarized below.
| Category | Key Changes |
|---|---|
| New Features | Khepri metadata store replaces Mnesia; AMQP 1.0 promoted from a plugin to a core protocol |
| Performance & Improvements | AMQP 1.0 throughput more than double the 3.13.x plugin implementation; more efficient cleanup of transient bindings |
| Breaking Changes | Classic queue mirroring removed; default quorum queue redelivery limit of 20; 16 MiB default message size limit; deprecated configuration keys now prevent node startup |
| Deprecations & Removals | rabbitmq_amqp1_0 plugin is now a no-op; CQv1 removed; several disk I/O metrics removed |
How does the new Khepri metadata store improve reliability?
Khepri replaces Mnesia as the default metadata store, solving long-standing consistency issues. Mnesia stored bindings in separate tables for durable and transient entities, which could lead to "binding not found" errors during node failures under high load. Khepri instead keeps all metadata in a single tree-based data model with consistent semantics, eliminating the cross-table inconsistencies (and the associated lock contention) that Mnesia's design allowed.
In practice, this means cluster operations during node restarts are more predictable. Bindings for durable entities are preserved, and the cleanup of transient bindings is far more efficient. For teams that had to implement workarounds for these Mnesia edge cases, Khepri brings welcome stability.
Upgrade Consideration
If you were running RabbitMQ 3.13 with Khepri enabled experimentally, you cannot perform a direct rolling upgrade to 4.0. You must use a blue/green deployment strategy for migration.
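For a fresh 4.0 cluster, opting in is a feature-flag operation. A minimal sketch, assuming the documented `khepri_db` feature flag name (verify against your version's docs before running against a production node):

```
# Opt in to Khepri on a 4.0 node
rabbitmqctl enable_feature_flag khepri_db

# Confirm the flag is now enabled
rabbitmqctl list_feature_flags
```

Note this is a one-way operation on a given cluster; the blue/green requirement above applies only to clusters that experimented with Khepri on 3.13.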
Why is AMQP 1.0 now a core protocol?
AMQP 1.0 has been promoted from a plugin to a core, always-on protocol. This reflects its maturity and adoption within the ecosystem, particularly for interoperability with other messaging systems like Azure Service Bus and Apache Qpid.
The integration is now significantly more efficient, with benchmarked throughput more than double that of the 3.13.x plugin implementation. The protocol's address format for interacting with AMQP 0-9-1 entities (queues, exchanges) has also been simplified, making it easier to reason about.
The old rabbitmq_amqp1_0 plugin still exists but is now a no-op to simplify upgrades. You should remove it from your enabled_plugins file.
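The simplified address format maps AMQP 1.0 source/target addresses directly onto 0-9-1 entities. A sketch of the v2 address shapes (queue and exchange names are illustrative):

```
/queues/my-queue                        target or source: a specific queue
/exchanges/my-exchange                  target: publish with an empty routing key
/exchanges/my-exchange/my-routing-key   target: publish with a routing key
```

Clients written against the old plugin's address scheme should be reviewed against the 4.0 AMQP 1.0 documentation before upgrading.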
What happened to classic mirrored queues?
Classic queue mirroring has been completely removed after a multi-year deprecation period. Classic queues themselves remain, but they are now strictly a non-replicated queue type.
This matters because any policies using mirroring arguments (ha-mode, ha-sync-mode, etc.) will have no effect on classic queues after the upgrade. They will operate as single-replica queues. For replication, you must now use either quorum queues or streams.
Migration should be planned. The official migration guide provides strategies for moving from mirrored classic queues to quorum queues.
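The practical difference: mirroring was applied by policy, while replication is now a queue *type* chosen at declaration time. A sketch of the contrast, assuming a hypothetical `orders.created` queue and using rabbitmqadmin (syntax worth verifying against your installed version):

```
# Old approach: an ha-* policy (ignored by classic queues in 4.0)
rabbitmqctl set_policy ha-orders "^orders\." '{"ha-mode":"all"}'

# New approach: declare the queue as a quorum queue instead
rabbitmqadmin declare queue name=orders.created durable=true \
  arguments='{"x-queue-type":"quorum"}'
```

Because the queue type cannot be changed on an existing queue, migration generally means declaring new quorum queues and repointing publishers and consumers.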
Why is there a default redelivery limit on quorum queues?
Quorum queues now enforce a default redelivery limit of 20. Messages exceeding this limit are dead-lettered or dropped. This is a protective measure against "poison message" scenarios where a faulty consumer enters an infinite loop of rejecting and re-queuing the same message.
Such loops prevent the Raft log from compacting, which can lead to uncontrolled disk space consumption and cluster instability. The limit forces these problematic messages out of the active queue.
If your application legitimately expects messages to be redelivered more than 20 times, you must configure a dead-letter exchange or increase the limit via a queue policy. You can disable the limit by setting it to -1, but this is strongly discouraged for production systems.
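Both knobs are policy keys on quorum queues. A hedged sketch using `rabbitmqctl set_policy` (the `delivery-limit` and `dead-letter-exchange` keys are documented policy keys; the `jobs.` pattern and `dlx` exchange are hypothetical):

```
# Raise the redelivery limit to 30 and dead-letter over-limit messages
rabbitmqctl set_policy retry-limits "^jobs\." \
  '{"delivery-limit":30,"dead-letter-exchange":"dlx"}' \
  --apply-to quorum_queues
```

Routing over-limit messages to a dead-letter target keeps them available for inspection rather than silently dropping them.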
What are the critical configuration changes before upgrading?
Several deprecated configuration keys are now invalid and will prevent the node from starting. You must clean up your rabbitmq.conf file before upgrading to 4.0.
- Remove `classic_queue.default_version = 1`. CQv1 is gone.
- Remove the `cluster_formation.randomized_startup_delay_range.min` and `.max` settings.
- Replace `rabbitmq_amqp1_0.default_vhost` with the global `default_vhost` setting.
- Replace `mqtt.default_user` and similar with the new `anonymous_login_user` and `anonymous_login_pass` settings.
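Put together, a before/after sketch of the affected `rabbitmq.conf` keys (values are illustrative, not recommendations):

```ini
## Remove: these keys prevent a 4.0 node from starting
# classic_queue.default_version = 1
# cluster_formation.randomized_startup_delay_range.min = 5
# cluster_formation.randomized_startup_delay_range.max = 60
# rabbitmq_amqp1_0.default_vhost = /
# mqtt.default_user = guest

## Replacements
default_vhost = /
anonymous_login_user = guest
anonymous_login_pass = guest
```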
Also, ensure all feature flags from 3.13.x are enabled before attempting the upgrade. There is no direct upgrade path from 3.12.x to 4.0.
FAQ
We use classic queues with mirroring for critical data. What should we do before upgrading to 4.0?
You need to migrate your mirrored classic queues to quorum queues or streams before upgrading. Post-upgrade, classic queues will operate as non-replicated queues only. Plan a migration window using the migration guide, which may involve declaring new quorum queues and moving consumers.
A message in our system might be legitimately retried 30 times due to external dependencies. How do we handle the new quorum queue redelivery limit?
Configure a dead-letter policy for the queue to route over-limit messages to another location (a queue or stream) for inspection. Alternatively, you can increase the limit via a policy, but first investigate why so many redeliveries are needed. The default limit is a safeguard, and routinely hitting it often indicates a design issue.
We publish large messages (up to 100MB). What does the new 16 MiB default message size limit mean for us?
After upgrading, publishing messages larger than 16 MiB will fail unless you explicitly increase the max_message_size setting in rabbitmq.conf. However, the recommendation is to avoid multi-megabyte messages in RabbitMQ. Consider using a pattern where the message contains a reference (like an object store ID) to the large payload instead.
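If you must raise the limit while you migrate to a reference-passing design, it is a single `rabbitmq.conf` key taking a value in bytes (the 128 MiB figure below is illustrative only):

```ini
# Default in 4.0 is 16 MiB; raise only as a stopgap while large
# payloads are moved out of the broker
max_message_size = 134217728
```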
Our monitoring dashboard tracks the removed disk I/O metrics. What should we use now?
Those metrics were removed because they were often misleading and better monitored at the OS level. You should shift to using infrastructure-level monitoring tools (like those provided by your cloud vendor or iostat) to track disk I/O, latency, and utilization for the RabbitMQ data directory.
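For example, on Linux with the sysstat package installed, device-level I/O statistics can be sampled directly:

```
# Extended per-device I/O stats every 5 seconds
iostat -dx 5
```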
We use Shovel with TLS to connect to a remote broker. Are there any changes?
Yes. Starting with Erlang 26 (which RabbitMQ 4.0 requires), TLS peer certificate verification is enabled by default for Shovel, Federation, and LDAP connections. If your remote broker uses a self-signed certificate or a certificate from a private CA, you may need to configure Shovel to disable verification or provide the appropriate CA certificate. Check the Shovel TLS documentation for the new parameters.
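TLS settings for a Shovel URI are expressed as query parameters. A sketch, assuming the documented URI query parameters (`cacertfile`, `verify`, `server_name_indication`) and an illustrative host and CA path:

```
# Verify the peer against a private CA
amqps://user:pass@remote-host:5671?cacertfile=/etc/rabbitmq/ca.pem&verify=verify_peer&server_name_indication=remote-host

# Explicitly disable verification (not recommended outside testing)
amqps://user:pass@remote-host:5671?verify=verify_none
```

Prefer supplying the CA certificate over disabling verification, since `verify_none` reintroduces the exposure the Erlang 26 default was meant to close.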