What Is New in RabbitMQ 4.2
RabbitMQ 4.2.0 introduces significant new features, performance improvements, and essential fixes. The following table summarizes the key changes.
| Category | Key Changes |
|---|---|
| Breaking Changes | AMQP 1.0 messages that omit the header section are now treated as non-durable, per the AMQP 1.0 specification |
| New Features | SQL-like filter expressions for streams over AMQP 1.0; Khepri as the default metadata store for new clusters; message interceptors; the `local` shovel protocol |
| Improvements | Gradual quorum queue leadership transfer in large clusters; enhanced blue-green migration tooling in `rabbitmqadmin` v2 |
| Bug Fixes | Quorum queue message republishing during network partitions; Raft log segment file accumulation; memory exhaustion when enabling `khepri_db` with the Log Exchange enabled |
How does server-side filtering work with RabbitMQ streams?
RabbitMQ 4.2 adds SQL-like filter expressions for streams consumed via AMQP 1.0. This lets the broker evaluate filters and only deliver matching messages to the consumer.
You can filter on message headers, properties, and application properties using comparison, logical, and arithmetic operators. This reduces network traffic and client-side processing overhead significantly for selective consumers.
Example Filter Expression

```
order_type IN ('premium', 'express') AND
(customer_region LIKE 'EU-%' OR customer_region = 'US-CA') AND
UTC() < properties.absolute-expiry-time AND
NOT cancelled
```
This is more powerful than the property filter expressions introduced in 4.1. It's ideal for use cases like prioritizing certain orders or filtering events by region and expiry.
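To make the semantics concrete, here is a plain-Python sketch of the check the broker performs for the example expression above. The dict-based message shape and field names are illustrative only, not a client API; the real evaluation happens broker-side against AMQP 1.0 message sections.

```python
import time

def matches(msg: dict) -> bool:
    """Approximate the broker-side evaluation of the example filter.
    `msg` is an illustrative dict, not a real AMQP 1.0 message object."""
    app = msg["application_properties"]
    props = msg["properties"]
    # order_type IN ('premium', 'express')
    order_ok = app.get("order_type") in ("premium", "express")
    # customer_region LIKE 'EU-%' OR customer_region = 'US-CA'
    region = app.get("customer_region", "")
    region_ok = region.startswith("EU-") or region == "US-CA"
    # UTC() < properties.absolute-expiry-time (message not yet expired)
    not_expired = time.time() * 1000 < props["absolute_expiry_time"]
    # NOT cancelled
    not_cancelled = not app.get("cancelled", False)
    return order_ok and region_ok and not_expired and not_cancelled

msg = {
    "application_properties": {
        "order_type": "premium",
        "customer_region": "EU-DE",
        "cancelled": False,
    },
    "properties": {"absolute_expiry_time": (time.time() + 3600) * 1000},
}
print(matches(msg))  # True: premium EU order, unexpired, not cancelled
```

Because this logic runs on the broker, a consumer subscribed with such a filter never receives the messages that fail it.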
What should AMQP 1.0 client libraries do about message durability?
Starting with 4.2, if an AMQP 1.0 client omits the header section, RabbitMQ will assume the durable field is false, aligning with the AMQP 1.0 specification.
This is a breaking change. To mark a message as durable, client libraries must now explicitly set the durable field in the header section to true. The RabbitMQ team recommends that client libraries send messages as durable by default, and all official RabbitMQ client libraries already do this.
In practice, if you rely on durable messages for persistence, verify your AMQP 1.0 client library's behavior after upgrading. Non-durable messages might be lost on broker restart.
Why is Khepri now the default, and what does it mean for my cluster?
Khepri, a Raft-based metadata store, is now the default for new RabbitMQ 4.2 clusters. It provides stronger consistency guarantees during network issues and can improve performance for certain operations.
Existing clusters upgrading to 4.2 will continue using their current store (Mnesia or Khepri). However, enabling Khepri is recommended for its benefits. You can enable it with `rabbitmqctl enable_feature_flag khepri_db`.
Mnesia support will be removed in a future major version. If you haven't tested with Khepri yet, now is the time. Be aware that enabling the khepri_db feature flag while the Log Exchange is enabled could previously cause memory exhaustion; this has been fixed in 4.2.
What are message interceptors and what can I use them for?
Message interceptors are a new plugin mechanism that allows you to hook into the message flow for AMQP 0-9-1, AMQP 1.0, MQTTv3, and MQTTv5 protocols. They can inspect or modify messages as they enter or leave the broker.
Two built-in interceptors are included: one adds timestamps to outgoing messages, and another sets the client ID for publishing MQTT clients. You can develop custom interceptors for tasks like validation, annotation, auditing, or custom routing logic.
This feature opens up possibilities for cross-cutting concerns without modifying application code. Think of it as middleware for your message broker.
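Actual interceptors are implemented as Erlang modules plugged into the broker, but the middleware idea is easy to illustrate. The Python sketch below is purely conceptual: the `Message` dict, the chain runner, and the timestamp interceptor are hypothetical stand-ins for the broker-side mechanism.

```python
import time
from typing import Callable

Message = dict
Interceptor = Callable[[Message], Message]

def timestamp_interceptor(msg: Message) -> Message:
    """Conceptual analogue of the built-in interceptor that
    timestamps messages: annotate with a millisecond timestamp."""
    msg.setdefault("annotations", {})["timestamp"] = int(time.time() * 1000)
    return msg

def run_chain(msg: Message, chain: list[Interceptor]) -> Message:
    # Each interceptor sees the message after the previous one ran,
    # mirroring middleware-style composition.
    for intercept in chain:
        msg = intercept(msg)
    return msg

msg = run_chain({"body": b"hello"}, [timestamp_interceptor])
print("timestamp" in msg["annotations"])  # True
```

A custom broker-side interceptor for validation or auditing would slot into the chain the same way, seeing every message regardless of which protocol delivered it.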
When should I use the new 'local' shovel protocol?
The new local shovel protocol is designed for high-throughput, low-latency data movement within a single RabbitMQ cluster. It uses intra-cluster connections and internal APIs instead of separate TCP connections.
Use it when you need to move messages between queues or exchanges in the same cluster with maximum efficiency. It's not for cross-cluster communication. For that, stick with AMQP 0-9-1 or AMQP 1.0 shovels.
This matters because it reduces connection overhead and can handle credit flow more efficiently, leading to higher throughput compared to standard shovel protocols for internal workflows.
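A dynamic shovel using the new protocol would be declared like any other, with the protocol fields switched. The snippet below builds the JSON definition; the key names follow the established dynamic-shovel style (`src-protocol`, `src-queue`, ...), and `"local"` as the protocol value is an assumption here; check the shovel plugin documentation for your exact version before relying on it.

```python
import json

# Hypothetical dynamic shovel definition using the new local protocol.
# Key names mirror existing dynamic-shovel parameters; the value
# "local" is assumed from the feature name, not verified syntax.
shovel = {
    "src-protocol": "local",
    "src-queue": "ingest",
    "dest-protocol": "local",
    "dest-queue": "processed",
}
print(json.dumps(shovel))
# Apply with e.g.:
#   rabbitmqctl set_parameter shovel my-local-shovel '<json above>'
```

Because both ends resolve inside the cluster, no extra TCP connection or external protocol negotiation is involved.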
What performance and stability improvements were made for quorum queues?
Several key enhancements improve quorum queue behavior in large-scale or stressed environments.
- Gradual Leadership Transfer: In clusters with tens of thousands of quorum queues, leader elections now happen gradually. This prevents timeouts and queues being left without a leader.
- Partition Handling: Fixed an issue where messages routed to quorum queues during a network partition might not be republished correctly.
- Log File Accumulation: Fixed a bug where quorum queues with poison message handling disabled could accumulate excessive Raft log segment files.
These changes make quorum queues more resilient during node failures and network issues, which is critical for production systems relying on them for data safety.
FAQ
I'm upgrading from 3.13.x with classic mirrored queues. What's the best path?
Use the enhanced blue-green deployment tooling in rabbitmqadmin v2. It provides more automated commands to help migrate from classic mirrored queues to quorum queues or streams, which are the modern, supported queue types.
Do I need to enable any new feature flags when upgrading to 4.2.0?
No. The set of required feature flags is the same as in RabbitMQ 4.1.x and 4.0.x. You only need to enable the khepri_db flag if you want to switch your existing cluster's metadata store.
Can I run 4.2.0 nodes in a cluster with 4.1.x nodes?
Yes, for rolling upgrades. RabbitMQ 4.2.0 nodes are compatible with 4.1.x and 4.0.x nodes in a mixed cluster. However, 4.2-specific features like SQL stream filters won't be available until all nodes are upgraded. Keep mixed-version periods short (a few hours).
My monitoring uses Prometheus metrics starting with `rabbitmq_raft`. What should I do?
Update your dashboards and alerts. The metrics for Ra-based components (quorum queues, Khepri, Stream Coordinator) have changed in 4.2. Some were removed, many added, and some renamed. Update to the latest version of the official RabbitMQ-Quorum-Queues-Raft Grafana dashboard.
How do I prevent a user from being deleted via the Management HTTP API?
Tag the user with `protected` using the CLI: `rabbitmqctl set_user_tags "username" "protected"`. This prevents deletion or modification over the HTTP API. Note that `set_user_tags` replaces the user's entire tag list, so include any existing tags in the command. To remove protection, delete the tag or delete and re-create the user.