# What Is New in Apache Kafka 4.2
Apache Kafka 4.2 brings several production-ready enhancements, with Share Groups moving closer to real-world usage, major upgrades to Kafka Streams, and improved observability across the board.
| Category | Key Changes |
|---|---|
| New Features | Share Groups with KIP-932, Java 25 support, external schema in JsonConverter, rack ID in Admin API, adaptive batching for group coordinator. |
| Improvements | Unified CLI argument style, consistent metric naming, idle ratio metrics, rebalance listener metrics for Streams, fluent CloseOptions. |
| Kafka Streams | Server-side rebalance protocol (KIP-1071), dead letter queue support, anchored punctuation, improved exception handler. |
| Observability | Controller and MetadataLoader idle ratio metrics, feature level metrics, application-id tag for Streams state metrics. |
| Deprecations | Legacy ConsumerGroupMetadata APIs, MX4j support, BrokerNotFoundException. |
## Share Groups introduce a new consumption model for Kafka
Share Groups in Kafka 4.2 allow multiple consumers to process records from the same partition concurrently, with individual acknowledgment semantics.
This model works particularly well for queue-style workloads where strict partition-to-consumer stickiness is not required.
In practice, you can use RENEW acknowledgment to extend lock timeouts for long-running records, and choose between batch_optimized or record_limit modes depending on how strict you need the delivery guarantees.
New share partition lag metrics also make it easier to monitor consumption progress and detect imbalances.
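The consumption loop for a share group might look like the sketch below, based on the KIP-932 share-consumer API. The topic name, group id, and the `share.acknowledgement.mode` config key are illustrative assumptions, and running it requires a broker with share groups enabled:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.AcknowledgeType;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SharePaymentsWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "payments-share-group");                    // share group id (example name)
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Explicit mode lets us acknowledge each record individually (assumed config key).
        props.put("share.acknowledgement.mode", "explicit");

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("payments"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    try {
                        process(record);
                        consumer.acknowledge(record, AcknowledgeType.ACCEPT);  // done, do not redeliver
                    } catch (RuntimeException e) {
                        consumer.acknowledge(record, AcknowledgeType.RELEASE); // make available again
                    }
                }
                consumer.commitSync(); // flush acknowledgements to the broker
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("processing %s%n", record.value());
    }
}
```

Unlike a classic consumer, several such workers can share one partition; the broker tracks per-record delivery state instead of a single committed offset.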
## Kafka Streams receives major rebalance and error-handling upgrades
KIP-1071 brings the server-side group management protocol to Kafka Streams, letting the broker handle task assignment and store topology metadata centrally.
Dead letter queue support is now built into the exception handler with a new Response class, allowing you to route failed records to a dedicated topic while preserving the original raw bytes.
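A DLQ-aware exception handler might look like the following sketch. `handleError()` and the `Response` class come from the release notes above; the DLQ topic name is an example, and `Response.resume(...)` plus the raw-bytes accessors on `ErrorHandlerContext` are assumptions about the exact API shape:

```java
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ErrorHandlerContext;
import org.apache.kafka.streams.errors.ProcessingExceptionHandler;
import org.apache.kafka.streams.processor.api.Record;

public class DlqProcessingExceptionHandler implements ProcessingExceptionHandler {

    @Override
    public Response handleError(ErrorHandlerContext context, Record<?, ?> record, Exception exception) {
        // Build the DLQ record from the original raw bytes so nothing is
        // lost or altered by (de)serialization along the way.
        ProducerRecord<byte[], byte[]> dlqRecord =
                new ProducerRecord<>("payments-dlq", context.sourceRawKey(), context.sourceRawValue());
        // Keep the application running and route the failed record to the DLQ topic.
        return Response.resume(List.of(dlqRecord));
    }

    @Override
    public void configure(Map<String, ?> configs) { }
}
```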
Anchored punctuation with a startTime parameter provides more deterministic behavior for periodic callbacks, especially useful when you need logic to trigger exactly at the start of an hour or day.
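In a processor, the anchored variant might be used like this; the exact position of the `startTime` parameter in the `schedule(...)` overload is an assumption, and the processor itself is a bare illustration:

```java
import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class HourlyRollupProcessor implements Processor<String, Long, String, Long> {

    @Override
    public void init(ProcessorContext<String, Long> context) {
        // Anchor the punctuator to the top of the next hour so callbacks fire
        // at 13:00, 14:00, ... rather than at arbitrary offsets from startup.
        Instant nextHour = Instant.now().truncatedTo(ChronoUnit.HOURS).plus(1, ChronoUnit.HOURS);
        context.schedule(Duration.ofHours(1), PunctuationType.WALL_CLOCK_TIME, nextHour,
                timestamp -> emitHourlyRollup(context, timestamp));
    }

    @Override
    public void process(Record<String, Long> record) {
        // accumulate per-key state here
    }

    private void emitHourlyRollup(ProcessorContext<String, Long> context, long timestamp) {
        // forward aggregated results downstream
    }
}
```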
## Better observability with standardized metrics
Kafka 4.2 cleans up metric naming to follow the kafka.COMPONENT convention and introduces new idle ratio metrics for controller threads and MetadataLoader.
These idle ratio metrics show how much time threads spend waiting rather than processing events, giving you clearer visibility into thread utilization, especially in KRaft-based clusters.
The application-id tag added to Streams client-state metrics makes it easier to group and monitor instances belonging to the same logical application.
## CLI tools and configuration become more consistent
All command-line tools now use standardized options such as --bootstrap-server and --command-config. Legacy inconsistent arguments have been deprecated.
ConsumerPerformance adds an --include option for regex-based topic filtering, while the EndToEndLatency tool gains improved argument parsing and support for message keys and headers.
Adaptive append.linger.ms for the group coordinator removes the previous 5 ms latency floor, letting the coordinator batch writes based on the actual workload.
## Java 25 support and other notable changes
Kafka 4.2 officially supports Java 25 and exposes rackId through the Admin API for both consumers and share group members.
Dynamic configuration for the remote log manager follower thread pool allows adjusting pool size without restarting brokers.
The RecordHeader implementation now uses double-checked locking, reducing the chance of a NullPointerException when headers are accessed from multiple threads.
## FAQ
### Does Kafka 4.2 change anything major for traditional consumer groups?
No. Traditional consumer groups continue to work exactly as before. Share Groups are a parallel model designed for queue-like semantics and do not replace classic consumer groups.
### How do I enable dead letter queue support in Kafka Streams 4.2?
Use the updated exception handler with the new handleError() method and Response class. Failed records can be sent to a DLQ topic while keeping the original raw bytes from the source.
### Will the metric naming changes in 4.2 break my existing dashboards?
Old metric names still exist but are deprecated. You should gradually migrate to the new kafka.COMPONENT naming convention to avoid issues in future releases.
### Does the share consumer support a strict maximum number of records per fetch?
Yes. You can control this through the ShareAcquireMode configuration. Choose record_limit for hard limits or batch_optimized when you prefer more flexible batching.
### What changed with CloseOptions in Kafka Streams 4.2?
CloseOptions now uses a fluent API style with the GroupMembershipOperation enum, giving you clearer control over whether a leave-group request is sent during shutdown instead of a simple boolean flag.
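In code, the fluent style might look like the sketch below; the builder method name `operation(...)` and the package of the GroupMembershipOperation enum are assumptions, so check the 4.2 Javadoc before relying on them:

```java
import java.time.Duration;

import org.apache.kafka.streams.KafkaStreams;

public class GracefulShutdown {

    static void shutdown(KafkaStreams streams) {
        // Explicitly leave the group on close so tasks are reassigned
        // immediately instead of waiting for the session timeout.
        KafkaStreams.CloseOptions options = new KafkaStreams.CloseOptions()
                .timeout(Duration.ofSeconds(30))
                .operation(GroupMembershipOperation.LEAVE_GROUP); // replaces the old leaveGroup(true) flag
        streams.close(options);
    }
}
```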