What Is New in HAProxy 2.3
| Category | Key Highlights |
|---|---|
| New Features | DNS Service Discovery, Log Forwarding, Event-Driven Agent Checks, Prometheus Exporter, Stick Tables in peers |
| Improvements | HTTP Connection Management, Cache Performance, SSL/TLS Enhancements, Logging |
| Bug Fixes | Numerous fixes across HTTP, SSL, DNS, and connection handling |
How does HAProxy 2.3 improve dynamic service discovery?
HAProxy 2.3 introduces a native DNS Service Discovery (DNS SRV) layer. This allows HAProxy to resolve a domain name and get a list of active servers with their ports, automatically updating the backend configuration.
In practice, this means you can point a backend at a DNS SRV record such as `_http._tcp.api.internal.example.com`, and HAProxy resolves it, fills in the server list (addresses and ports), and keeps it up to date. This is a major step forward for dynamic environments like Kubernetes, where service endpoints change frequently.
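A minimal sketch of such a setup follows; the resolver address, SRV record name, and slot count are illustrative placeholders, not values from the release notes:

```haproxy
# Resolvers section pointing at the DNS server that serves the SRV records
resolvers internal_dns
    nameserver dns1 10.0.0.2:53
    accepted_payload_size 8192
    hold valid 10s

backend my_app
    # Provision up to 10 server slots, filled and updated from the
    # _http._tcp SRV record; unused slots remain in maintenance mode
    server-template srv 10 _http._tcp.api.internal.example.com resolvers internal_dns resolve-prefer ipv4 check
```

As endpoints appear or disappear in DNS, HAProxy enables or disables the corresponding server slots without a reload.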
What new logging capabilities were added?
The release adds a powerful log forwarding feature. A new `log-forward` section lets HAProxy act as a syslog relay: it can receive log messages and forward them to multiple targets asynchronously through a ring buffer.
This matters because it decouples logging from the main request processing loop. Your HTTP performance stays consistent even if a remote syslog server becomes slow or unresponsive, as logs are buffered and sent in the background.
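One possible shape for ring-buffered log delivery, assuming a remote syslog server at 10.0.0.5 (names, sizes, and timeouts are illustrative):

```haproxy
# Ring buffer that decouples log emission from delivery
ring remote_logs
    description "buffered logs to remote syslog"
    format rfc3164
    maxlen 1200
    size 32768
    timeout connect 5s
    timeout server 10s
    server syslog1 10.0.0.5:514

# Reference the ring as a log target
global
    log ring@remote_logs local0 info
```

If the syslog server stalls, log events accumulate in the ring instead of blocking request processing.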
How are health checks more efficient now?
Agent checks have been re-architected to be event-driven. Previously, each agent check ran on its own timer, which could produce a thundering-herd effect when many timers fired at once. Now they are managed centrally and scheduled more intelligently.
This reduces unnecessary load on your agents, especially in large-scale deployments with thousands of backend servers. The checks are more evenly distributed and consume fewer system resources.
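Agent checks are declared the same way as before; the scheduling improvement is internal. A sketch with placeholder addresses and ports:

```haproxy
backend my_app
    # Agent checks are declared per server; in 2.3 their execution
    # is coordinated by a central scheduler rather than per-check timers
    server web1 10.0.1.10:8080 check agent-check agent-port 9999 agent-inter 5s
    server web2 10.0.1.11:8080 check agent-check agent-port 9999 agent-inter 5s
```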
What metrics can the new Prometheus exporter provide?
A Prometheus exporter now ships with HAProxy itself. You expose metrics by routing a path on a stats frontend to the `prometheus-exporter` service and pointing Prometheus at it.
This gives you deep insight into HAProxy's internal state, including frontend, backend, and server metrics, without needing an external exporter. It simplifies the monitoring stack and reduces latency for metric collection.
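A minimal sketch of exposing the exporter; the port and paths are arbitrary choices, and depending on how your HAProxy package was built, the exporter component may need to be enabled at build time:

```haproxy
frontend stats
    bind *:8404
    # Route /metrics to the bundled Prometheus exporter service
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats
```

Prometheus would then scrape `http://<haproxy-host>:8404/metrics`.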
Can stick tables now be shared between peers?
Yes, stick table data can now be shared between HAProxy nodes in a peers setup. This enables stateful load balancing across a cluster, which is critical for maintaining user session persistence in high-availability configurations.
Before this, you had to rely on external stores or sticky routing tricks to share state. Now, tracking and syncing data such as client IP addresses or session cookies across multiple load balancers is handled natively.
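A sketch of a two-node peers setup with a synced stick table; peer names, addresses, and table sizing are placeholders:

```haproxy
peers mypeers
    peer lb1 10.0.0.1:10000
    peer lb2 10.0.0.2:10000

backend my_app
    # Table entries are replicated to the other peer automatically
    stick-table type ip size 100k expire 30m peers mypeers store conn_cur
    stick on src
```

Note that each node's own name in the `peers` section must match its local peer name (typically the hostname or the value passed with `-L`).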
FAQ
Does the new DNS SRV discovery replace the need for a dataplane API?
Not exactly. They serve different purposes. DNS SRV is great for basic, DNS-driven service discovery. The Dataplane API offers full, dynamic configuration control. You might use both: the API for complex config changes and DNS for simple server list updates.
Is the Prometheus exporter ready for production use?
Yes. It exposes a wide range of metrics on a dedicated endpoint. However, for extremely high-throughput environments, measure its overhead with your specific scrape interval before relying on it.
What happens if the log forwarder's ring buffer fills up?
The default behavior is to discard the oldest logs. You can configure the size of the ring buffer to match your needs and ensure it's large enough to handle bursts of traffic and temporary network issues with your log servers.
How does event-driven agent checking improve upon the old method?
The old method used a timer per check, which could cause many checks to fire simultaneously. The new central scheduler spreads them out, preventing spikes in CPU and network usage and making health checking more efficient.
Can I use the new stick table peers feature for rate limiting across multiple HAProxy instances?
Absolutely. This is a primary use case. By syncing stick tables, you can implement a global rate limit that tracks requests from a client IP across your entire fleet of load balancers, preventing them from exceeding limits by hitting different nodes.
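A sketch of fleet-wide rate limiting built on a synced stick table; the peer list, threshold, and window are illustrative assumptions:

```haproxy
peers mypeers
    peer lb1 10.0.0.1:10000
    peer lb2 10.0.0.2:10000

frontend fe_main
    bind *:80
    # Per-client-IP request rate, replicated across all peers
    stick-table type ip size 1m expire 10m peers mypeers store http_req_rate(10s)
    http-request track-sc0 src
    # Deny clients exceeding 100 requests per 10 seconds, fleet-wide
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
```

Because the counter is synced, a client hitting lb1 and lb2 alternately is still measured against a single shared rate.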