2.3.21

Latest release in branch 2.3
Released 3 years ago (July 27, 2022)

Software HAProxy
Branch 2.3
Status End of life
End-of-life date July 27, 2022
First official release version 2.3.0
First official release date 5 years ago (November 05, 2020)
Release notes https://www.haproxy.org/download/2.3/src/CHANGELOG
Source code http://git.haproxy.org/?p=haproxy-2.3.git;a=tree;h=refs/tags/v2.3.21
Download https://www.haproxy.org/download/2.3/

What Is New in HAProxy 2.3

Category Key Highlights
New Features DNS Service Discovery, Log Forwarding, Event-Driven Agent Checks, Prometheus Exporter, Stick Tables in peers
Improvements HTTP Connection Management, Cache Performance, SSL/TLS Enhancements, Logging
Bug Fixes Numerous fixes across HTTP, SSL, DNS, and connection handling

How does HAProxy 2.3 improve dynamic service discovery?

HAProxy 2.3 improves its native DNS service discovery (DNS SRV) support. HAProxy resolves a DNS name, receives a list of active servers with their ports from the SRV records, and updates the backend server list automatically.

In practice, this means you can populate a backend from an SRV record such as _http._tcp.api.internal.example.com using the server-template directive, and HAProxy handles the rest. This is a huge step forward for dynamic environments like Kubernetes, where service endpoints change frequently.
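As a minimal sketch (the resolver address and SRV record name are illustrative placeholders, not from the release notes), a backend populated from SRV records might look like:

```haproxy
resolvers internal-dns
    # DNS server that answers the SRV queries (address is an assumption)
    nameserver dns1 10.0.0.2:53
    accepted_payload_size 8192

backend my_app
    balance roundrobin
    # Create up to 10 server slots, filled from the SRV record's host/port answers
    server-template api 1-10 _http._tcp.api.internal.example.com resolvers internal-dns check
```

Because the FQDN starts with an underscore, HAProxy queries it as an SRV record and fills the templated server slots from the answers, marking unused slots down.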

What new logging capabilities were added?

The release adds a powerful log forwarding feature. You can now define multiple log targets and have HAProxy forward its logs to them asynchronously using a ring buffer.

This matters because it decouples logging from the main request processing loop. Your HTTP performance stays consistent even if a remote syslog server becomes slow or unresponsive, as logs are buffered and sent in the background.
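A hedged sketch of that setup (the syslog server address, buffer sizes, and section names are placeholders): logs are written to a ring section and shipped to a remote TCP syslog target in the background:

```haproxy
ring remote-logs
    description "buffered logs for the remote syslog server"
    format rfc3164
    maxlen 1200
    size 32764
    timeout connect 5s
    timeout server 10s
    # Remote syslog target over TCP (address is an assumption)
    server log1 10.0.0.5:6514 log-proto octet-count

global
    # Route HAProxy's own logs through the ring instead of directly to the server
    log ring@remote-logs local0 info
```

If the remote server stalls, requests keep flowing; only the buffered log entries are at risk, as described in the FAQ below.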

How are health checks more efficient now?

Agent checks have been re-architected to be event-driven. Previously, each agent check ran on its own timer, which could produce a thundering-herd effect when many timers fired at once. Now checks are managed by a central scheduler and spread out more intelligently.

This reduces unnecessary load on your agents, especially in large-scale deployments with thousands of backend servers. The checks are more evenly distributed and consume fewer system resources.
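The configuration syntax for agent checks is unchanged by this rework; a minimal example (server and agent addresses are assumptions) that the scheduler now distributes more evenly:

```haproxy
backend my_app
    # Query an external agent on port 9999 every 5s; the agent can reply with
    # strings like "75%", "drain", or "down" to adjust server weight or state
    server web1 10.0.0.21:80 check agent-check agent-addr 10.0.0.21 agent-port 9999 agent-inter 5s
    server web2 10.0.0.22:80 check agent-check agent-addr 10.0.0.22 agent-port 9999 agent-inter 5s
```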

What metrics can the new Prometheus exporter provide?

A Prometheus exporter can be built directly into the HAProxy process. You expose metrics by routing requests from a frontend to the built-in prometheus-exporter service and pointing Prometheus at that endpoint.

This gives you deep insight into HAProxy's internal state, including frontend, backend, and server metrics, without needing an external exporter. It simplifies the monitoring stack and reduces latency for metric collection.
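One way to expose it (note: in the 2.3 branch the exporter lives in contrib/ and must be enabled at build time; the bind port and metrics path here are assumptions):

```haproxy
frontend prometheus
    mode http
    bind *:8404
    # Serve Prometheus-format metrics from the built-in exporter service
    http-request use-service prometheus-exporter if { path /metrics }
    no log
```

Keeping the exporter on its own frontend makes it easy to restrict access to the metrics endpoint at the firewall or with ACLs.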

Can stick tables now be shared between peers?

Yes, stick table data can be synchronized between HAProxy nodes through a peers section. This enables stateful load balancing across a cluster, which is critical for maintaining user session persistence in high-availability configurations.

Tracking and syncing data such as client IP addresses, request rates, or cookies across multiple load balancers is handled natively by the peers protocol, with no external state store required.
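A minimal peers sketch (peer names and addresses are placeholders): each node lists every cluster member, including itself, and the stick table references the peers section:

```haproxy
peers lb-cluster
    peer lb1 10.0.0.11:10000
    peer lb2 10.0.0.12:10000

backend my_app
    # Session persistence keyed on client IP, replicated across the cluster
    stick-table type ip size 100k expire 30m peers lb-cluster
    stick on src
    server web1 10.0.0.21:80 check
```

Note that each peer name must match the local node's hostname (or the name set with -L / localpeer) for synchronization to work.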

FAQ

Does the new DNS SRV discovery replace the need for a dataplane API?
Not exactly; they serve different purposes. DNS SRV discovery is great for basic, DNS-driven server list updates, while the Data Plane API offers full, dynamic configuration control. You might use both: the API for complex configuration changes and DNS for simple server list updates.

Is the Prometheus exporter ready for production use?
Yes, it's a core feature. It exposes a wide range of metrics on a dedicated endpoint. However, for extremely high-performance environments, test its overhead with your specific metric scraping interval.

What happens if the log forwarder's ring buffer fills up?
The default behavior is to discard the oldest logs. You can configure the size of the ring buffer to match your needs and ensure it's large enough to handle bursts of traffic and temporary network issues with your log servers.

How does event-driven agent checking improve upon the old method?
The old method used a timer per check, which could cause many checks to fire simultaneously. The new central scheduler spreads them out, preventing spikes in CPU and network usage and making health checking more efficient.

Can I use the new stick table peers feature for rate limiting across multiple HAProxy instances?
Absolutely. This is a primary use case. By syncing stick tables, you can implement a global rate limit that tracks requests from a client IP across your entire fleet of load balancers, preventing them from exceeding limits by hitting different nodes.
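A sketch of cluster-wide rate limiting along those lines (the threshold, table sizes, and peers section name are assumptions):

```haproxy
backend my_app
    # Track per-client-IP request rate; synced across nodes via the peers section
    stick-table type ip size 100k expire 10m store http_req_rate(10s) peers lb-cluster
    http-request track-sc0 src
    # Reject clients exceeding 100 requests per 10s, counted fleet-wide
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    server web1 10.0.0.21:80 check
```

Because the table is replicated, a client that rotates between load balancers is still counted against a single shared rate.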

Releases In Branch 2.3

Version Release date
2.3.21 July 27, 2022
2.3.20 April 29, 2022
2.3.19 March 14, 2022
2.3.18 March 02, 2022
2.3.17 January 11, 2022
2.3.16 November 24, 2021
2.3.15 November 04, 2021
2.3.14 September 07, 2021
2.3.13 August 17, 2021
2.3.12 July 08, 2021
2.3.11 July 07, 2021
2.3.10 April 23, 2021
2.3.9 March 30, 2021
2.3.8 March 25, 2021
2.3.7 March 16, 2021
2.3.6 March 03, 2021
2.3.5 February 06, 2021
2.3.4 January 13, 2021
2.3.3 January 08, 2021
2.3.2 November 28, 2020
2.3.1 November 13, 2020
2.3.0 November 05, 2020
2.3-dev9 October 31, 2020
2.3-dev8 October 24, 2020
2.3-dev7 October 17, 2020
2.3-dev6 October 10, 2020
2.3-dev5 September 25, 2020
2.3-dev4 September 11, 2020
2.3-dev3 August 14, 2020
2.3-dev2 July 31, 2020
2.3-dev1 July 17, 2020
2.3-dev0 July 07, 2020