What Is New in NGINX 1.31
| Category | Change |
|---|---|
| New Features | least_time load balancing in upstream blocks; proxy_ssl_alpn in the stream module; auth_basic, satisfy, and auth_delay on ngx_http_tunnel_module connections |
| Security Fixes | Six CVEs, headlined by a heap buffer overflow in ngx_http_rewrite_module (CVE-2026-42945) and a use-after-free in DNS response processing with ssl_ocsp (CVE-2026-40701) |
| Breaking Changes | HTTP/2 and HTTP/3 requests carrying hop-by-hop headers are now rejected; WebDAV COPY/MOVE operations that target their own source or an overlapping path now fail |
| Improvements | "invalid alert", "record layer failure", and "SSL alert number N" log messages lowered from crit to info |
| Bug Fixes | HTTP/2 upstream connection caching works again with proxy_set_body and proxy_pass_request_body |
| Deprecations | --without-http_upstream_sticky renamed to --without-http_upstream_sticky_module |
Should I Upgrade Immediately -- How Bad Are the CVEs?
Yes, for anything internet-facing this is a patch-now release. Six CVEs landed in 1.31.0, and two of them are
serious enough to pull you out of bed: CVE-2026-42945 is a heap buffer overflow in
ngx_http_rewrite_module that can lead to arbitrary code execution in a worker process, and
CVE-2026-40701 is a use-after-free in DNS response processing when ssl_ocsp is enabled, which can
corrupt worker memory. Those two alone justify an emergency maintenance window.
The remaining four are real but narrower in scope. CVE-2026-42926 requires you to use proxy_set_body
with HTTP/2 upstreams -- if you do not, you are not exposed. CVE-2026-42946 only triggers if you proxy through SCGI
or uWSGI. CVE-2026-42934 needs the charset_map directive with UTF-8 decoding in play. CVE-2026-40460 is
an HTTP/3 / QUIC address-spoofing issue, so it is only relevant if you have QUIC enabled.
Quick exposure checklist
| CVE | Trigger condition | Impact | Priority |
|---|---|---|---|
| CVE-2026-42945 | Any request processed by rewrite rules | Arbitrary code execution in worker | Critical |
| CVE-2026-40701 | ssl_ocsp directive active | Worker memory corruption / segfault | Critical |
| CVE-2026-42926 | proxy_set_body + HTTP/2 upstream | Request body injection | High |
| CVE-2026-42946 | SCGI or uWSGI backend | Memory disclosure / segfault | High |
| CVE-2026-42934 | charset_map with UTF-8 | Limited memory disclosure / segfault | Medium |
| CVE-2026-40460 | HTTP/3 / QUIC with connection migration | Client address spoofing | Medium |
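Before scheduling the maintenance window, a quick pass over the config tree tells you which trigger conditions from the table actually apply to you. A rough sketch, assuming configs live under /etc/nginx/ (adjust the path and patterns for your layout):
```sh
# count non-commented occurrences of each CVE trigger directive
for pat in rewrite ssl_ocsp proxy_set_body scgi_pass uwsgi_pass charset_map quic; do
    printf '%-16s %s match(es)\n' "$pat" \
        "$(grep -rE "^[^#]*\b$pat\b" /etc/nginx/ | wc -l)"
done
```
Any non-zero count means the corresponding row in the table applies to your deployment.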
What Breaks After the Upgrade -- Which Headers and DAV Behaviors Changed?
Two behavioral changes will silently break traffic if you are not aware of them, and neither is behind a config flag -- they are enforced unconditionally.
Hop-by-hop headers in HTTP/2 and HTTP/3
NGINX 1.31 now rejects any HTTP/2 or HTTP/3 request that carries the headers
Connection, Proxy-Connection, Keep-Alive, Transfer-Encoding, or
Upgrade. It also rejects the TE header unless its value is exactly trailers.
The HTTP/2 spec (RFC 9113) has always forbidden these hop-by-hop headers, so NGINX is now rejecting what was
already a spec violation instead of silently tolerating it. In practice, the clients most likely to send them are
misconfigured gRPC clients, old load balancers that blindly forward HTTP/1.1 headers, and home-grown HTTP libraries.
If you sit behind another proxy, check whether it forwards Connection headers downstream. A quick
access_log search for 400 responses right after upgrade will tell you if clients are
hitting this.
```sh
# grep for 400s introduced by the new header rejection
grep ' 400 ' /var/log/nginx/access.log | tail -50
```
```nginx
# confirm the rejection reason with debug logging on a test vhost
error_log /var/log/nginx/debug.log debug;
```
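To reproduce the rejection deliberately, send a forbidden header over HTTP/2 at a test vhost. A minimal sketch; staging.example.com is a placeholder, and some curl/nghttp2 builds refuse to emit connection-specific headers themselves, in which case you need a lower-level h2 client:
```sh
# expect 400 from 1.31; older versions typically answered 200
curl -sk --http2 -o /dev/null -w '%{http_code}\n' \
    -H 'Keep-Alive: timeout=5' \
    https://staging.example.com/
```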
WebDAV COPY and MOVE self-referencing
The ngx_http_dav_module now returns an error when a COPY or MOVE operation targets its own source, or when the
source and destination are in a parent-child relationship. Previously this could produce undefined behavior or data
corruption depending on the filesystem. Most WebDAV clients never do this intentionally, but some sync tools issue
speculative MOVE requests that could hit it. Test your WebDAV client against 1.31 before rolling out to a
storage-heavy environment.
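A self-referencing COPY is easy to issue by hand against a test share. A minimal sketch; dav.example.com/share/report.txt is a placeholder path on a DAV-enabled location:
```sh
# COPY a resource onto itself -- 1.31 refuses this instead of
# handing it to the filesystem; expect a 4xx status
curl -s -o /dev/null -w '%{http_code}\n' -X COPY \
    -H 'Destination: https://dav.example.com/share/report.txt' \
    https://dav.example.com/share/report.txt
```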
What New Load Balancing and Proxy Features Did 1.31 Bring?
Three additions improve how NGINX handles upstream routing and TLS negotiation, and one of them --
least_time in the upstream block -- closes a gap that has sent people to third-party modules for years.
least_time in upstream
The least_time directive is now available directly inside upstream blocks, giving you
response-time-aware load balancing without the ngx_http_upstream_least_time commercial module. It
routes each new connection to the upstream with the lowest average response time combined with the fewest active
connections. Think of it as least_conn with a latency dimension added -- useful when your backends have
meaningfully different processing speeds, like a mix of cached and uncached application nodes.
```nginx
upstream app_backends {
    least_time last_byte;   # route by response time to last byte
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
    keepalive 32;
}
```
proxy_ssl_alpn in the stream module
The stream module picks up proxy_ssl_alpn, letting you explicitly declare which ALPN protocols NGINX
advertises when opening a TLS connection to a TCP/UDP upstream. Before this, the stream module had no way to tell an
upstream TLS endpoint "I speak h2" -- you had to rely on the upstream's default negotiation. This matters most for
gRPC pass-through and any scenario where your upstream requires a specific ALPN value to route correctly.
```nginx
stream {
    upstream grpc_backend {
        server 10.0.0.5:50051;
    }

    server {
        listen 443;
        proxy_pass grpc_backend;
        proxy_ssl on;
        proxy_ssl_alpn h2;   # advertise HTTP/2 to the upstream
    }
}
```
ngx_http_tunnel_module authentication
The new ngx_http_tunnel_module gains support for auth_basic, satisfy, and
auth_delay on tunnel connections. This means you can gate CONNECT-based proxy tunnels behind the same
access control patterns you already use for regular HTTP locations -- no separate auth layer needed.
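Wired together it looks like ordinary location-level access control. A minimal sketch -- the tunnel-enabling directive shown is hypothetical, since only auth_basic, satisfy, and auth_delay are confirmed to apply to tunnel connections:
```nginx
server {
    listen 3128;

    location / {
        tunnel on;                                # hypothetical enabling directive

        satisfy all;
        auth_basic           "proxy tunnel";
        auth_basic_user_file /etc/nginx/htpasswd;
        auth_delay           2s;                  # slow down credential brute-forcing
    }
}
```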
Will the Noisy SSL Log Spam Finally Stop?
Yes. NGINX 1.31 drops the log level for "invalid alert", "record layer failure", and all "SSL alert number N"
messages from crit down to info. If you have ever run NGINX in production and watched your
monitoring system fire alerts because a bot sent a malformed TLS ClientHello, you know exactly why this matters.
Previously these messages cluttered error logs at crit, which most log pipelines treat as something a
human must look at. In reality, the vast majority of them are noisy clients -- scanners, crawlers, mis-implemented
TLS stacks -- not actual infrastructure problems. Lowering them to info means they still appear when
you set error_log ... info for debugging, but they will not wake anyone up.
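If you still want these events on disk for forensics without paging anyone, log at info to a separate file -- nginx has long allowed multiple error_log directives at different levels:
```nginx
error_log /var/log/nginx/error.log warn;      # operational log stays quiet
error_log /var/log/nginx/tls-noise.log info;  # captures the downgraded SSL alerts
```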
Watch out if you have alerting rules that grep for SSL-related strings at crit and lower. After
upgrading, those alerts will go quiet, which is the intended outcome -- but double-check that a genuine TLS problem
would still surface through a different signal (like upstream connection failures or 502 rates).
Common Questions about NGINX 1.31
Do I need to change my config files to upgrade to 1.31, or will my existing nginx.conf just work?
Most configs will work without edits, but two behavior changes can cause immediate failures: any HTTP/2 or HTTP/3
request carrying hop-by-hop headers such as Connection or Transfer-Encoding is now rejected with a 400, and
ngx_http_dav_module COPY or MOVE operations that target their own source now return errors. Run nginx -t after
replacing the binary and test your highest-traffic paths before switching traffic over.
Am I exposed to CVE-2026-42945 even if I do not use complex rewrite rules?
Yes. The heap buffer
overflow in ngx_http_rewrite_module can be triggered by a specially crafted request regardless of how simple your
rewrite block is -- even a basic rewrite directive is enough to put you in scope. The only safe fix is upgrading to
1.31.0.
How do I tell if ssl_ocsp is active so I can assess my CVE-2026-40701 exposure?
Run grep -r
ssl_ocsp /etc/nginx/ against your config directory. If that returns any results that are not commented out, you are
exposed and should treat this upgrade as urgent. The use-after-free in DNS response processing only triggers when
ssl_ocsp is configured.
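A slightly stricter sketch that skips commented-out lines (approximate -- it treats a '#' right after grep's file:line: prefix as a comment marker):
```sh
# list only active ssl_ocsp directives
grep -rn 'ssl_ocsp' /etc/nginx/ | grep -vE ':[0-9]+:[[:space:]]*#'
```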
Does least_time in the upstream block require any additional modules or compile flags?
No, it
is built into the open-source NGINX binary in 1.31. Previous versions required the commercial nginx-plus upstream
module for least_time behavior. You can use it immediately after upgrading with no recompile.
What replaced the deprecated --without-http_upstream_sticky configure option?
Use
--without-http_upstream_sticky_module when building NGINX from source. The old option name still works in 1.31 but
is flagged as deprecated and will likely be removed in a future release, so update your build scripts now.
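In a build script this is a one-token rename; a sketch, with the rest of your configure flags elided:
```sh
# old spelling, deprecated in 1.31:
#   --without-http_upstream_sticky
# new spelling:
./configure --without-http_upstream_sticky_module   # plus your usual flags
```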
Why were HTTP/2 backend connections not being cached with proxy_set_body, and is that bug fixed in 1.31?
Yes, it is fixed. The bug caused NGINX to skip connection reuse for HTTP/2 upstreams whenever
proxy_set_body or proxy_pass_request_body was active, effectively defeating keepalive pooling and creating a new
upstream connection on every request. After upgrading to 1.31, upstream connection caching resumes normally for
these configurations.
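The configuration that used to defeat pooling is just the standard keepalive pattern plus proxy_set_body. A minimal sketch with placeholder addresses; the directives shown are the usual keepalive requirements, and the HTTP/2 upstream negotiation itself is outside this snippet:
```nginx
upstream api_backend {
    server 10.0.0.9:8443;              # placeholder backend
    keepalive 16;                      # pool that 1.31 reuses again
}

server {
    listen 8080;

    location /api/ {
        proxy_pass https://api_backend;
        proxy_set_body $request_body;  # the directive that used to defeat pooling
        proxy_http_version 1.1;        # standard keepalive requirements
        proxy_set_header Connection "";
    }
}
```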