HAProxy Technologies is excited to announce the release of HAProxy 2.2, featuring a fully dynamic SSL certificate storage, a native response generator, an overhaul to its health checking system, and advanced ring logging with syslog over TCP.
If you missed our webinar, Ask Me Anything About HAProxy 2.2, you can watch it on-demand.
HAProxy 2.2 adds exciting features such as a fully dynamic SSL certificate storage, a native response generator, advanced ring buffer logging with syslog over TCP, security hardening, and improved observability and debugging capabilities. It also touts more customizable error handling and several new features that integrate directly with HAProxy’s highly performant log-format
capabilities, which allow you to build complex strings using HAProxy’s powerful built-in fetches and converters. The new features allow you to serve responses directly at the edge and generate custom error files on-the-fly, which can be incorporated directly into the new and improved, flexible health check system. This release follows the HAProxy Data Plane API 2.0 release just last month.
This release was truly a community effort and would not have been possible without all of the hard work from everyone involved in active discussions on the mailing list and in the HAProxy project on GitHub.
The HAProxy community provides code submissions covering new functionality and bug fixes, documentation improvements, quality assurance testing, continuous integration environments, bug reports, and much more. Everyone has done their part to make this release possible! If you’d like to join this amazing community, you can find it on GitHub, Slack, Discourse, and the HAProxy mailing list.
This release builds on the HAProxy 2.1 technical release and is an LTS release.
We’ve put together a complete HAProxy 2.2 configuration, which allows you to follow along and get started with the latest features right away. You will find the latest Docker images here.
In this post, we’ll give you an overview of the following updates included in this release:
Dynamic SSL Certificate Storage
HAProxy 2.1 added the ability to update SSL certificates that had been previously loaded into memory by using the Runtime API. This has been expanded even further to allow full management of certificates using the in-memory dynamic storage. Easily create, delete, and update certificates on-the-fly. Note that changes made through the Runtime API apply only in memory, so it’s best to have a separate step that writes these files to disk too.
# Add new empty certificate
$ echo "new ssl cert /etc/haproxy/certs/wildcard.demo.haproxy.net.pem" |socat tcp-connect:127.0.0.1:9999 -
New empty certificate store '/etc/haproxy/certs/wildcard.demo.haproxy.net.pem'!
# Create transaction with certificate data
$ echo -e -n "set ssl cert /etc/haproxy/certs/wildcard.demo.haproxy.net.pem <<\n$(cat /tmp/wildcard.demo.haproxy.net.pem)\n\n" |socat tcp-connect:127.0.0.1:9999 -
Transaction created for certificate /etc/haproxy/certs/wildcard.demo.haproxy.net.pem!
# Commit certificate into memory for use
$ echo "commit ssl cert /etc/haproxy/certs/wildcard.demo.haproxy.net.pem" |socat tcp-connect:127.0.0.1:9999 -
Committing /etc/haproxy/certs/wildcard.demo.haproxy.net.pem
Success!
You can also add certificates directly into a crt-list file from the Runtime API. If you are using a directory instead of a crt-list file, replace the path below, /etc/haproxy/crt.lst, with your directory path.
$ echo "add ssl crt-list /etc/haproxy/crt.lst /etc/haproxy/certs/wildcard.demo.haproxy.net.pem" |socat tcp-connect:127.0.0.1:9999 -
Inserting certificate '/etc/haproxy/certs/wildcard.demo.haproxy.net.pem' in crt-list '/etc/haproxy/crt.lst'.
Success!
It also supports showing all of the certificates that HAProxy has stored in memory with the show ssl cert
command.
$ echo "show ssl cert" |socat tcp-connect:127.0.0.1:9999 -
# filename
certs/test.local.pem.ecdsa
certs/test.local.pem.rsa
You can get detailed information about each certificate, such as its expiration date, allowing you to easily verify the certificates that your HAProxy load balancers are using.
$ echo "show ssl cert certs/test.local.pem.ecdsa" |socat tcp-connect:127.0.0.1:9999 -
Filename: certs/test.local.pem.ecdsa
Status: Used
Serial: 0474204BCBAEFD4271A9E77AACC35BA92D42
notBefore: Apr 28 11:07:59 2020 GMT
notAfter: Jul 27 11:07:59 2020 GMT
Subject Alternative Name: DNS:test.local, DNS:test.local
Algorithm: EC256
SHA1 FingerPrint: B3B9F41ECD74422EE0DD7A8C7F35CFA3C398CA82
Subject: /CN=test.local
Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Chain Subject: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Chain Issuer: /O=Digital Signature Trust Co./CN=DST Root CA X3
This also applies to certificates that are defined within a crt-list.
$ echo "show ssl crt-list" |socat tcp-connect:127.0.0.1:9999 -
/etc/haproxy/crt.lst
$ echo "show ssl crt-list /etc/haproxy/crt.lst" |socat tcp-connect:127.0.0.1:9999 -
# /etc/haproxy/crt.lst
/etc/haproxy/certs/test.local.pem.ecdsa [alpn h2,http/1.1]
/etc/haproxy/certs/wildcard.demo.haproxy.net.pem
SSL/TLS Enhancements
Diffie-Hellman is a cryptographic algorithm used to exchange keys in many popular protocols, including HTTPS, SSH, and others. HAProxy uses it when negotiating an SSL/TLS connection. Prior versions of HAProxy had generated the algorithm’s parameters using numbers 1024 bits in size. However, as demonstrated in the 2015 paper Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, there’s evidence that this is too weak. For some time, HAProxy emitted a warning about this, urging the user to set the tune.ssl.default-dh-param
directive to at least 2048:
[WARNING] 162/105610 (14200) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear.
HAProxy now defaults to 2048, eliminating this warning on startup for the vast majority of use cases. All modern clients support 2048-bit parameters; only older clients, such as Java 7 and earlier, may not. If necessary, you can still fall back to 1024 by setting tune.ssl.default-dh-param explicitly.
You can use the ssl-default-bind-curves
global directive to specify the list of elliptic curve algorithms that are negotiated during the SSL/TLS handshake when using Elliptic-curve Diffie-Hellman Ephemeral (ECDHE). Whether you’re using ECDHE depends on the TLS cipher suite you’ve configured with ssl-default-bind-ciphers
. When setting ssl-default-bind-curves
, the elliptic curves algorithms are separated by colons, as shown here:
ssl-default-bind-curves X25519:P-256
HAProxy 2.2 also sets a new default TLS version, since TLSv1.0 has been on its way out the door for quite some time and has been the culprit behind many popular attacks against TLS. In June 2018, the PCI DSS standard began requiring websites to use TLSv1.1 or above in order to comply. Following suit, all major browsers announced that they would deprecate TLSv1.0 and v1.1 in March of 2020. To aid in the push for complete deprecation of TLSv1.0, HAProxy has selected TLSv1.2 as the new default minimum version. You can adjust this using the ssl-min-ver
directive within the global
section.
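As a minimal sketch of this setting, the minimum version can be applied to every bind line at once through ssl-default-bind-options in the global section:

```haproxy
global
    # Refuse anything older than TLSv1.2 on every bind line
    ssl-default-bind-options ssl-min-ver TLSv1.2
```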
This version of HAProxy adds more flexibility when it comes to how you store your TLS certificates. Previously, HAProxy required you to specify the public certificate and its associated private key within the same PEM certificate file. Now, if a private key is not found in the PEM file, HAProxy will look for a file with the same name, but with a .key file extension and load it. That behavior can be changed with the ssl-load-extra-files
directive within the global section.
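For example, a minimal sketch restricting the extra-file lookup to the .key companion only:

```haproxy
global
    # Only look for a companion .key file next to each certificate;
    # other extras such as .ocsp, .sctl, and .issuer are not loaded
    # (use "all" or "none" to widen or disable the behavior)
    ssl-load-extra-files key
```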
It’s now easier to manage a certificate’s chain of trust. Before, if your certificate was issued by an intermediate certificate, you had to include the intermediate in the PEM file so that clients could verify the chain. Now, you can store the intermediate certificate in a separate file and specify its parent directory with issuers-chain-path
. HAProxy will automatically complete the chain by matching the certificate with its issuer certificate from the directory. That can cut down on a lot of duplication in your certificate files, since you won’t need to include the issuer certificates in all of them.
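A sketch of this setup, using hypothetical paths; the intermediates live in their own directory and each site's PEM file only needs the leaf certificate and key:

```haproxy
global
    # HAProxy matches each leaf certificate to its issuer found here
    issuers-chain-path /etc/haproxy/issuers/

frontend fe_main
    # No intermediate needs to be embedded in site.pem anymore
    bind :443 ssl crt /etc/haproxy/certs/site.pem
```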
There’s a new directive that makes OCSP stapling simpler. Before, to set up OCSP stapling, you would store your site’s certificate PEM file in a directory, along with its issuer certificate in a file with the same name, but with a .issuer extension. Then, you would periodically invoke the openssl ocsp
command to request an OCSP response, referencing the issuer file with the -issuer parameter and the site’s certificate with the -cert parameter. HAProxy never sends files with a .issuer extension to clients; doing so would cause no harm, but it would waste bandwidth by sending a certificate that is likely already in the client’s certificate store. So, issuer files are only used when you manually call the openssl ocsp
command with the -issuer parameter. In HAProxy 2.2, if the issuer is a root CA, you can simply include it in the site’s certificate file. Use the new global directive ssl-skip-self-issued-ca
to keep the behavior of not sending it to the client during SSL/TLS communication, but now your openssl ocsp
command can point to this file for both the -issuer and -cert parameters.
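A sketch of enabling this behavior; it assumes the root CA has been appended to the site's certificate file:

```haproxy
global
    # Do not send the self-issued (root) CA to clients during the
    # handshake, even though it is present in the PEM file for OCSP use
    ssl-skip-self-issued-ca
```

The same PEM file can then be passed to both the -issuer and -cert parameters of openssl ocsp.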
HAProxy allows you to verify client certificates by storing the CA you used to sign them in a PEM file and referencing it with the ca-file argument on a bind
line. However, in some cases, you may want to authenticate a client certificate using an intermediate certificate, without providing the root CA too. HAProxy had required you to include the entire certificate chain in the file referenced by ca-file, all the way up to the root CA, which meant all intermediate CAs signed with this root CA would be accepted. Using the argument ca-verify-file
on a bind line, HAProxy now supports storing the root CA certificate in a completely separate file, which is used only to verify the intermediate CA.
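A sketch of this arrangement with hypothetical file names; clients present certificates signed by the intermediate CA, while the root is kept aside purely for verifying that intermediate:

```haproxy
frontend fe_main
    # ca-file holds only the intermediate CA used to authenticate clients;
    # ca-verify-file holds the root CA used solely to verify the intermediate
    bind :443 ssl crt /etc/haproxy/certs/site.pem verify required ca-file /etc/haproxy/ca/intermediate.pem ca-verify-file /etc/haproxy/ca/root.pem
```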
When debugging, sometimes it is convenient to use a tool like Wireshark to decrypt the traffic. In order to do this, however, a key log is required. HAProxy 1.9 introduced support for fetching the SSL session master key through ssl_fc_session_key
and HAProxy 2.0 added support for fetching the client and server random data. However, these fetches only cover up to TLS 1.2. HAProxy 2.2 now supports fetching and logging the secrets necessary for decrypting TLS 1.3. First, you must enable the global directive tune.ssl.keylog on. See the Fetches & Converters section for information about the individual fetches.
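A minimal sketch, with a hypothetical frontend and certificate path; the log line follows the SSLKEYLOGFILE naming that Wireshark expects:

```haproxy
global
    # Required before the TLS 1.3 secret fetches return any data
    tune.ssl.keylog on

frontend fe_tls
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Emit an SSLKEYLOGFILE-style line that Wireshark can consume
    log-format "CLIENT_HANDSHAKE_TRAFFIC_SECRET %[ssl_fc_client_random,hex] %[ssl_fc_client_handshake_traffic_secret]"
```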
Improvements have been made to startup times by de-duplicating the ca-file
and crl-file
directives.
Native Response Generator
Often, you want to return a file or response to the user as quickly as possible, directly from the edge of your infrastructure. HAProxy can now generate responses using the new http-request return action without forwarding the request to the backend servers. You can send local files from disk, as well as text that uses the log-format syntax, without resorting to hacks with errorfile directives and dummy backends. This may be a small file like a favicon or gif file, a simple message, or a complex response generated from HAProxy’s runtime information, such as one that shows the request headers that were received or the number of requests the client has made so far. Here’s an example of sending a favicon.ico file:
http-request return content-type image/x-icon file /etc/haproxy/favicon.ico if { path /favicon.ico }
Here’s a more complex example, which we test with curl
:
http-request return status 200 content-type "text/plain; charset=utf-8" lf-string "Hey there! \xF0\x9F\x90\x98 \nYou're accessing: %[req.hdr(host)]:%[dst_port]%[var(txn.lock_emoji)]\nFrom: %[src].\nYou've made a total of %[sc_http_req_cnt(0)] requests.\n" if { path /hello }
$ curl -k https://demo.haproxy.local/hello
Hey there! 🐘
You're accessing: demo.haproxy.local:443🔒
From: 192.168.1.25
You've made a total of 7 requests.
Dynamic Error Handling
A new section, http-errors
, has been introduced that allows you to define custom errors on a per-site basis. This is convenient when you would like to serve multiple sites from the same frontend
but want to ensure that they each have their own custom error pages.
http-errors test.local
    errorfile 400 /etc/haproxy/errorfiles/test.local/400.http
    errorfile 403 /etc/haproxy/errorfiles/test.local/403.http

http-errors demo.haproxy.net
    errorfile 400 /etc/haproxy/errorfiles/demo.haproxy.net/400.http
    errorfile 403 /etc/haproxy/errorfiles/demo.haproxy.net/403.http
Then, in your frontend
, add:
http-request deny errorfiles test.local if { req.hdr(host) test.local } { src 127.0.0.1 }
These error sections can also be referenced directly using the new errorfiles
directive in a frontend or backend.
backend be_main
    errorfiles test.local
Optionally, the new directive http-error status
can be used, which defines a custom error message to use instead of the errors generated by HAProxy.
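As a sketch, with a hypothetical template file, a generated 503 can be replaced with a custom, log-format-aware body:

```haproxy
backend be_main
    # Replace HAProxy's generated 503 with a custom page; lf-file
    # allows log-format tags inside the template
    http-error status 503 content-type "text/html" lf-file /etc/haproxy/errorfiles/503-template.html
```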
There is now a unification of error handling between the http-request
actions return, deny, and tarpit. You can handle a deny or return exactly the same way, specifying headers and a body independently using raw text or log-format parameters. Error processing is now dynamic and allows you to define errorfile templates with log-format parameters, such as including a client’s unique-id within a response.
Health Check Overhaul
Health checking is at the core of any real-world load balancer. HAProxy supports both passive (monitoring live traffic) and active (polling) health checks, ensuring that your application servers are available before traffic is sent to them. Its active health checks are extremely flexible and allow several modes, from basic port checking to sending full HTTP requests, and even communicating with agent software installed on the backend servers. HAProxy also supports non-HTTP, protocol-specific health checks for MySQL, Redis, PostgreSQL, LDAP, and SMTP.
In this release, active health checks have received a major overhaul. Previously, you would configure HTTP checks that specified a particular URL, HTTP version, and headers by using the option httpchk
directive:
backend servers
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ test.local
    server srv1 192.168.1.5:80 check
The syntax is complex, requiring carriage returns and newlines to be included. Now, you can configure HTTP check parameters using the http-check send
command instead:
backend servers
    option httpchk
    http-check send meth HEAD uri /health ver HTTP/1.1 hdr Host test.local
    server srv1 192.168.1.5:80 check
You can send POST requests too:
backend servers
    option httpchk
    http-check send meth POST uri /health hdr Content-Type "application/json;charset=UTF-8" hdr Host www.mwebsite.com body "{\"id\": 1, \"field\": \"value\"}"
    server srv1 192.168.1.5:80 check
There’s also the new http-check connect
directive, which lets you further fine-tune the health checks by enabling SNI, connecting over SSL/TLS, performing health checks over SOCKS4, and choosing the protocol, such as HTTP/2 or FastCGI. You can use its linger option to close a connection cleanly instead of sending an RST. Here’s an example where health checks are performed using HTTP/2 and SSL:
backend servers
    option httpchk
    http-check connect ssl alpn h2
    http-check send meth HEAD uri /health ver HTTP/2 hdr Host www.test.local
    server srv1 192.168.1.5:443 check
The tcp-check connect
directive, which is used for TCP checks, was updated too with all of these optional parameters. Additionally, you can use the {http|tcp}-check comment
directive to define a comment that will be reported in the logs if the http-check rule fails.
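For example, a minimal sketch; the comment annotates the rule that follows it and is reported if that rule fails:

```haproxy
backend servers
    option httpchk
    # This comment is attached to the next http-check rule and appears
    # in the logs should that rule fail
    http-check comment "health endpoint on the API tier"
    http-check send meth HEAD uri /health
    server srv1 192.168.1.5:80 check
```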
Additional power comes from the ability to query several endpoints during a single health check. In the following example, we make requests to two distinct services—one listening on port 8080 and the other on port 8081. We also use different URIs. If either endpoint fails to respond, the entire health check fails.
backend servers
    option httpchk
    http-check connect port 8080
    http-check send meth HEAD uri /health
    http-check connect port 8081
    http-check send meth HEAD uri /up
    server server1 127.0.0.1:80 check
Both http-check expect
and tcp-check expect
have been significantly expanded as well, exposing a lot of flexibility in how response data is analyzed during a health check. The first of those changes is the comment option, which supports defining a message to report if the health check fails. The next is the ability to specify the min-recv option, which defines the minimum amount of data required before HAProxy validates the response.
You can control the exact health check status that’s set when the http-check expect
rule is successful, hits an error, or times out. The specific codes you can use are described in the documentation. With the on-success and on-error parameters, you can set an informational message that will be reported in the logs when a rule is successfully evaluated or when an error occurs. Both of these options support log-format strings. When using http-check expect
you can define a string pattern, which can also use the log-format syntax, that the response body must contain for a successful health check.
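Putting a few of these options together in one sketch (the messages are illustrative):

```haproxy
backend servers
    option httpchk
    http-check send meth GET uri /health
    # Require at least 64 bytes of response before analyzing, tag the rule
    # with a comment, and log custom messages on success or failure
    http-check expect min-recv 64 comment "health-page" on-success "health check passed" on-error "health check failed" status 200
```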
Two additional directives have been added that allow you to set and unset custom variables during HTTP and TCP health checks: {http|tcp}-check set-var and {http|tcp}-check unset-var.
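Check variables live in their own check scope; this sketch mirrors the documented pattern of storing a port number for a later connect rule:

```haproxy
backend servers
    option httpchk
    # Store a value in the check-local scope, then use it in the connect rule
    http-check set-var(check.port) int(8080)
    http-check connect port var(check.port)
    http-check send meth HEAD uri /health
    server srv1 192.168.1.5:80 check
```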
Finally, MySQL-based health checks using the option mysql-check directive were also rebuilt on top of the new tcp-check rules and now default to a check compatible with MySQL 4.1 and newer clients when a username is defined.
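A sketch of such a check; the hypothetical haproxy_check user must exist on the MySQL server and be allowed to connect from the load balancer's address:

```haproxy
backend mysql_servers
    mode tcp
    # Performs a MySQL 4.1+ client-compatible login check as this user
    option mysql-check user haproxy_check
    server db1 192.168.1.20:3306 check
```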
The options for health checks are almost limitless now, so make sure to check out the documentation to learn more about unlocking their power.
Syslog Over TCP
You can collect HAProxy’s logs in a number of ways: send them to a syslog server, write them to a file or listening socket, write them to stdout / stderr, or store them in memory using HAProxy’s built-in ring buffer. That last method, using a ring buffer, got a boost in version 2.2. A new section, ring
, has been introduced, which allows you to define custom ring buffers that can be used as a target for logging and tracing.
A ring buffer is basically a first-in-first-out queue that has a fixed size. You can put messages into the ring buffer and then read them off using a lower priority background process. Or, you can store messages there and ignore them until you need them. A ring buffer will never consume more memory than it’s been allocated, so it’s the perfect place to store debug-level logs that you don’t need most of the time.
One way to use a ring buffer in HAProxy is to queue logs to syslog and then send them over TCP, which can be helpful when you want to ensure that every log line is processed and not dropped. TCP is a connection-oriented protocol, which means that it waits for confirmation that the other end received the message. A ring buffer ensures that this won’t slow down the main processing of HAProxy. It should be noted that if more than one server is added to the ring, each server will receive the exact same copy of the ring contents, and as such, the ring will progress at the speed of the slowest server. The recommended method for sending logs to multiple servers is to use one distinct ring per log server.
To begin using the ring buffer section and sending logs to a TCP-based syslog server, define the new ring
section as follows. Note that this example uses port 6514, which is commonly used for syslog over TCP (officially, it is the port assigned to syslog over TLS):
ring requests0
    description "request logs"
    format rfc3164
    maxlen 1200
    size 32764
    timeout connect 5s
    timeout server 10s
    server request-log 127.0.0.1:6514
Then, within a global
or frontend
section, you would add:
log ring@requests0 local7
You can also access the ring buffer contents using the Runtime API’s show events
command:
$ echo "show events requests0" |socat tcp-connect:127.0.0.1:9999 -
<189>Jun 14 15:58:33 haproxy[22071]: Proxy fe_main started.
<190>Jun 14 15:58:40 haproxy[22072]: ::ffff:127.0.0.1:55344 [14/Jun/2020:15:58:40.071] fe_main be_main/server1 0/0/0/1/1 200 799 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
Performance Improvements
Several performance related improvements have been made within this release. HAProxy will now automatically deduplicate ca-file
and crl-file
directives, which should improve overall startup speed. A 5-6% performance increase was observed on spliced traffic after developers added a thread-local pool of recently used pipes to improve cache locality and eliminate unnecessary allocation. They also found that generating unique-id values for ACLs was extremely slow (O(n^2)) and could take several seconds at startup when dealing with thousands of ACL patterns. ACL unique-id values are used within the Runtime API to identify ACLs and dynamically change their values. This was reworked and is now typically 100+ times faster. Kubernetes-based environments such as OpenShift, where configurations tend to be very large and reloads are frequent, will notice a significant performance gain.
The developers significantly reduced the number of syscalls per request for connections using keep-alive mode. When stopping HAProxy with multithreading enabled, a signal is now immediately broadcast, eliminating a 1-2 second pause that existed due to relying on other threads’ poll timeout. This will help in scenarios in which you need to reload often.
Memory pools now release memory when there is an abundance of unused objects after a traffic surge. This should result in an overall memory reduction for traffic loads that are spiky in nature.
The connection layer has seen several performance improvements, essentially resulting in fewer syscalls on average, primarily for epoll. Idle server connections can now be reused between threads, which reduces the number of file descriptors in architectures using a large number of threads and significantly increases the reuse rate. HAProxy will no longer close a client connection after an internal response code is served, such as a 401 or 503, unless requested. Status codes 400 (Bad Request) and 408 (Request Timeout) are excluded from this.
HAProxy will now also monitor how many idle connections are needed on a server and kill those that are not expected to be used based on previous period measurements. This should eliminate the previous behavior in which it periodically killed off half of the idle ones, forcing them to be recreated when under a sustained load. Also, a new directive pool-low-conn
allows for optimizing server connection pooling and can be tuned to indicate the number of idle connections to a server required before a thread will reuse a connection. When the idle connection count falls below this threshold, a thread may only use its own connections or create a new one. This is particularly noticeable in environments with backend servers that have sub-millisecond response times. At the time of writing, the ideal value found was twice the number of configured threads.
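Following the guidance above, a sketch for a hypothetical 4-thread deployment might set the threshold to 8; note that pool-low-conn is set per server:

```haproxy
backend servers
    # Allow cross-thread reuse until fewer than 8 idle connections remain
    server srv1 192.168.1.5:80 pool-low-conn 8
```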
It was observed that on servers that were 100% saturated and dealing with an excessive amount of traffic, the Runtime API could take tens of seconds to respond. The scheduler is now latency aware, which means that the Runtime API can be used regardless of what load HAProxy is under. The result is that on a machine saturating 16 threads at 100% while forwarding 90 Gbps, the Runtime API will still respond in 70 ms rather than a minute.
Observability & Debugging
Observability and the ability to track down issues are always a critical part of any serious software powering your infrastructure. That is one of the reasons why system architects and SREs around the world trust HAProxy to power their infrastructure and platforms.
The Runtime API has a new command show servers conn
that allows you to see the current and idle connection state of the servers within a backend. This output is mostly provided as a debugging tool and does not need to be routinely monitored or graphed.
In this release, the HAProxy Stats page reports connect, queue and response time metrics with more accuracy. Before, these numbers were an average over the last 1024 requests—which you can configure with the TIME_STATS_SAMPLES
compile-time flag. However, if you haven’t received that many requests yet, which is true right after reloading HAProxy since the counters reset, the average would include zeroes in the dataset. Now, HAProxy calculates the average over the actual number of requests received, until it reaches the configured TIME_STATS_SAMPLES
threshold. This will smooth out the graphs for those who reload often. The HAProxy Stats page also gained new fields that report the number of idle and used connections per server.
A new timing metric, %Tu, has been added, which will return the total estimated time as seen from the client, from the moment the proxy accepted the request to the moment both ends were closed, not including the idle time before the request began. This makes it more convenient to gauge a user’s end-to-end experience and spot slowness at a macro level.
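A sketch of a custom log-format that appends %Tu to the standard HTTP timing fields (the frontend name is hypothetical):

```haproxy
frontend fe_main
    bind :80
    # Standard HTTP log fields, plus the total user-perceived time (%Tu)
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %Tu"
```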
This release also improves HAProxy’s internal watchdog, which is used to detect deadlocks and kill a runaway process. It was previously dependent on Linux with threads enabled; it has now been expanded to support FreeBSD and no longer requires threading to be in use. On operating systems where it is possible and relevant, when the watchdog triggers, a call trace is produced on a best-effort basis:
call trace(20):
| 0x53e2dc [eb 16 48 63 c3 48 c1 e0]: wdt_handler+0x10c
| 0x800e02cfe [e8 5d 83 00 00 8b 18 8b]: libthr:pthread_sigmask+0x53e
| 0x800e022bf [48 83 c4 38 5b 41 5c 41]: libthr:pthread_getspecific+0xdef
| 0x7ffffffff003 [48 8d 7c 24 10 6a 00 48]: main+0x7fffffb416f3
| 0x801373809 [85 c0 0f 84 6f ff ff ff]: libc:__sys_gettimeofday+0x199
| 0x801373709 [89 c3 85 c0 75 a6 48 8b]: libc:__sys_gettimeofday+0x99
| 0x801371c62 [83 f8 4e 75 0f 48 89 df]: libc:gettimeofday+0x12
| 0x51fa0a [48 89 df 4c 89 f6 e8 6b]: ha_thread_dump_all_to_trash+0x49a
| 0x4b723b [85 c0 75 09 49 8b 04 24]: mworker_cli_sockpair_new+0xd9b
| 0x4b6c68 [85 c0 75 08 4c 89 ef e8]: mworker_cli_sockpair_new+0x7c8
| 0x532f81 [4c 89 e7 48 83 ef 80 41]: task_run_applet+0xe1
Building with -DDEBUG_MEM_STATS
now provides a new Runtime API command debug dev memstats
that dumps the malloc calls for each line of code. This can be helpful for tracking memory leaks and is accessible when expert-mode is set to on:
$ echo "expert-mode on; debug dev memstats;" |socat /var/run/haproxy.sock -
ev_epoll.c:260 CALLOC size: 9600 calls: 4 size/call: 2400
ssl_sock.c:4555 CALLOC size: 64 calls: 1 size/call: 64
ssl_sock.c:2735 MALLOC size: 342 calls: 3 size/call: 114
ssl_ckch.c:913 CALLOC size: 88 calls: 1 size/call: 88
ssl_ckch.c:773 CALLOC size: 56 calls: 1 size/call: 56
ssl_ckch.c:759 CALLOC size: 122 calls: 1 size/call: 122
cfgparse-ssl.c:1041 STRDUP size: 12 calls: 1 size/call: 12
cfgparse-ssl.c:1038 STRDUP size: 668 calls: 1 size/call: 668
cfgparse-ssl.c:253 STRDUP size: 12 calls: 1 size/call: 12
cfgparse-ssl.c:202 STRDUP size: 1336 calls: 2 size/call: 668
hlua.c:8007 REALLOC size: 15328 calls: 7 size/call: 2189
hlua.c:7997 MALLOC size: 137509 calls: 1612 size/call: 85
cfgparse.c:4098 CALLOC size: 256 calls: 8 size/call: 32
cfgparse.c:4075 CALLOC size: 600 calls: 15 size/call: 40
The debug
converter, which has been available since version 1.6, is a handy option that can aid in debugging captured input samples. Previously, it required compiling HAProxy with debug mode enabled. Now, it is always available and will send the output to a defined event sink.
The currently available event sinks are buf0, stdout, and stderr. By default, it will log to buf0, which is an internal, rotating buffer. One of the advantages of using the rotating buffer is that you can keep it enabled permanently without worrying about filling up the service logs or dropping logs entirely; it can be consulted on demand using the Runtime API.
Here’s an example of using the debug
converter to record IP addresses that are being tracked by a stick table:
tcp-request connection track-sc0 src,debug(track-sc)
Then, using the show events
Runtime API command to view the data:
$ echo "show events buf0"|socat /var/run/haproxy.sock -
<0>2020-06-10T20:54:59.960865 [debug] track-sc: type=ipv4 <192.168.1.17>
When emitting an alert at startup, HAProxy will now report the exact version and path of the executable. This is helpful on systems where more than one version of HAProxy may be installed; it helps ensure you are working with the appropriate binaries and configurations.
[NOTICE] 165/231825 (7274) : haproxy version is 2.2.0
[NOTICE] 165/231825 (7274) : path to executable is ./haproxy
A new command line flag has been added, “-dW”, also known as “zero warning mode”, which turns any warning emitted at startup into a fatal error. Another way to enable it is by defining zero-warning
within the global
section.
HTTP Actions
HAProxy’s HTTP actions are a powerful mechanism that allows you to take a defined action against a request; they can provide access control, header manipulation, path rewrites, redirects, and more. HAProxy has always allowed you to take action on a request, such as adding headers, before or after it has been processed by a backend application. However, it did not allow you to add custom headers to responses that were generated by HAProxy itself. This release introduces a new directive, http-after-response
, which is evaluated at the end of the response analysis, just before forwarding it to the client.
http-after-response set-header Via "%[res.ver] haproxy"
A new http-{request|response|after-response} action was added, strict-mode, which enables or disables strict rewriting mode for all rules that follow it. When strict mode is enabled, any rewrite failure triggers an internal error; otherwise, such errors are silently ignored. The purpose of strict rewriting mode is to make some rewrites optional while others must be performed for processing to continue. For example, a header that was too large for the buffer may previously have been silently ignored; now, it can fail and report an error.
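A sketch of scoping strictness within a frontend (the names and headers are hypothetical):

```haproxy
frontend fe_main
    bind :80
    # This rewrite may fail silently, e.g. if the buffer is too small
    http-request set-header X-Debug-Info "%[req.hdrs]"
    # From here on, any failed rewrite triggers an internal error instead
    http-request strict-mode on
    http-request set-header X-Forwarded-Proto http
    default_backend be_main
```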
A new http-request action, replace-path, has been introduced. This action is very similar to replace-uri except that it only acts on the path component. This should improve the experience for users who relied on replace-uri in HTTP/1 and found the behavior changed a bit with the introduction of HTTP/2, which uses an absolute URI.
An example of its usage is as follows:
# strip /foo, e.g. turn /foo/bar?q=1 into /bar?q=1
http-request replace-path /foo/(.*) /\1 if { url_beg /foo/ }
Security Hardening
HAProxy doesn’t need to call executables at runtime, except when using the external-check command directive, which allows you to use external programs for checks, though its use is strongly discouraged. In fact, in most setups, HAProxy isolates itself within an empty chroot environment. HAProxy now prevents the creation of new processes by default, effectively disabling the use of external programs for checks. This mitigates a whole class of potential attacks stemming from the inherent risks of allowing Lua scripts to fork commands using os.execute() and eliminates the potential for maliciously injected code to fork a process. If your environment requires the use of external programs for checks, you can re-enable this feature with the new global directive insecure-fork-wanted. Otherwise, attempting to use external-check command will result in the following alert message:
[ALERT] 167/172356 (639) : Failed to fork process for external health check (likely caused by missing ‘insecure-fork-wanted’): Resource temporarily unavailable. Aborting.
Setuid binaries execute with the permissions of the binary owner and are typically used to grant non-privileged users specific elevated privileges. There is rarely a valid reason to allow HAProxy to execute setuid binaries unless the user is well aware of the risks. HAProxy 2.2 now officially prevents the process from executing setuid binaries by default, preventing it from switching uids after the initial switch to the uid defined within the global section. This significantly reduces the risk of privilege escalation. To re-enable the execution of setuid binaries, use the new global directive insecure-setuid-wanted.
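If your environment genuinely depends on these behaviors, both opt-outs live in the global section; a minimal sketch:

```haproxy
global
    # Re-allow fork(), so 'external-check command' and Lua
    # os.execute() work again. Only enable this if you
    # understand and accept the risks.
    insecure-fork-wanted
    # Re-allow executing setuid binaries and switching uids
    # after the initial privilege drop.
    insecure-setuid-wanted
```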
New Sample Fetches & Converters
This table lists fetches that are new in HAProxy 2.2:
Name | Description |
| Returns the unique ID TLV sent by the client in the PROXY protocol header, if any. |
| Returns the HTTP response’s available body as a block of data. |
| Returns the length of the HTTP response available body in bytes. |
| Returns the advertised length of the HTTP response body in bytes. It will represent the advertised Content-Length header, or the size of the available data in case of chunked encoding. |
| Returns the current response headers as a string, including the last empty line separating the headers from the response body. |
| Returns the current response headers contained in preparsed binary form. This is useful for offloading some processing with SPOE. |
| Returns a string containing the current listening socket’s name, as defined with name on a bind line. |
| Returns the CLIENT_EARLY_TRAFFIC_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the CLIENT_HANDSHAKE_TRAFFIC_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the CLIENT_TRAFFIC_SECRET_0 as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the EXPORTER_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the EARLY_EXPORTER_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the SERVER_HANDSHAKE_TRAFFIC_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the SERVER_TRAFFIC_SECRET_0 as a hexadecimal string when the incoming connection was made over TLS 1.3. |
| Returns the DER formatted certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. |
| Returns the name of the algorithm used to generate the key of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. |
| Returns the end date presented by the server as a formatted string YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS transport layer. |
| Returns the start date presented by the server as a formatted string YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS transport layer. |
| When the outgoing connection was made over an SSL/TLS transport layer, returns the full distinguished name of the issuer of the certificate presented by the server when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. |
| When the outgoing connection was made over an SSL/TLS transport layer, returns the full distinguished name of the subject of the certificate presented by the server when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. |
| Returns the serial of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. |
| Returns the SHA-1 fingerprint of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. |
| Returns the name of the algorithm used to sign the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. |
| Returns the version of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. |
The ssl_{c,f}_{i,s}_dn fetches now also support LDAPv3 as an alternate output format. There are also a number of new sample fetches that expose the internals of HTX, which are explicitly intended for developer and debugging use.
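As one illustration of the new server-side certificate fetches (the backend name, server address, and ca-file path below are illustrative), details of the certificate presented by the server can be surfaced for debugging:

```haproxy
backend be_secure
    # TLS to the backend; the address and ca-file path are examples.
    server app1 192.0.2.10:443 ssl verify required ca-file /etc/haproxy/ca.pem
    # Expose the serial and notAfter date of the certificate the
    # server presented (serial is binary, so convert it to hex).
    http-response set-header X-Server-Cert-Serial %[ssl_s_serial,hex]
    http-response set-header X-Server-Cert-NotAfter %[ssl_s_notafter]
```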
The following converters have been added:
Name | Description |
| Cuts the string representation of the input sample on the first carriage return (‘\r’) or newline (‘\n’) character found. |
| Converts a binary input sample to a message digest. |
| Converts a binary input sample to a message authentication code with the given key. The result is a binary sample. |
| Converts the input integer value to its 32-bit binary representation in the network byte order. |
| Skips any characters from <chars> from the beginning of the string representation of the input sample. |
| Skips any characters from <chars> from the end of the string representation of the input sample. |
| Compares the contents of <var> with the input value as binary strings in constant time, which helps to protect against timing attacks. Returns a boolean indicating whether both binary strings match. |
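One possible combination of the new converters (a sketch only: the frontend, variable, and header names are hypothetical, and the HMAC key encoding should be checked against the converter documentation) is validating a client-supplied signature in constant time:

```haproxy
frontend fe_api
    bind :8080
    # Compute an HMAC-SHA256 of the request path and store it as
    # lowercase hex. "a2V5" is the example key "key", base64-encoded.
    http-request set-var(txn.sig) path,hmac(sha256,a2V5),hex,lower
    # Compare the client's X-Signature header against the computed
    # value in constant time to resist timing attacks.
    http-request deny unless { req.hdr(x-signature),lower,secure_memcmp(txn.sig) }
    default_backend be_api
```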
Lua
The following updates apply to Lua modules:
You can now prepend the lookup path for Lua modules using
lua-prepend-path
.
Example:
lua-prepend-path /usr/share/haproxy-lua/?/init.lua
lua-prepend-path /usr/share/haproxy-lua/?.lua
It is now possible to intercept HTTP messages from a Lua action and reply to clients.
local reply = txn:reply()
reply:set_status(400, "Bad request")
reply:add_header("content-type", "text/html")
reply:add_header("cache-control", "no-cache")
reply:add_header("cache-control", "no-store")
reply:set_body("<html><body><h1>invalid request</h1></body></html>")
txn:done(reply)
Lua declared actions can now yield using wake_time(). This function may be used to define a timeout when a Lua action returns act:YIELD. It is a way to force the script to re-execute after a short time (defined in milliseconds).
set_var and unset_var will now return a boolean indicating success.
A new parameter, ifexist, has been added to set_var, which allows a Lua developer to set variables that will be ignored unless the variable name was used elsewhere before.
The Server class now has a set_addr function, which you can use to change a backend server’s address and port.
A new function, is_resp, has been added to determine whether a channel is a response channel.
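A small sketch of the updated set_var behavior from a Lua action (the action and variable names are illustrative, and the third ifexist argument is assumed per the 2.2 Lua API):

```lua
-- Register an action that only updates txn.trace if something
-- earlier in the configuration already created that variable.
core.register_action("update_trace", { "http-req" }, function(txn)
    -- The third argument 'true' is the new ifexist flag: the set
    -- is ignored unless the variable already exists.
    local ok = txn:set_var("txn.trace", "seen-by-lua", true)
    -- set_var now returns a boolean indicating success.
    if not ok then
        txn:Info("txn.trace was not previously set; nothing stored")
    end
end)
```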
Testing
Integration with the Varnish test suite was released with HAProxy 1.9 and aids in detecting regressions. The number of regression tests has grown significantly since then. This release adds 38 new regression tests, bringing the total to 85.
Miscellaneous
The parser now supports quotes, braces, and square brackets in arguments. This means it is now possible to write regular expression character classes and groups in regex converters, such as regsub()
. Special characters must either be backslash-escaped or enclosed in single quotes outside of the double quotes. Here’s an example of how it can be used:
http-request redirect location '%[url,regsub("(foo|bar)([0-9]+)?","\2\1",i)]'
The parser will now also show you the location of where a parsing error has occurred:
[ALERT] 187/084525 (25816) : parsing [haproxy22.cfg:122]: unmatched quote at position 32: http-request set-var(txn.test) ‘str(abc)
The use-server
directive now supports rules using dynamic names:
use-server %[hdr(srv)] if { hdr(srv) -m found }
server app1 172.31.31.151:10000 check
server app2 172.31.31.174:10000 check
Then you can use curl to select a specific server:
$ curl -H 'srv: app2' https://localhost/
The sha2
converter was introduced in HAProxy 2.1; however, its bits argument was not properly validated at startup and, if given an invalid value, would instead fail during the conversion. The configuration parser now properly validates the bits argument and fails with an appropriate error message during startup:
[ALERT] 161/201555 (21136) : parsing [haproxy.cfg:67] : error detected in frontend ‘fe_main’ while parsing ‘http-response set-header’ rule : failed to parse sample expression <str(test),sha2(123)]> : invalid args in converter ‘sha2’ : Unsupported number of bits: ‘123’.
Sub-second and timezone fields have been added to the RFC 5424 log format.
The number of connections reported in the output of a stopping proxy now indicates cumulative connections rather than active connections:
[WARNING] 163/005319 (27731) : Proxy fe_main stopped (cumulated conns: FE: 162, BE: 0).
The Runtime API’s show table
command now supports filtering stick table output by multiple filters, allowing for filtering on many columns.
$ echo "show table fe_main data.http_req_cnt gt 1 data.http_req_rate gt 3" | socat tcp-connect:127.0.0.1:9999 -
# table: fe_main, type: ip, size:1048576, used:1
0x55e7888c2100: key=192.168.1.17 use=0 exp=7973 http_req_cnt=7 http_req_rate(10000)=7
Other changes include:
DNS Service Discovery will now reuse information available within the extension parts of an SRV record response.
The cookie directive now has an attr field for setting attributes on persistence cookies. This is helpful for adding the SameSite attribute, which is required in Chrome 80 and above.
The local peer name can be specified with localpeer within the peers section. This can be overridden with the -L parameter on startup.
The Runtime API now allows for escaping spaces.
ACLs can no longer be named “or”.
Error files that are larger than tune.bufsize will now emit a warning message on startup.
The http-request deny directive now supports returning status codes 404 Not Found, 410 Gone, and 413 Payload Too Large.
UUID random generation has been improved and is now thread safe.
A unique-id can now be sent and received in the PROXY Protocol for connection tracking purposes.
The default maxconn will now automatically be set based on the configured ulimit -n.
Invalid hex sequences now cause a fatal error.
The Python SPOA example code was updated to Python 3.
A new option, pp2-never-send-local, was added to revert the old bogus behavior on the server side when using proxy-protocol-v2 in health checks.
The overall code base has had significant work done on general reorganization, cleanups, and fixes.
Contributors
We want to thank each and every contributor who was involved in this release. Contributors help in various forms, such as discussing design choices, testing development releases, reporting detailed bugs, helping users on Discourse and the mailing list, managing the issue tracker and CI, classifying Coverity reports, providing documentation fixes and keeping the documentation in good shape, operating some of the infrastructure components used by the project, reviewing patches, and contributing code.
The following list doesn’t do justice to all of the amazing people who offer their time to the project, but we wanted to give a special shout out to individuals who have contributed code, and their area of contribution.
Contributor | AREA |
---|---|
Baptiste Assmann | BUG CLEANUP FEATURE |
Emeric Brun | BUG FEATURE |
David Carlier | BUILD |
Olivier Carrère | DOC |
Damien Claisse | FEATURE |
Daniel Corbett | BUG DOC |
Joseph C. Sible | FEATURE |
Gilchrist Dadaglo | FEATURE |
William Dauchy | BUG BUILD CLEANUP DOC FEATURE |
Marcin Deranek | FEATURE |
Dragan Dosen | BUG FEATURE |
Olivier Doucet | DOC |
Tim Düsterhus | BUG BUILD CLEANUP DOC FEATURE REGTESTS/CI |
Christopher Faulet | BUG CLEANUP DOC FEATURE REGTESTS/CI |
Dominik Froehlich | CLEANUP |
Patrick Gansterer | FEATURE |
Carl Henrik Lunde | OPTIM |
Emmanuel Hocdet | BUG CLEANUP FEATURE |
Olivier Houchard | BUG BUILD FEATURE |
Björn Jacke | DOC |
Bertrand Jacquin | BUG |
Christian Lachner | FEATURE |
William Lallemand | BUG BUILD CLEANUP DOC FEATURE REGTESTS/CI REORG |
Aleksandar Lazic | DOC |
Frédéric Lécaille | BUG |
Jérôme Magnin | BUG BUILD CLEANUP DOC FEATURE |
Adam Mills | DOC |
Nathan Neulinger | BUG |
Adis Nezirovic | BUG FEATURE |
Elliot Otchet | FEATURE |
Rosen Penev | BUG |
Julien Pivotto | DOC |
Gaetan Rivet | BUG FEATURE |
Ilya Shipitsin | BUG BUILD CLEANUP DOC REGTESTS/CI |
Balvinder Singh Rawat | DOC |
Willy Tarreau | BUG BUILD CLEANUP CONTRIB DOC FEATURE OPTIM REGTESTS/CI REORG |
Florian Tham | FEATURE |
Lukas Tribus | BUG BUILD DOC |
Martin Tzvetanov Grigorov | REGTESTS/CI |
Mathias Weiersmueller | DOC |
Miroslav Zagorac | CLEANUP DOC |
Kevin Zhu | BUG |
Ben51Degrees | BUG |