HAProxy Technologies is excited to announce the release of HAProxy 2.2, featuring fully dynamic SSL certificate storage, a native response generator, an overhaul of its health checking system, and advanced ring logging with syslog over TCP.


If you missed our live webinar, Ask Me Anything About HAProxy 2.2, you can watch it on-demand.


HAProxy 2.2 adds exciting features such as fully dynamic SSL certificate storage, a native response generator, advanced ring buffer logging with syslog over TCP, security hardening, and improved observability and debugging capabilities. It also touts more customizable error handling and several new features that integrate directly with HAProxy’s highly performant log-format capabilities, which allow you to build complex strings using HAProxy’s powerful built-in fetches and converters. The new features allow you to serve responses directly at the edge and generate custom error files on-the-fly, which can be incorporated directly into the new and improved, flexible health check system. This release comes in short succession to the HAProxy Data Plane API 2.0 release just last month.

This release was truly a community effort and could not have been made possible without all of the hard work from everyone involved in active discussions on the mailing list and the HAProxy project GitHub.

The HAProxy community provides code submissions covering new functionality and bug fixes, documentation improvements, quality assurance testing, continuous integration environments, bug reports, and much more. Everyone has done their part to make this release possible! If you’d like to join this amazing community, you can find it on GitHub, Slack, Discourse, and the HAProxy mailing list.

This release builds on the HAProxy 2.1 technical release and is an LTS release.

We’ve put together a complete HAProxy 2.2 configuration, which allows you to follow along and get started with the latest features right away. You will find the latest Docker images here.

HAProxy 2.2 Changelog

In this post, we’ll give you an overview of the following updates included in this release:

Dynamic SSL Certificate Storage
SSL/TLS Enhancements
Native Response Generator
Dynamic Error Handling
Health Check Overhaul
Syslog over TCP
Performance Improvements
Observability & Debugging
HTTP Actions
Security Hardening
New Sample Fetches & Converters

Dynamic SSL Certificate Storage

HAProxy 2.1 added the ability to update SSL certificates that had been previously loaded into memory by using the Runtime API. This has been expanded even further to allow full management of certificates using the in-memory dynamic storage, letting you easily create, delete, and update certificates on-the-fly. Note that changes made through the Runtime API live only in memory, so it’s best to have a separate step that also writes the updated certificates to disk; otherwise they will be lost on the next reload.
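As a sketch of the workflow, a certificate can be replaced over the Runtime API with socat; the new ssl cert and del ssl cert commands create and delete entries in the same way. The socket path and certificate paths below are assumptions for illustration:

```
# Upload the new certificate payload into a transaction...
echo -e "set ssl cert /etc/haproxy/certs/site.pem <<\n$(cat /tmp/renewed_site.pem)\n" | \
    socat /var/run/haproxy.sock -

# ...then atomically commit it so new connections use it
echo "commit ssl cert /etc/haproxy/certs/site.pem" | socat /var/run/haproxy.sock -
```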

You can also add certificates directly into a crt-list file from the Runtime API. If you are using a directory instead of crt-list file, replace the path below, /etc/haproxy/crt.lst, with your directory path.
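A minimal sketch of adding a brand-new certificate to a running instance (paths are illustrative; the certificate must be created and committed in memory before it can be attached to the crt-list):

```
# Create an empty certificate slot, fill it, and commit it
echo "new ssl cert /etc/haproxy/certs/example.com.pem" | socat /var/run/haproxy.sock -
echo -e "set ssl cert /etc/haproxy/certs/example.com.pem <<\n$(cat /tmp/example.com.pem)\n" | \
    socat /var/run/haproxy.sock -
echo "commit ssl cert /etc/haproxy/certs/example.com.pem" | socat /var/run/haproxy.sock -

# Attach it to the crt-list so a bind line using that list starts serving it
echo "add ssl crt-list /etc/haproxy/crt.lst /etc/haproxy/certs/example.com.pem" | \
    socat /var/run/haproxy.sock -
```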

It also supports showing all of the certificates that HAProxy has stored in memory with the show ssl cert command.

You can get detailed information about each certificate, such as its expiration date, allowing you to easily verify the certificates that your HAProxy load balancers are using.
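For example (socket and file paths are assumptions), listing the stored certificates and then inspecting one of them:

```
$ echo "show ssl cert" | socat /var/run/haproxy.sock -
# filename
/etc/haproxy/certs/site.pem

$ echo "show ssl cert /etc/haproxy/certs/site.pem" | socat /var/run/haproxy.sock -
# The detailed output includes fields such as the serial, notBefore/notAfter
# dates, subject, issuer, SAN entries, and the SHA-1 fingerprint.
```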

This also applies to certificates that are defined within a crt-list.

SSL/TLS Enhancements

Diffie-Hellman is a cryptographic algorithm used to exchange keys in many popular protocols, including HTTPS, SSH and others. HAProxy uses it when negotiating an SSL/TLS connection. Prior versions of HAProxy had generated the algorithm’s parameters using numbers 1024 bits in size. However, as demonstrated in the 2015 paper Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, there’s evidence that this is too weak. For some time, HAProxy emitted a warning about this, urging the user to set the tune.ssl.default-dh-param directive to at least 2048:

[WARNING] 162/105610 (14200) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear.

HAProxy now defaults to 2048, eliminating this warning on startup. All modern clients support 2048-bit parameters; it was noted that only older clients, such as Java 7 and earlier, may not. If necessary, you can still set tune.ssl.default-dh-param to 1024 explicitly, but for the vast majority of use cases the warning is now gone.

You can use the ssl-default-bind-curves global directive to specify the list of elliptic curve algorithms that are negotiated during the SSL/TLS handshake when using Elliptic-curve Diffie-Hellman Ephemeral (ECDHE). Whether you’re using ECDHE depends on the TLS cipher suite you’ve configured with ssl-default-bind-ciphers. When setting ssl-default-bind-curves, the elliptic curve algorithms are separated by colons, as shown here:
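For instance, a minimal sketch restricting negotiation to two common curves:

```
global
    # Colon-separated list of curves offered during the ECDHE handshake
    ssl-default-bind-curves X25519:P-256
```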

HAProxy 2.2 also sets a new default TLS version, since TLSv1.0 has been on its way out the door for quite some time and has been the culprit behind many popular attacks against TLS. In June 2018, the PCI DSS standard began requiring websites to use TLSv1.1 or above in order to comply. Following suit, all major browsers announced that they would deprecate TLSv1.0 and v1.1 in March 2020. To aid in the push for complete deprecation of TLSv1.0, HAProxy has selected TLSv1.2 as the new default minimum version. You can adjust this with the ssl-min-ver bind option, which can be applied globally through ssl-default-bind-options in the global section.
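If you still need to accept older clients, a sketch of lowering the minimum version globally:

```
global
    # Accept TLSv1.1 clients again (weakens the default; use only if required)
    ssl-default-bind-options ssl-min-ver TLSv1.1
```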

This version of HAProxy adds more flexibility when it comes to how you store your TLS certificates. Previously, HAProxy required you to specify the public certificate and its associated private key within the same PEM certificate file. Now, if a private key is not found in the PEM file, HAProxy will look for a file with the same name, but with a .key file extension and load it. That behavior can be changed with the ssl-load-extra-files directive within the global section.
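A sketch of controlling which companion files are loaded (the value all is shown; key, ocsp, issuer, bundle, and none are also accepted):

```
global
    # Look for companion files (.key, .ocsp, .issuer, bundles, ...)
    # next to each loaded certificate
    ssl-load-extra-files all
```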

It’s now easier to manage a certificate’s chain of trust. Before, if your certificate was issued by an intermediate certificate, you had to include the intermediate in the PEM file so that clients could verify the chain. Now, you can store the intermediate certificate in a separate file and specify its parent directory with issuers-chain-path. HAProxy will automatically complete the chain by matching the certificate with its issuer certificate from the directory. That can cut down on a lot of duplication in your certificate files, since you won’t need to include the issuer certificates in all of them.

There’s a new directive that makes OCSP stapling simpler. Before, to set up OCSP stapling, you would store your site’s certificate PEM file in a directory, along with its issuer certificate in a file with the same name but a .issuer extension. Then, you would periodically invoke the openssl ocsp command to request an OCSP response, referencing the issuer file with the -issuer parameter and the site’s certificate with the -cert parameter. HAProxy never sends files with a .issuer extension to clients. Doing so would cause no harm, but it would waste bandwidth sending a certificate that is likely already in the client’s certificate store. So, issuer files are used only when you manually call the openssl ocsp command with the -issuer parameter. In HAProxy 2.2, if the issuer is a root CA, you can simply include it in the site’s certificate file. Use the new global directive ssl-skip-self-issued-ca to keep the behavior of not sending it to the client during SSL/TLS communication; now your openssl ocsp command can point to this file for both the -issuer and -cert parameters.

HAProxy allows you to verify client certificates by storing the CA you used to sign them in a PEM file and referencing it with the ca-file argument on a bind line. However, in some cases you may want to authenticate a client certificate using an intermediate certificate, without providing the root CA too. HAProxy had required you to include the entire certificate chain in the file referenced by ca-file, all the way up to the root CA, which meant all intermediate CAs signed with this root CA would be accepted. Using the new ca-verify-file argument on a bind line, HAProxy now supports storing the root CA certificate in a completely separate file, which is used only to verify the intermediate CA.

When debugging, sometimes it is convenient to use a tool like Wireshark to decrypt the traffic. In order to do this, however, a key log is required. HAProxy 1.9 introduced support for fetching the SSL session master key through ssl_fc_session_key, and HAProxy 2.0 added support for fetching the client and server random data. However, these fetches only cover up to TLS 1.2. HAProxy 2.2 now supports fetching and logging the secrets necessary for decrypting TLS 1.3. First, you must enable the global directive tune.ssl.keylog on. See the New Sample Fetches & Converters section for information about the individual fetches.

Improvements have been made to startup times by de-duplicating the ca-file and crl-file directives.

Native Response Generator

Many times you want to return a file or response to the user as quickly as possible, directly from the edge of your infrastructure. HAProxy can now generate responses using the new http-request return action without forwarding the request to the backend servers. You can send local files from disk, as well as text that uses the log-format syntax, without resorting to hacks with errorfile directives and dummy backends. This may be a small file like a favicon or GIF, a simple message, or a complex response generated from HAProxy’s runtime information, such as one that shows the request headers that were received or the number of requests the client has made so far. Here’s an example of sending a favicon.ico file:
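A minimal sketch (the frontend name and file path are assumptions):

```
frontend fe_main
    bind :80
    # Serve the favicon straight from the edge, never touching a backend
    http-request return status 200 content-type image/x-icon \
        file /etc/haproxy/static/favicon.ico if { path /favicon.ico }
```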

Here’s a more complex example, which we test with curl:
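For instance, a response built with an lf-string, which is evaluated with the log-format syntax at request time (frontend name and URL path are illustrative):

```
frontend fe_main
    bind :80
    http-request return status 200 content-type text/plain \
        lf-string "Hello, you are connecting from %[src]\n" if { path /hello }
```

Testing it with curl from the local machine:

```
$ curl http://localhost/hello
Hello, you are connecting from 127.0.0.1
```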

Dynamic Error Handling

A new section, http-errors, has been introduced that allows you to define custom errors on a per-site basis. This is convenient when you would like to serve multiple sites from the same frontend but want to ensure that they each have their own custom error pages.
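A sketch of two per-site error sections (section names and file paths are assumptions):

```
http-errors website-1
    errorfile 404 /etc/haproxy/errorfiles/site1/404.http
    errorfile 503 /etc/haproxy/errorfiles/site1/503.http

http-errors website-2
    errorfile 404 /etc/haproxy/errorfiles/site2/404.http
    errorfile 503 /etc/haproxy/errorfiles/site2/503.http
```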

Then, in your frontend, add:
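One way to wire this up, assuming http-errors sections named website-1 and website-2 as described above (hostnames are illustrative), is to reference a section from a deny action so each site serves its own page:

```
frontend fe_sites
    bind :80
    # Deny with each site's own 404 page for paths outside the app
    http-request deny deny_status 404 errorfiles website-1 \
        if { req.hdr(host) -i site1.example.com } !{ path_beg /app }
    http-request deny deny_status 404 errorfiles website-2 \
        if { req.hdr(host) -i site2.example.com } !{ path_beg /app }
```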

These error sections can also be referenced directly using the new errorfiles directive in a frontend or backend.

Optionally, the new directive http-error status can be used, which defines a custom error message to use instead of the errors generated by HAProxy.
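As a sketch, replacing HAProxy's built-in 503 page with a log-format aware message (backend name and wording are assumptions):

```
backend be_site1
    # Override the generated 503 response for this backend only
    http-error status 503 content-type text/html \
        lf-string "<html><body>Maintenance in progress. Your address: %[src]</body></html>"
```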

There is now a unification of error handling between the http-request actions return, deny, and tarpit. You can handle a deny or return exactly the same way by specifying headers and a body independently, using raw text or log-format parameters. Error processing is now dynamic and allows you to define errorfile templates with log-format parameters, such as including a client’s unique-id within a response.

Health Check Overhaul

Health checking is at the core of any real-world load balancer. HAProxy supports both passive (monitoring live traffic) and active (polling) health checks, ensuring that your application servers are available before sending traffic to them. Its active health checks are extremely flexible and allow several modes, from basic port checking to sending full HTTP requests, and even communicating with agent software installed on the backend servers. HAProxy also supports non-HTTP, protocol-specific health checks for MySQL, Redis, PostgreSQL, LDAP, and SMTP.

In this release, active health checks have received a major overhaul. Previously, you would configure HTTP checks that specified a particular URL, HTTP version, and headers by using the option httpchk directive:
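The legacy syntax looked something like this (hostname and path are illustrative); note the escaped spaces and literal \r\n separators:

```
backend be_servers
    option httpchk GET /health HTTP/1.1\r\nHost:\ example.com\r\nAccept:\ application/json
    server server1 192.168.0.10:80 check
```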

The syntax is complex, requiring carriage returns and newlines to be embedded in the directive. Now, you can configure HTTP check parameters using the http-check send directive instead:
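An equivalent check with the new directive (names and paths are assumptions for illustration):

```
backend be_servers
    option httpchk
    http-check send meth GET uri /health ver HTTP/1.1 \
        hdr Host example.com hdr Accept application/json
    server server1 192.168.0.10:80 check
```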

You can send POST requests too:
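For example, a sketch of a POST health check with a JSON body (endpoint and payload are illustrative):

```
backend be_servers
    option httpchk
    http-check send meth POST uri /health \
        hdr Content-Type application/json body '{"command":"health"}'
    server server1 192.168.0.10:80 check
```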

There’s also the new http-check connect directive, which lets you further fine tune the health checks by enabling SNI, connecting over SSL/TLS, performing health checks over SOCKS4, and choosing the protocol, such as HTTP/2 or FastCGI. You can use its linger option to close a connection cleanly instead of sending a RST. Here’s an example where health checks are performed using HTTP/2 and SSL:
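A sketch of such a check, negotiating HTTP/2 via ALPN over TLS (server address and SNI value are assumptions):

```
backend be_servers
    option httpchk
    http-check connect ssl sni example.com alpn h2
    http-check send meth GET uri /health
    http-check expect status 200
    server server1 192.168.0.10:443 check
```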

The tcp-check connect directive, which is used for TCP checks, was updated too with all of these optional parameters. Additionally, you can use the {http|tcp}-check comment directive to define a comment that will be reported in the logs if the http-check rule fails.

Additional power comes from the ability to query several endpoints during a single health check. In the following example, we make requests to two distinct services: one listening on port 8080 and the other on port 8081. We also use different URIs. If either endpoint fails to respond, the entire health check fails.
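A sketch of that chained check (ports, URIs, and server address are illustrative):

```
backend be_servers
    option httpchk
    # First endpoint
    http-check connect port 8080
    http-check send meth GET uri /health
    http-check expect status 200
    # Second endpoint; both must pass for the server to be considered up
    http-check connect port 8081
    http-check send meth GET uri /ready
    http-check expect status 200
    server server1 192.168.0.10:8080 check
```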

Both http-check expect and tcp-check expect have been significantly expanded as well, exposing a lot of flexibility in how response data is analyzed during a health check. The first of those changes is the comment option, which supports defining a message to report if the health check fails. The next is the ability to specify the min-recv option, which defines the minimum amount of data required before HAProxy validates the response.

You can control the exact health check status that’s set when the http-check expect rule is successful, hits an error, or times out. The specific codes you can use are described in the documentation. With the on-success and on-error parameters, you can set an informational message that will be reported in the logs when a rule is successfully evaluated or when an error occurs. Both of these options support log-format strings. When using http-check expect you can define a string pattern, which can also use the log-format syntax, that the response body must contain for a successful health check.

Two additional directives have been added that allow you to set and unset custom variables during HTTP and TCP health checks. They are {http|tcp}-check set-var and {http|tcp}-check unset-var.

Finally, MySQL based health checks using the option mysql-check directive were also rebuilt on top of the new tcp-check rules and will now default to a MySQL 4.1 and above client compatible check when a username is defined.

The options for health checks are almost limitless now, so be sure to check out the documentation to learn more about unlocking their power.

Syslog over TCP

You can collect HAProxy’s logs in a number of ways: send them to a syslog server, write them to a file or a listening socket, write them to stdout / stderr, or store them in memory using HAProxy’s built-in ring buffer. That last method, the ring buffer, got a boost in version 2.2. A new section, ring, has been introduced, which allows you to define custom ring buffers that can be used as a target for logging and tracing.

A ring buffer is basically a first-in first-out queue that has a fixed size. You can put messages into the ring buffer and then read them off using a lower priority background process. Or, you can store messages there and ignore them until you need them. A ring buffer will never consume more memory than it’s been allotted, so it’s the perfect place to store debug-level logs that you don’t need most of the time.

One way to use a ring buffer in HAProxy is to queue logs and then forward them to syslog over TCP, which is helpful when you want to ensure that every log line is processed and not dropped. TCP is a connection-oriented protocol, which means that it waits for confirmation that the other end received the message. A ring buffer ensures that this won’t slow down the main processing of HAProxy. Note that if more than one server is added to a ring, each server receives the exact same copy of the ring contents, and as such the ring progresses at the speed of the slowest server. The recommended method for sending logs to multiple servers is to use one distinct ring per log server.

To begin using the ring buffer section and sending logs to a TCP-based syslog server, define the new ring section as follows. This example sends to a syslog server listening on TCP port 6514 (the IANA-registered port for syslog over TLS; plain TCP syslog deployments often use port 601 or a site-specific port):
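A sketch of such a section (ring name, sizes, and server address are assumptions):

```
ring myring
    description "buffer for TCP syslog"
    format rfc3164
    maxlen 1200
    size 32764
    timeout connect 5s
    timeout server 10s
    # The syslog server receiving the buffered messages over TCP
    server mysyslogsrv 192.168.0.50:6514
```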

Then, within a global or frontend section, you would add:
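Assuming a ring section named myring as sketched above, the log line targets it with the ring@ prefix:

```
global
    # Send logs into the ring buffer instead of directly to a syslog socket
    log ring@myring local0
```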

You can also access the ring buffer contents using the Runtime API’s show events command:
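For example (socket path and ring name are assumptions):

```
$ echo "show events myring" | socat /var/run/haproxy.sock -
```

Adding the -w flag to show events waits for new messages, similar to tail -f.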

Performance Improvements

Several performance-related improvements have been made in this release. HAProxy will now automatically deduplicate ca-file and crl-file directives, which should improve overall startup speed. A 5-6% performance increase was observed on spliced traffic after developers added a thread-local pool of recently used pipes to improve cache locality and eliminate unnecessary allocation. They also found that generating a unique-id value for ACLs was extremely slow (O(n^2)) and could take several seconds at startup when dealing with thousands of ACL patterns. ACL unique-id values are used within the Runtime API to identify ACLs and dynamically change their values. This was reworked and is now typically 100+ times faster. Kubernetes-based environments such as OpenShift, where configurations tend to be very large and reloads are frequent, will notice a significant performance gain.

The developers significantly reduced the number of syscalls per request for a connection using keep-alive mode. When stopping HAProxy with multithreading enabled, a signal is now immediately broadcast, eliminating a 1-2 second pause that existed due to relying on other threads’ poll timeout. This will help in scenarios in which you may need to reload often.

Memory pools will now be released when there is an abundance of objects left over after a traffic surge. This should result in an overall memory reduction for traffic loads that are spiky in nature.

The connection layer has seen several performance improvements, essentially resulting in fewer syscalls on average, primarily for epoll. Idle server connections can now be reused between threads, which reduces the number of file descriptors in architectures using a large number of threads and significantly increases the reuse rate. HAProxy will no longer close a client connection after an internal response code is served, such as a 401 or 503, unless requested. Status codes 400 (Bad Request) and 408 (Request Timeout) are excluded from this.

HAProxy will now also monitor how many idle connections are needed on a server and kill those that are not expected to be used, based on measurements from previous periods. This eliminates the previous behavior in which it periodically killed off half of the idle connections, forcing them to be recreated under sustained load. Also, a new directive, pool-low-conn, allows for optimizing server connection pooling; it sets the number of idle connections to a server required before a thread will reuse a connection. When the idle connection count is lower, a thread may only use its own connections or create a new one. This is particularly noticeable in environments with backend servers that have sub-millisecond response times. At the time of writing, the ideal value found was twice the number of configured threads.

It was observed on servers that were 100% saturated and dealing with an excessive amount of traffic that the Runtime API could take tens of seconds to respond. The scheduler is now latency-aware, which means that the Runtime API can be used regardless of the load HAProxy is under. The result is that on a machine saturating 16 threads at 100% while forwarding 90 Gbps, the Runtime API will still respond in 70 ms rather than a minute.

Observability & Debugging

Observability and the ability to track down issues are always critical in any serious software powering your infrastructure. That is one of the reasons why system architects and SREs around the world trust HAProxy to power their infrastructure and platforms.

The Runtime API has a new command show servers conn that allows you to see the current and idle connection state of the servers within a backend. This output is mostly provided as a debugging tool and does not need to be routinely monitored or graphed.

In this release, the HAProxy Stats page reports connect, queue and response time metrics with more accuracy. Before, these numbers were an average over the last 1024 requests—which you can configure with the TIME_STATS_SAMPLES compile-time flag. However, if you haven’t had that many requests yet, which is true after reloading HAProxy, since the counters reset, then the average would include zeroes within the dataset. Now, HAProxy calculates the average using the sum of the actual number of requests, until it reaches the configured TIME_STATS_SAMPLES threshold. This will smooth out the graphs for those who reload often. The HAProxy Stats page also gained new fields that report the number of idle and used connections per server.

A new timing metric, %Tu, has been added, which will return the total estimated time as seen from the client, from the moment the proxy accepted the request to the moment both ends were closed, not including the idle time before the request began. This makes it more convenient to gauge a user’s end-to-end experience and spot slowness at a macro level.

This release also improves on HAProxy’s internal watchdog, which is used to detect deadlocks and kill a runaway process. It was previously dependent on Linux with threads enabled; it has now been expanded to support FreeBSD and no longer requires threading to be in use. On operating systems where it is possible and relevant, the watchdog will produce a call trace on a best-effort basis when it triggers:

call trace(20):
| 0x53e2dc [eb 16 48 63 c3 48 c1 e0]: wdt_handler+0x10c
| 0x800e02cfe [e8 5d 83 00 00 8b 18 8b]: libthr:pthread_sigmask+0x53e
| 0x800e022bf [48 83 c4 38 5b 41 5c 41]: libthr:pthread_getspecific+0xdef
| 0x7ffffffff003 [48 8d 7c 24 10 6a 00 48]: main+0x7fffffb416f3
| 0x801373809 [85 c0 0f 84 6f ff ff ff]: libc:__sys_gettimeofday+0x199
| 0x801373709 [89 c3 85 c0 75 a6 48 8b]: libc:__sys_gettimeofday+0x99
| 0x801371c62 [83 f8 4e 75 0f 48 89 df]: libc:gettimeofday+0x12
| 0x51fa0a [48 89 df 4c 89 f6 e8 6b]: ha_thread_dump_all_to_trash+0x49a
| 0x4b723b [85 c0 75 09 49 8b 04 24]: mworker_cli_sockpair_new+0xd9b
| 0x4b6c68 [85 c0 75 08 4c 89 ef e8]: mworker_cli_sockpair_new+0x7c8
| 0x532f81 [4c 89 e7 48 83 ef 80 41]: task_run_applet+0xe1

Building with -DDEBUG_MEM_STATS now provides a new Runtime API command debug dev memstats that dumps the malloc calls for each line of code. This can be helpful for tracking memory leaks and is accessible when expert-mode is set to on:
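A sketch of invoking it (socket path is an assumption; both commands can be sent in one session, separated by a semicolon):

```
$ echo "expert-mode on; debug dev memstats" | socat /var/run/haproxy.sock -
```

The output lists allocation counters per source file and line, which makes it easier to spot a code path that keeps allocating without freeing.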

The debug converter, which has been available since version 1.6, is a handy option that can aid in debugging captured input samples. Previously, it required compiling HAProxy with debug mode enabled. Now, it is always available and will send the output to a defined event sink.

The currently available event sinks are buf0, stdout and stderr. By default, it will log to buf0, which is an internal, rotating buffer. One of the advantages of using the rotating buffer is that you can keep it enabled permanently without worrying about filling up the service logs or dropping logs entirely; it can be consulted on demand using the Runtime API.

Here’s an example of using the debug converter to record IP addresses that are being tracked by a stick table:
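A minimal sketch (backend name, table parameters, and the "track-sc0" prefix are assumptions; the converter signature is debug([prefix[,destination]])):

```
backend be_servers
    stick-table type ip size 100k expire 30s store conn_cur
    # Tap the tracked sample and send a copy to the buf0 event sink
    http-request track-sc0 src,debug(track-sc0,buf0)
    server server1 192.168.0.10:80 check
```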

Then, using the show events Runtime API command to view the data:
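Reading the buf0 sink over the Runtime API (socket path is an assumption; the exact output format may differ slightly between builds):

```
$ echo "show events buf0" | socat /var/run/haproxy.sock -
# output lines resemble: [debug] track-sc0: type=ipv4 <203.0.113.10>
```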

When emitting an alert at startup, HAProxy will now report the exact version and path of the executable. This is helpful on systems where more than one version of HAProxy may be installed; it helps ensure you are working with the appropriate binaries and configurations.

[NOTICE] 165/231825 (7274) : haproxy version is 2.2.0
[NOTICE] 165/231825 (7274) : path to executable is ./haproxy

A new command-line flag has been added, -dW, also known as "zero warning mode", which turns any warning emitted at startup into a fatal error. Another way to enable it is by defining zero-warning within the global section.

HTTP Actions

HAProxy’s HTTP actions are a powerful mechanism that allow you to take a defined action against a request; they can provide access control, header manipulation, path rewrites, redirects, and more. HAProxy has always allowed you to take action on a request, such as to add headers before or after it has been processed by a backend application. However, it did not allow you to add custom headers to responses that were generated by HAProxy itself. This release introduces a new directive, http-after-response, which is evaluated at the end of the response analysis, just before the response is forwarded to the client.

A new http-{request|response|after-response} action was added, strict-mode, which enables or disables a strict rewriting mode on all rules that follow it. When strict mode is enabled, any rewrite failure triggers an internal error. Otherwise, such errors are silently ignored. The purpose of strict rewriting mode is to make some rewrites optional while others must be performed to continue the response processing. For example, if a header was too large for the buffer it may be silently ignored. Now, it can fail and report an error.

A new http-request action, replace-path, has been introduced. This action is very similar to replace-uri except that it only acts on the path component. This should improve the experience for users who relied on replace-uri in HTTP/1 and found the behavior changed a bit with the introduction of HTTP/2, which uses an absolute URI.

An example of its usage is as follows:
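A sketch that strips an /api prefix before forwarding (frontend and backend names are assumptions):

```
frontend fe_main
    bind :80
    # Rewrite /api/users to /users; only the path component is matched
    http-request replace-path /api/(.*) /\1
    default_backend be_servers
```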

Security Hardening

HAProxy doesn’t need to call executables at runtime, except when using the external-check command directive, which allows you to use external programs for checks and which you are strongly recommended against using. In fact, in most setups, HAProxy isolates itself within an empty chroot environment. HAProxy will now prevent the creation of new processes by default, effectively disabling the use of external programs for checks entirely. This mitigates a whole class of potential attacks stemming from the inherent risks of allowing Lua scripts to fork commands using os.execute(), and it eliminates the potential for maliciously injected code to fork a process. If your environment requires the use of external programs for checks, you can re-enable this capability with the new global directive insecure-fork-wanted. Otherwise, attempting to use external-check command will result in the following alert message:

[ALERT] 167/172356 (639) : Failed to fork process for external health check (likely caused by missing 'insecure-fork-wanted'): Resource temporarily unavailable. Aborting.

Setuid binaries allow users to execute binaries with the permissions of the binary owner and are typically used to allow non-privileged users access to use special privileges. There’s typically not a valid reason to allow HAProxy to execute setuid binaries without the user being well aware of the risks. HAProxy 2.2 now officially prevents the process from executing setuid binaries by default, preventing it from switching uids after the initial switch to the uid defined within the global section. This significantly reduces the risk of privilege escalation. To re-enable the execution of setuid binaries you can use the new global directive insecure-setuid-wanted.

New Sample Fetches & Converters

This table lists fetches that are new in HAProxy 2.2:

Name Description
fc_pp_unique_id Returns the unique ID TLV sent by the client in the PROXY protocol header, if any.
res.body Returns the HTTP response’s available body as a block of data.
res.body_len Returns the length of the HTTP response available body in bytes.
res.body_size Returns the advertised length of the HTTP response body in bytes. It will represent the advertised Content-Length header, or the size of the available data in case of chunked encoding.
res.hdrs Returns the current response headers as a string, including the last empty line separating headers from the response body.
res.hdrs_bin Returns the current response headers contained in preparsed binary form. This is useful for offloading some processing with SPOE.
so_name Returns a string containing the current listening socket’s name, as defined with name on a bind line.
ssl_fc_client_early_traffic_secret Return the CLIENT_EARLY_TRAFFIC_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_fc_client_handshake_traffic_secret Return the CLIENT_HANDSHAKE_TRAFFIC_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_fc_client_traffic_secret_0 Return the CLIENT_TRAFFIC_SECRET_0 as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_fc_exporter_secret Return the EXPORTER_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_fc_early_exporter_secret Return the EARLY_EXPORTER_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_fc_server_handshake_traffic_secret Return the SERVER_HANDSHAKE_TRAFFIC_SECRET as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_fc_server_traffic_secret_0 Return the SERVER_TRAFFIC_SECRET_0 as a hexadecimal string when the incoming connection was made over TLS 1.3.
ssl_s_der Returns the DER formatted certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_key_alg Returns the name of the algorithm used to generate the key of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_notafter Returns the end date presented by the server as a formatted string YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_notbefore Returns the start date presented by the server as a formatted string YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_i_dn([<entry>[,<occ>[,<format>]]]) When the outgoing connection was made over an SSL/TLS transport layer, returns the full distinguished name of the issuer of the certificate presented by the server when no <entry> is specified, or the value of the first given entry found from the beginning of the DN.
ssl_s_s_dn When the outgoing connection was made over an SSL/TLS transport layer, returns the full distinguished name of the subject of the certificate presented by the server when no <entry> is specified, or the value of the first given entry found from the beginning of the DN.
ssl_s_serial Returns the serial of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_sha1 Returns the SHA-1 fingerprint of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_sig_alg Returns the name of the algorithm used to sign the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer.
ssl_s_version Returns the version of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer.

The ssl_{c,f}_{i,s}_dn fetches now also support LDAPv3 as an alternate output format. There are also a number of new sample fetches that expose the internals of HTX, which are explicitly intended for developer and debugging use.

The following converters have been added:

Name Description
cut_crlf Cuts the string representation of the input sample on the first carriage return (‘\r’) or newline (‘\n’) character found.
digest(<algorithm>) Converts a binary input sample to a message digest.
hmac(<algorithm>, <key>) Converts a binary input sample to a message authentication code with the given key. The result is a binary sample.
htonl Converts the input integer value to its 32-bit binary representation in the network byte order.
ltrim(<chars>) Skips any characters from <chars> from the beginning of the string representation of the input sample.
rtrim(<chars>) Skips any characters from <chars> from the end of the string representation of the input sample.
secure_memcmp(<var>) Compares the contents of <var> with the input value as binary strings in constant time, which helps to protect against timing attacks. Returns a boolean indicating whether both binary strings match.
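As an illustration of the new converters, the sketch below emits a hex-encoded digest and HMAC of the request path in request headers. The header names are made up, and the hmac() key is a sample base64-encoded secret ("mysecret"), not a recommendation:

```
http-request set-header X-Path-Digest "%[path,digest(sha256),hex]"
http-request set-header X-Path-Sig    "%[path,hmac(sha256,bXlzZWNyZXQ=),hex]"
```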


The following updates apply to Lua modules:

  • You can now prepend the lookup path for Lua modules using lua-prepend-path.
  • It is now possible to intercept HTTP messages from a Lua action and reply to clients.
  • Lua declared actions can now yield using wake_time(). This function may be used to define a timeout when a Lua action returns act:YIELD. It is a way to force the script to re-execute after a short time (defined in milliseconds).
  • set_var and unset_var will now return a boolean indicating success.
  • A new parameter, ifexist, has been added to set_var, which allows a Lua developer to set variables that will be ignored unless the variable name was used elsewhere before.
  • The Server class now has a set_addr function, which you can use to change a backend server’s address and port.
  • A new function, is_resp, has been added to determine whether a channel is a response channel.
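For example, lua-prepend-path takes a pattern containing a ? placeholder that is substituted with the module name when it is required. A hypothetical global section (paths are illustrative):

```
global
    # Search /opt/haproxy-lua for Lua modules before the default locations
    lua-prepend-path /opt/haproxy-lua/?.lua
    lua-load /opt/haproxy-lua/myscript.lua
```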


Integration with the Varnish test suite (VTest) was introduced in HAProxy 1.9 and aids in detecting regressions. The number of regression tests has grown significantly since then: this release adds 38 new regression tests, bringing the total to 85.


The configuration parser now supports quotes, braces, and square brackets in arguments. This makes it possible to write regular expression character classes and groups in converters such as regsub(). Such arguments require either backslash-escaping the special characters or wrapping the double-quoted argument in single quotes. Here’s an example of how it can be used:
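A minimal sketch, wrapping single quotes around an argument whose double-quoted regex contains a character class (the variable name is illustrative):

```
# Strip every run of digits from a literal string and store the result
http-request set-var(txn.digits_stripped) 'str(a1b2c3),regsub("[0-9]+","",g)'
```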

The parser will now also show you the location of where a parsing error has occurred:

[ALERT] 187/084525 (25816) : parsing [haproxy22.cfg:122]: unmatched quote at position 32: http-request set-var(txn.test) 'str(abc)

The use-server directive now supports rules using dynamic names:
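A minimal sketch, assuming a hypothetical request header named srv carries the desired server name:

```
backend be_app
    # Route to the server named in the "srv" header when one is present
    use-server %[req.hdr(srv)] if { req.hdr(srv) -m found }
    server app1 10.0.0.3:8080 check
    server app2 10.0.0.4:8080 check
```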

Then you can use curl to select a specific server:
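Assuming a rule that reads the server name from a hypothetical srv request header, something like the following would pin the request to the server named app2:

```
$ curl -H 'srv: app2' http://localhost/
```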

The sha2 converter was introduced in HAProxy 2.1; however, its bits argument was not validated at startup and, if given an invalid value, would instead fail at runtime during conversion. The configuration parser now properly validates the bits argument and fails with an appropriate error message during startup:

[ALERT] 161/201555 (21136) : parsing [haproxy.cfg:67] : error detected in frontend 'fe_main' while parsing 'http-response set-header' rule : failed to parse sample expression <str(test),sha2(123)> : invalid args in converter 'sha2' : Unsupported number of bits: '123'.
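For reference, the supported bit widths are 224, 256, 384, and 512. A valid expression might look like the following sketch (the variable name is illustrative):

```
# Hash a literal string with SHA-256 and store the hex-encoded result
http-request set-var(txn.hash) str(test),sha2(256),hex
```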

Sub-second precision and timezone fields have been added to the RFC 5424 log format’s timestamp.

The number of connections reported in the output of a stopping proxy now indicates cumulative connections rather than active connections:

[WARNING] 163/005319 (27731) : Proxy fe_main stopped (cumulated conns: FE: 162, BE: 0).

The Runtime API’s show table command now supports combining multiple filters, allowing you to filter stick table output on several data columns at once.
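For example, a stick table could be filtered on two data columns at once. In this sketch, the table name, thresholds, and socket path are all illustrative, and the table is assumed to track both data types:

```
$ echo "show table be_app data.conn_cur gt 0 data.http_req_rate gt 100" | \
      socat stdio /var/run/haproxy.sock
```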

Other changes include:

  • DNS Service Discovery will now reuse information available within the extension parts of an SRV record response.
  • The cookie directive now has an attr field for setting attributes on persistence cookies. This is helpful for adding the SameSite attribute, which is required in Chrome 80 and above.
  • The local peer name can be specified with localpeer in the global section. This can be overridden with the -L parameter on startup.
  • The Runtime API now allows for escaping spaces.
  • ACLs can no longer be named “or”.
  • Error files that are larger than tune.bufsize will now emit a warning message on startup.
  • The http-request deny directive now supports returning status codes 404 Not Found, 410 Gone, and 413 Payload Too Large.
  • UUID random generation has been improved and is now thread safe.
  • A unique-id can now be sent and received in the PROXY Protocol for connection tracking purposes.
  • The default maxconn will now automatically be set based on the configured ulimit -n.
  • Invalid hex sequences now cause a fatal error.
  • The Python SPOA example code was updated to Python 3.
  • A new option, pp2-never-send-local, was added to revert the old bogus behavior on the server side when using proxy-protocol-v2 in health checks.
  • The overall code base has had significant work done on general reorganization, cleanups and fixes.
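As an example of the new cookie attr field mentioned above, the sketch below adds SameSite and Secure attributes to a persistence cookie (the backend and server names are illustrative):

```
backend be_app
    cookie SRV insert indirect nocache attr "SameSite=None" attr "Secure"
    server app1 10.0.0.3:8080 check cookie app1
    server app2 10.0.0.4:8080 check cookie app2
```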


We want to thank each and every contributor who was involved in this release. Contributors help in many forms: discussing design choices, testing development releases and reporting detailed bugs, helping users on Discourse and the mailing list, managing the issue trackers and CI, classifying Coverity reports, providing documentation fixes and keeping the documentation in good shape, operating some of the infrastructure components used by the project, reviewing patches, and contributing code.

The following list doesn’t do justice to all of the amazing people who offer their time to the project, but we wanted to give a special shout out to individuals who have contributed code, and their area of contribution.

Contributor Area
David Carlier BUILD
Olivier Carrère DOC
Damien Claisse FEATURE
Daniel Corbett BUG DOC
Joseph C. Sible FEATURE
Gilchrist Dadaglo FEATURE
Marcin Deranek FEATURE
Dragan Dosen BUG FEATURE
Olivier Doucet DOC
Dominik Froehlich CLEANUP
Patrick Gansterer FEATURE
Carl Henrik Lunde OPTIM
Olivier Houchard BUG BUILD FEATURE
Björn Jacke DOC
Bertrand Jacquin BUG
Christian Lachner FEATURE
Aleksandar Lazic DOC
Frédéric Lécaille BUG
Adam Mills DOC
Nathan Neulinger BUG
Adis Nezirovic BUG FEATURE
Elliot Otchet FEATURE
Rosen Penev BUG
Julien Pivotto DOC
Gaetan Rivet BUG FEATURE
Balvinder Singh Rawat DOC
Florian Tham FEATURE
Lukas Tribus BUG BUILD DOC
Martin Tzvetanov Grigorov REGTESTS/CI
Mathias Weiersmueller DOC
Miroslav Zagorac CLEANUP DOC
Kevin Zhu BUG
Ben51Degrees BUG