Announcing HAProxy Data Plane API 2.5
Version Update

The focus of the 2.5 version was on expanding support for HAProxy configuration keywords, and that’s where most of the effort during this release cycle was spent. We will continue doing that during the next couple of versions to gain complete feature parity with both the HAProxy configuration and Runtime API so that you can use the Data Plane API as a full-featured way to configure HAProxy. HAProxy is a highly versatile product with a vast number of options, and we are committed to making the Data Plane API just as versatile.

While primarily focused on configuration keyword support, we also looked for ways to improve the codebase. Version 2.5 brings fixes for data races and possible memory leaks, as well as optimizations to our internal JSON parsing for better performance.

We are also looking to make contributing easier. So, our Makefile now has a new target that simplifies code generation from the specification and is more straightforward for future contributors.

Check out the latest HAProxy Data Plane API documentation.

Register for our live What’s New in the HAProxy Data Plane API 2.5 Webinar to learn more about this release.

Extended HAProxy Configuration Keywords Support

We focused on covering more HAProxy keywords with the goal of making the API a full-fledged way to configure HAProxy. In that area, version 2.5 brings a lot of improvements.

HTTP & TCP checks as a resource

The http-check and tcp-check configuration keywords change how HAProxy polls backend servers to monitor their health. For example, you can use http-check to change the URL where health check probes are sent, set the HTTP verb to use, or even send a JSON message as part of the health-checking request.

Previous releases of the HAProxy Data Plane API had only basic support for the http-check keyword and no support for the tcp-check keyword. In those versions, you could add an http-check line to a backend section by calling the /v2/services/haproxy/configuration/backends endpoint, as in the following example where we add one to a backend named test:

$ curl -X PUT \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"name": "test",
"mode": "http",
"http-check": {
"type": "expect",
"match": "status",
"pattern": "200,201,300-310"
}
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/backends/test?transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"

This would result in the following HAProxy configuration:

backend test
mode http
http-check expect status 200,201,300-310

This approach didn’t let you add multiple http-check lines, which prevented you from configuring something like this:

backend test
mode http
http-check connect
http-check send meth GET uri /health ver HTTP/1.1 hdr host test.com
http-check expect status 200-399

Version 2.5 introduces two new resources on the API dedicated to health checks:

  • /v2/services/haproxy/configuration/http_checks

  • /v2/services/haproxy/configuration/tcp_checks

Having http-check and tcp-check represented as API resources of their own means that you can now create multiple lines in one backend section. Here’s an example:

$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"type": "connect"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/http_checks?parent_name=test&parent_type=backend?transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"
$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 1,
"type": "send",
"method": "GET",
"uri": "/health",
"version": "HTTP/1.1",
"headers": [
{
"name": "host",
"fmt": "test.com"
}
]
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/http_checks?parent_name=test&parent_type=backend?transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"
$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 2,
"type": "expect",
"match": "status",
"pattern": "200,201,300-310"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/http_checks?parent_name=test&parent_type=backend?transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"

You can use the tcp-check keyword in a similar way with its respective endpoint.
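
For illustration, here’s a rough sketch of what creating tcp-check rules could look like against the new endpoint, assuming a hypothetical TCP-mode backend named tcp_servers and an open transaction. The payload fields shown (action, match, pattern) mirror the http_checks payloads above and our reading of the tcp_check model, so treat them as assumptions and confirm them against the specification for your version:

$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"action": "connect"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/tcp_checks?parent_name=tcp_servers&parent_type=backend&transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"
$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 1,
"action": "expect",
"match": "string",
"pattern": "OK"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/tcp_checks?parent_name=tcp_servers&parent_type=backend&transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"

If accepted, this would produce a tcp-check connect line followed by tcp-check expect string OK in the backend.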

DEPRECATION WARNING: To remain backward compatible, the http-check section will remain as-is in the /v2/services/haproxy/configuration/backends endpoint, but it will be removed in a future version. If multiple http-check lines exist in your configuration file and you create another with the backends endpoint, it will become the last one in the list.

Expanded HTTP request & response resources

While the http-check keyword relates specifically to health checking, the http-request and http-response keywords cover a range of other, miscellaneous actions. Version 2.5 implements full support for all of those actions, adding those that were missing and bringing the API in line with HAProxy 2.5; an example follows the lists below.

The /v2/services/haproxy/configuration/http_request_rules endpoint now fully supports the following actions:

  • normalize-uri

  • set-timeout

  • set-pathq

  • replace-pathq

  • set-var-fmt

  • wait-for-body

  • deny

  • tarpit

  • return

The /v2/services/haproxy/configuration/http_response_rules endpoint now fully supports the following actions:

  • set-var-fmt

  • wait-for-body

  • deny

  • return
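
As a quick illustration of the expanded coverage, here is a sketch that adds a deny rule with a custom status code to a hypothetical frontend named test inside an open transaction; the deny_status, cond, and cond_test fields reflect our understanding of the http_request_rule model, so verify them against the specification:

$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"type": "deny",
"deny_status": 403,
"cond": "if",
"cond_test": "{ path_beg /admin }"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/http_request_rules?parent_name=test&parent_type=frontend&transaction_id=a7c318c3-03db-41cf-9568-96b36ffcc228"

This would render as an http-request deny rule that returns a 403 status when the request path begins with /admin.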

Cache section introduced

HAProxy has supported response caching since version 1.8, and now the HAProxy Data Plane API supports it too. Caching in HAProxy runs in memory, and it will give your services a performance boost and reduce the load on your backend servers. Call the /v2/services/haproxy/configuration/caches API endpoint to create a cache section in your configuration:

$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"name": "mycache",
"total_max_size": 4095,
"max_object_size": 10000,
"max_age": 30
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/caches"

This will create the following configuration:

cache mycache
total-max-size 4095
max-object-size 10000
max-age 30

You are then ready to use this cache by adding filter cache, http-response cache-store, and http-request cache-use directives to a frontend or backend section via the /v2/services/haproxy/configuration/filters, /v2/services/haproxy/configuration/http_response_rules, and /v2/services/haproxy/configuration/http_request_rules API endpoints.

$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"type": "cache",
"cache_name": "mycache"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/filters?parent_name=webservers&parent_type=backend&transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"
$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"type": "cache-store",
"cache_name": "mycache"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/http_response_rules?parent_name=webservers&parent_type=backend&transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"
$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"type": "cache-use",
"cache_name": "mycache"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/http_request_rules?parent_name=webservers&parent_type=backend&transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"

Captures resource introduced

Another resource introduced in version 2.5 is /v2/services/haproxy/configuration/captures, which corresponds to the declare capture keyword in a frontend section. This resource allocates a capture slot, which is a space in memory for storing and logging information about a request or response.

$ curl -X POST \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"index": 0,
"length": 1000,
"type": "request"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/captures?frontend=test&transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"

Read our blog post Introduction to HAProxy Logging for an overview of how and when you should use capture slots.

Global section improvements

To help users unlock the full potential of HAProxy, version 2.5 of the Data Plane API introduces two additions to the /v2/services/haproxy/configuration/global endpoint:

  • a new tune_options field

  • more options for the runtime_apis field

Dozens of performance-tuning directives exist in the global section of an HAProxy configuration. Many of them begin with the prefix tune, such as tune.bufsize, tune.ssl.cachesize, and tune.vars.global-max-size. See here for the full list. These tune directives let you customize internal HAProxy buffers, memory handling, SSL offloading, and more. You can now set all of them with the Data Plane API via the tune_options field on the /v2/services/haproxy/configuration/global endpoint.

In addition, you can set the runtime_apis field on the same endpoint to configure one or more Runtime API listeners (stats socket lines), such as exposing the Runtime API on additional TCP ports or UNIX sockets. This field has the same options as a bind directive set with the /v2/services/haproxy/configuration/binds endpoint.
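
To give a feel for the shape of these new fields, here is a sketch of a PUT to the global endpoint. The nested option names inside tune_options (bufsize, ssl_cachesize) and the runtime_apis entry fields (address, level) are assumptions derived from the corresponding configuration directives, so check the specification for the exact schema, and note that a PUT replaces the whole global section:

$ curl -X PUT \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"maxconn": 5000,
"tune_options": {
"bufsize": 32768,
"ssl_cachesize": 50000
},
"runtime_apis": [
{
"address": "/var/run/haproxy-runtime.sock",
"level": "admin"
}
]
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/global?transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"

If accepted, this would correspond roughly to tune.bufsize 32768, tune.ssl.cachesize 50000, and a stats socket /var/run/haproxy-runtime.sock level admin line in the global section.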

Header manipulation options

This version also implements fields related to manipulating HTTP headers.

The endpoints /v2/services/haproxy/configuration/defaults and /v2/services/haproxy/configuration/frontends support:

  • accept_invalid_http_request

  • h1_case_adjust_bogus_client

The endpoints /v2/services/haproxy/configuration/defaults and /v2/services/haproxy/configuration/backends support:

  • accept_invalid_http_response

  • h1_case_adjust_bogus_server

The endpoint /v2/services/haproxy/configuration/global supports:

  • h1_case_adjust

  • h1_case_adjust_file
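
For example, here is a sketch of enabling two of these options on a hypothetical frontend named test; the "enabled" string values are an assumption based on how similar toggles are modeled in the API, and a PUT replaces the whole frontend object, so adapt it to your configuration:

$ curl -X PUT \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"name": "test",
"mode": "http",
"accept_invalid_http_request": "enabled",
"h1_case_adjust_bogus_client": "enabled"
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/frontends/test?transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"

This would roughly translate to option accept-invalid-http-request and option h1-case-adjust-bogus-client lines in the frontend section.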

Minor changes

Along with all of those changes, version 2.5 has a couple of minor modifications. You can now configure HTTP compression via the /v2/services/haproxy/configuration/defaults, /v2/services/haproxy/configuration/frontends, and /v2/services/haproxy/configuration/backends endpoints. Specify the compression field to set the compression type, algorithm, and offloading, which represent the compression type, compression algo, and compression offload directives in your HAProxy configuration.
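
Here is a sketch of what that could look like on the webservers backend used in the caching examples; the algorithms, types, and offload field names inside compression are assumptions based on the directive names, so confirm them against the specification (and note that a PUT replaces the whole backend object):

$ curl -X PUT \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"name": "webservers",
"mode": "http",
"compression": {
"algorithms": ["gzip"],
"types": ["text/html", "text/plain", "application/json"],
"offload": true
}
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/backends/webservers?transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"

This would correspond roughly to compression algo gzip, compression type text/html text/plain application/json, and compression offload in the backend.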

This release also adds the ability to configure a stick_table directive in a frontend or backend, which is a feature that has been available in HAProxy for a while now. Previous versions only let you view stick table definitions and entries.
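
A sketch of what configuring one could look like on the webservers backend follows, with the stick_table field names and units (size in entries, expire in milliseconds) being assumptions to verify against the specification:

$ curl -X PUT \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"name": "webservers",
"mode": "http",
"stick_table": {
"type": "ip",
"size": 100000,
"expire": 30000,
"store": "http_req_rate(10s)"
}
}' \
"http://127.0.0.1:5555/v2/services/haproxy/configuration/backends/webservers?transaction_id=c9391070-db48-478c-bce3-7c1e0759dd68"

This would correspond roughly to a stick-table type ip size 100000 expire 30000 store http_req_rate(10s) line in the backend.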

Also, you can configure dynamic_cookie_key in defaults and backend sections.

Code Optimizations & Bug Fixes

Version 2.5 brings some quality-of-life improvements. It fixes a couple of data races when reading the HAProxy Data Plane API configuration file and when reloading its process. It fixes some potential memory and goroutine leaks in the reload and service discovery parts of the code.

To improve performance, we’ve moved from the JSON parser that ships with Go’s standard library, encoding/json, to the jsoniter library. We’ve also moved the API and our underlying libraries completely to Go 1.17.

As usual, this release includes a number of bug fixes that have already been backported to HAProxy Data Plane API v2.4.5.

Contributors

We’d like to thank the code contributors who helped make this version possible:

Contributor           Area
Alexander Duryagin    BUG, DOC
Amel Husic            FEATURE, REORG, CLEANUP
Andjelko Iharos       FEATURE, BUG
Dario Tranchitella    OPTIMIZATION, TEST
Davor Kapsa           DOC
Dinko Korunic         BUG, OPTIMIZATION
Georgi Dimitrov       TEST
Goran Galinec         FEATURE, BUG, TEST
Ivan Matmati          BUG
Marko Juraga          FEATURE, BUG, BUILD, CLEANUP, TEST
Zlatko Bratkovic      FEATURE, BUG, BUILD, CLEANUP, OPTIMIZATION

Conclusion

This version expands coverage of HAProxy keywords, including better support for configuring health checks, calling http-request and http-response actions, caching, creating capture slots, manipulating HTTP headers, and more. We will continue this effort in the coming versions until we reach full feature parity with HAProxy. In the end, our goal is to make the Data Plane API a first choice for integrating your load balancer into your custom programs and automation.

Updates to the JSON parser and other code optimizations improved performance, and we implemented several bug fixes. Enjoy this latest release, and give us feedback on GitHub if you run into any issues!

Subscribe to our blog. Get the latest release updates, tutorials, and deep-dives from HAProxy experts.