Version 2.0 of the HAProxy Data Plane API brings some exciting enhancements that unlock the power of HAProxy’s flexible configuration and its runtime capabilities.
The HAProxy Data Plane API, which lets you manage your HAProxy configuration dynamically using HTTP RESTful commands, has marked a major milestone with the release of version 2.0. This release emphasizes the runtime aspects of HAProxy, giving you the ability to manage map files, stick tables, peers, DNS resolvers, and more. It also introduces fields that had been missing from the global, defaults, frontend, and backend sections, but that are important for tuning the performance and security of HAProxy.
The HAProxy Data Plane API was introduced in conjunction with HAProxy 2.0 in June of last year, answering the call from the community to give HAProxy a modern API that would allow users to manage their load balancers programmatically. Since then, it has evolved rapidly, and, with this release, brings about some changes that make a clean break from the 1.x releases. The prefix of the API path has been incremented from /v1 to /v2 and you’ll find documentation embedded inside the API at /v2/docs.
In this post, you’ll see what’s new, with examples that show the capabilities in action. We’ll cover:

- Breaking changes
- Map files
- Stick tables
- Peers
- Resolvers
- More features unlocked
Find out more by registering for our webinar: “Learn the HAProxy Data Plane API 2.0“
Breaking Changes
HAProxy Data Plane API 2.0 brings several major changes to the API specification that are considered breaking changes from the 1.x releases. You will find that the cookie and balance parameters in the /services/haproxy/configuration/defaults and /services/haproxy/configuration/backends endpoints have been reworked from strings to objects, allowing them to support all of the available options.
The id field, which appears in the request body of some endpoints, was renamed to index, as we felt this was a more appropriate term. The index field allows you to specify the order in which you would like a rule to appear among other rules of the same type. You can use 0 to specify it as the first rule. To set a rule as the last one, you will need to get a list of the current rules and increment the last index by 1.
Map Files
A map file stores key-value pairs, which HAProxy can reference at runtime. Often, it’s used for making Layer 7 routing decisions, such as looking up which backend to route a client to, based on the request’s Host header or URL path. The big advantage of storing this data in a file, as opposed to the HAProxy configuration itself, is the ability to change the mapped values dynamically. Read our blog post, Introduction to HAProxy Maps, to learn the basics of map files.
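To make the lookup semantics concrete before diving into the API, here is a simplified Python model of a prefix-style (map_beg) lookup. It only illustrates the idea, not HAProxy's actual matcher, and the entries mirror the urls.map example used in this section:

```python
# Simplified model of a prefix-match (map_beg) lookup. This is an
# illustration of the semantics only, not HAProxy's real matcher.
MAP_ENTRIES = {
    "/api/": "be_api",
    "/documentation/": "be_documentation",
    "/blog/": "be_blog",
}

def lookup_backend(path, default="be_main"):
    # Prefer the longest key that is a prefix of the requested path;
    # fall back to the default backend when nothing matches.
    matches = [key for key in MAP_ENTRIES if path.startswith(key)]
    if not matches:
        return default
    return MAP_ENTRIES[max(matches, key=len)]

print(lookup_backend("/api/users"))  # be_api
print(lookup_backend("/about"))      # be_main
```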
With this release of the Data Plane API, you can upload map files to an HAProxy server and modify them without requiring a reload. Here’s how to do it: First, if you want to persist future changes that you make to the map file after you’ve uploaded it to the HAProxy server, such as calls to add new entries, then set a new flag called --update-map-files to true when starting the Data Plane API. This tells it to periodically check for in-memory changes and write them to disk; otherwise, the initial file will be uploaded, but future changes will be held in memory only. You can also set --maps-dir to change the directory where map files are written, overriding the default of /etc/haproxy/maps; the directory must be created beforehand. Finally, --update-map-files-period defines how often, in seconds, changes are saved to disk. It defaults to 10 seconds.
Four new API endpoints let you upload map files, delete them, and manage their key-value pairs.
| Method | Endpoint | Description |
|---|---|---|
| GET, POST | /services/haproxy/runtime/maps | Create a new map file or return all available map files |
| DELETE, GET | /services/haproxy/runtime/maps/{name} | Delete a map file or return its description |
| GET, POST | /services/haproxy/runtime/maps_entries | Add an entry into a map file or return its entries |
| DELETE, GET, PUT | /services/haproxy/runtime/maps_entries/{id} | Return, update, or delete a single map file entry |
Let’s say you have a file called urls.map locally, and its contents look like this:
$ cat urls.map
/api/ be_api
/documentation/ be_documentation
/blog/ be_blog
In this example, URL paths are mapped to the names of backends defined within your HAProxy configuration. So, a request for /api/ would be sent to the be_api backend. Use the /services/haproxy/runtime/maps endpoint to upload this file to the HAProxy server:
$ curl -X POST \
  -F "fileUpload=@urls.map" \
  -u dataplaneapi:password \
  http://192.168.50.20:5555/v2/services/haproxy/runtime/maps
Next, use the /services/haproxy/configuration/backendswitchingrules endpoint to create a backend switching rule that routes client requests depending on the URL:
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"name":"%[path,lower,map_beg(/etc/haproxy/maps/urls.map,be_main)]","index":0}' \
  "http://192.168.50.20:5555/v2/services/haproxy/configuration/backend_switching_rules?frontend=fe_main&version=1"
This adds a use_backend rule to HAProxy, which selects a backend from the map file, using the requested URL path as an input. It defaults to the be_main backend if the requested URL isn’t mapped to any value.
After you’ve added a backend switching rule, you can query the /services/haproxy/runtime/maps endpoint to see all of your defined map files:
$ curl -X GET -u dataplaneapi:password 192.168.50.20:5555/v2/services/haproxy/runtime/maps
[
  {
    "description": "pattern loaded from file '/etc/haproxy/maps/urls.map' used by map at file '/etc/haproxy/haproxy.cfg' line 37",
    "file": "/etc/haproxy/maps/urls.map",
    "id": "1"
  }
]
You can also add a new row to the map file by calling the /services/haproxy/runtime/maps_entries endpoint.
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"key":"/images/", "value": "be_static"}' \
  "http://192.168.50.20:5555/v2/services/haproxy/runtime/maps_entries?map=urls.map"
Or, you can delete a row by passing its key to the same endpoint. Be sure to URL encode the path if it contains slashes or other reserved characters, as is the case with /api/:
$ curl -X DELETE \
  -u dataplaneapi:password \
  "192.168.50.20:5555/v2/services/haproxy/runtime/maps_entries/%2Fapi%2F?map=urls.map"
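The %2Fapi%2F segment above is just the percent-encoded form of /api/. Any standard URL-encoding helper produces it; for example, in Python:

```python
from urllib.parse import quote

# Percent-encode a map key so it can be placed in a URL path segment.
# safe="" ensures that slashes are encoded too.
encoded = quote("/api/", safe="")
print(encoded)  # %2Fapi%2F
```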
Stick Tables
Stick tables are fast, in-memory storage embedded inside HAProxy, and you can use them to track client behavior, such as to count the number of requests a client has made over a period of time. This is useful for detecting malicious behavior, setting rate limits, and enforcing usage caps on APIs. Learn more about stick tables in our blog post, Introduction to HAProxy Stick Tables.
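As a rough mental model of a counter like http_req_rate(10m), picture a per-client sliding window of request timestamps. HAProxy's real implementation uses a more efficient rotating-period scheme; this Python sketch only illustrates the idea:

```python
from collections import defaultdict, deque

class RateTracker:
    """Toy per-client sliding-window counter, keyed like `type ip`.

    Illustrative only; not HAProxy's actual stick table algorithm.
    """

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.hits = defaultdict(deque)

    def record(self, client_ip, now):
        """Record a request at time `now` and return the current rate."""
        q = self.hits[client_ip]
        q.append(now)
        # Expire hits that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q)

tracker = RateTracker(window_seconds=600)  # 10 minutes
print(tracker.record("203.0.113.7", now=0))    # 1
print(tracker.record("203.0.113.7", now=30))   # 2
print(tracker.record("203.0.113.7", now=610))  # 2 (the hit at t=0 expired)
```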
With this release of the Data Plane API, you can fetch information about stick tables, in real time, to view their configuration or get the data they’ve captured. Three new endpoints are available:
| Method | Endpoint | Description |
|---|---|---|
| GET | /services/haproxy/runtime/stick_tables | Returns an array of all stick table definitions. |
| GET | /services/haproxy/runtime/stick_tables/{name} | Returns one stick table definition. |
| GET | /services/haproxy/runtime/stick_table_entries | Returns an array of all entries in a given stick table. |
As you could in earlier versions of the API, you can create a new stick table by attaching a stick_table block when you create a backend with the /services/haproxy/configuration/backends endpoint. Here, we create a new backend named login_requests and add a stick table to it:
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"name": "login_requests", "stick_table": {"type": "ip", "size": 1000000, "expire": 600, "store": "http_req_rate(10m)"}}' \
  '192.168.50.20:5555/v2/services/haproxy/configuration/backends?version=1'
To see all defined stick tables, call the new /services/haproxy/runtime/stick_tables endpoint:
$ curl -X GET -u dataplaneapi:password \
  '192.168.50.20:5555/v2/services/haproxy/runtime/stick_tables'
Or, you can see a specific stick table definition by including its name in the URL. In this example, you get information about a stick table called login_requests:
$ curl -X GET -u dataplaneapi:password \
  '192.168.50.20:5555/v2/services/haproxy/runtime/stick_tables/login_requests?process=1'
Use the /services/haproxy/runtime/stick_table_entries endpoint to see records in a stick table:
$ curl -X GET -u dataplaneapi:password \
  '192.168.50.20:5555/v2/services/haproxy/runtime/stick_table_entries?process=1&stick_table=login_requests'
You can begin tracking data by adding an HTTP request rule with the /services/haproxy/configuration/http_request_rules endpoint.
This would add the following http-request line to the fe_main frontend:
http-request track-sc0 src table login_requests if { path_beg /login }
Peers
When you run two HAProxy load balancers in a cluster, you want stick table data to be synchronized between them so that it will be available in case of a failover. For example, you want session persistence data to be shared so that a client is sent to the same server regardless of which load balancer they’re routed through. Or if you’re tracking counters, you want that data to be available on the secondary. That’s where a peers configuration comes in since it allows you to define where your secondary load balancer is so that the data can be transferred in the background. First, let’s see how this looks in an HAProxy configuration, then we’ll see how to set it up using the Data Plane API.
A peers configuration section, which you would add to each load balancer, looks like this:
peers lb_cluster
  peer host1 192.168.50.20:10000
  peer host2 192.168.50.21:10000
You should list all nodes in the peers list, including the local server. That way, you can use the same list on all nodes, and it has the added bonus of preserving stick table data locally after a reload because the old process will connect to the new one using its own address and push all of its entries to it.
Enable sync by adding a peers parameter to a stick table definition:
backend http_requests
  stick-table type ip size 1m expire 10m peers lb_cluster store http_req_rate(10m)
Version 2.0 of the HAProxy Data Plane API lets you add a peers section and populate it with nodes. Four new endpoints have been added:
| Method | Endpoint | Description |
|---|---|---|
| GET, POST | /services/haproxy/configuration/peer_section | Add a new peers section or get a list of all configured ones |
| GET, DELETE | /services/haproxy/configuration/peer_section/{name} | Return or delete an individual peers section |
| GET, POST | /services/haproxy/configuration/peer_entries | Add new peer entries or return a list of configured peer entries for the specified peers section |
| GET, PUT, DELETE | /services/haproxy/configuration/peer_entries/{name} | Return, update, or delete a single peer entry |
To add a new peers section, use the /services/haproxy/configuration/peer_section endpoint. Here, we add a new peers section named lb_cluster:
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  '192.168.50.20:5555/v2/services/haproxy/configuration/peer_section?version=1' \
  -d '{"name":"lb_cluster"}'
Then, add entries by using the peer_entries endpoint:
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  '192.168.50.20:5555/v2/services/haproxy/configuration/peer_entries?version=2&peer_section=lb_cluster' \
  -d '{"name":"host1", "address":"192.168.50.20", "port":10000}'
Enable sync on each stick table by appending a peers parameter. Here, I update the login_requests stick table you saw in the Stick Tables section:
$ curl -X PUT \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"name": "login_requests", "stick_table": {"type": "ip", "size": 1000000, "expire": 600, "store": "http_req_rate(10m)", "peers": "lb_cluster"}}' \
  '192.168.50.20:5555/v2/services/haproxy/configuration/backends/login_requests?version=1'
Using peers is a convenient way to share stick table data between load balancers. However, data transferred from one node will overwrite the data on the other. For example, the HTTP request rate counter on one will overwrite the counter on the other, which is fine for active-passive scenarios.
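The overwrite-versus-aggregate distinction can be sketched in a few lines of Python. This is purely illustrative, not HAProxy’s peer protocol, and the addresses and counts are made up:

```python
# Illustrative contrast between last-write-wins peer sync and
# aggregation of per-node counters. Not HAProxy's peer protocol.
local = {"198.51.100.4": 12}    # http_req_rate seen by this node
incoming = {"198.51.100.4": 5}  # entry pushed from the peer

# Peer sync: the incoming entry overwrites the local one.
synced = {**local, **incoming}

# Aggregation: contributions from each node are combined instead.
aggregated = {ip: local.get(ip, 0) + incoming.get(ip, 0)
              for ip in local.keys() | incoming.keys()}

print(synced["198.51.100.4"])      # 5
print(aggregated["198.51.100.4"])  # 17
```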
HAProxy Enterprise supports aggregating stick table data between nodes in an active-active cluster. Learn about the Stick Table Aggregator feature.
Resolvers
With HAProxy, you can target backend servers to load balance using IP addresses or DNS names. A resolvers section lists the nameservers you want to use for DNS; it allows you to customize DNS resolution in several ways, such as whether to read the server’s resolv.conf file, how frequently HAProxy should try to resolve a hostname, and how long HAProxy should cache lookups.
Four new endpoints were added:
| Method | Endpoint | Description |
|---|---|---|
| GET, POST | /services/haproxy/configuration/resolvers | Add a new resolvers section or return all of the configured ones |
| GET, PUT, DELETE | /services/haproxy/configuration/resolvers/{name} | Return, update, or delete a single resolvers section |
| GET, POST | /services/haproxy/configuration/nameservers | Add a new nameserver or return all of the configured nameservers for an individual resolvers section |
| GET, PUT, DELETE | /services/haproxy/configuration/nameservers/{name} | Return, update, or delete a single nameserver from a resolvers section |
Use the /services/haproxy/configuration/resolvers endpoint to add a new resolvers section. In this example, we add a section named internal_dns and customize its settings:
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"name": "internal_dns", "parse_resolv-conf": false, "accepted_payload_size": 8192}' \
  '192.168.50.20:5555/v2/services/haproxy/configuration/resolvers?version=1'
Then, use the nameservers endpoint to add records to the section:
$ curl -X POST \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"name": "ns1", "address": "10.0.0.1", "port": 53}' \
  '192.168.50.20:5555/v2/services/haproxy/configuration/nameservers?version=2&resolver=internal_dns'
This adds a resolvers section to your HAProxy configuration that looks like this:
resolvers internal_dns
  nameserver ns1 10.0.0.1:53
  accepted_payload_size 8192
Configure which resolvers section to use by adding a resolvers parameter to a server line in a backend. Here’s how to do that with the Data Plane API, using the /services/haproxy/configuration/servers/{name} endpoint:
$ curl -X PUT \
  -H 'Content-Type: application/json' \
  -u dataplaneapi:password \
  -d '{"name": "server1", "address": "server1.site.com", "port": 8080, "check": "enabled", "resolvers": "internal_dns"}' \
  '192.168.50.20:5555/v2/services/haproxy/configuration/servers/server1?version=1&backend=be_main'
More Features Unlocked
Other features have also been added to the Data Plane API to unlock more of the capabilities of the load balancer. Here’s a quick summary:
- The /services/haproxy/runtime/info endpoint returns HAProxy’s process information, the same as the show info command in the HAProxy Runtime API.
- You can now use the /services/haproxy/configuration/global endpoint to set the following directives in the global section: chroot, user, group, ssl_default_server_ciphers, and ssl_default_server_options.
- The /services/haproxy/configuration/frontends/{name} endpoint now supports setting the following fields in a frontend: bind-process, unique-id-format, option logasap, and option allbackups.
- The /services/haproxy/configuration/defaults endpoint now supports adding the following fields to a defaults section: abortonclose, http_reuse, http_check, and bind-process.
- You can use the /services/haproxy/configuration/servers/{name} endpoint to set the following fields on a backend server: agent-check, check, port, downinter, fastinter, check-ssl, init-addr, sni, check-sni, proto, and resolvers.
- You can now use the /services/haproxy/configuration/http_request_rules endpoint to create http-request directives that use the capture, replace-path, and track-sc actions.
- The /services/haproxy/configuration/tcp_request_rules endpoint now supports all tcp-request actions. You can see a full list in the HAProxy documentation.
Conclusion
Version 2.0 of the Data Plane API kicks it up a notch, giving you dynamic access to many more of the features within HAProxy. You can now manage map files, stick tables, peers, and resolvers using HTTP RESTful commands. Also, many other smaller, but very important, settings were exposed, meaning that there’s almost nothing that can’t be done with the API!
Learn how HAProxy Enterprise adds enterprise-class features, professional services, and premium support by contacting us or signing up for a free trial! HAProxy Enterprise is the industry-leading software load balancer and powers modern application delivery at any scale and in any environment.
Want to know when we publish the news? Subscribe to this blog! You can also follow us on Twitter and join the conversation on Slack.