HTTP compression is a capability that reduces response size by compressing the response body. It has multiple benefits: lower bandwidth usage and faster page load times for clients.
Compression is widely deployed on web servers such as Apache, NGINX and IIS.
Available in ALOHA v5.5 and later.
The ALOHA Load-Balancer can perform compression on behalf of the servers, compressing on the fly responses that should have been compressed by the servers but were not.
The ALOHA dynamically updates its compression rate based on its current load.
The diagram below shows how compression works when performed on the ALOHA load-balancer:
Compression is allowed based on the Accept-Encoding HTTP request header: if the request carries no such header, no compression is performed.
If the backend server supports HTTP compression, HAProxy sees a compressed response and lets it pass through as is. If the backend server does not support HTTP compression and the request carries an Accept-Encoding header, HAProxy compresses the response on the fly.
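As an illustration of the on-the-fly case, the exchange might look like the following (hypothetical hostnames and headers; when HAProxy compresses a response itself, it sets Content-Encoding and switches to chunked transfer encoding since the compressed length is not known in advance):

```http
# Client request advertises compression support:
GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip

# The server answers uncompressed (no Content-Encoding header),
# so HAProxy compresses the body before relaying it to the client:
HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Transfer-Encoding: chunked
```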
When compression offloading is turned on on the ALOHA Load-Balancer, HAProxy removes the Accept-Encoding header from requests before forwarding them to the backend servers, in order to prevent the servers from compressing responses themselves.
HTTP Compression is disabled in HAProxy when:
Currently HAProxy supports gzip compression. Deflate is also supported but should not be used in production in any case: its implementation varies across clients and may be broken.
The configuration below applies when you want to let the servers compress the responses. The ALOHA watches the traffic and compresses any response that should have been compressed but was not.
The directive below can be added either in the defaults, frontend or backend section.
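A minimal sketch of this mode, placed in a backend section (the backend name, server name, address and MIME type list are illustrative assumptions):

```haproxy
# Compress on the fly responses the servers did not compress themselves.
# Servers that handle compression on their own are left untouched.
backend bk_web
    balance roundrobin
    compression algo gzip
    compression type text/html text/plain text/css application/javascript
    server web1 10.0.0.1:80 check
```

The `compression type` line restricts compression to text-like content; compressing already-compressed formats such as images would waste CPU for no gain.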
The configuration below applies when you want to offload compression from the servers. The ALOHA removes the Accept-Encoding HTTP request header and compresses the responses itself.
The directive below can be added either in the defaults section or in a frontend or a backend section.
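A minimal sketch of the offload mode (same illustrative names as above; the `compression offload` directive is what strips the Accept-Encoding header from requests):

```haproxy
# Offload compression entirely to the load-balancer:
# Accept-Encoding is removed from requests, so servers never compress.
backend bk_web
    balance roundrobin
    compression algo gzip
    compression offload
    compression type text/html text/plain text/css application/javascript
    server web1 10.0.0.1:80 check
```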
This type of configuration can be useful to prevent servers with a broken compression implementation from corrupting responses.