We have updated this guide to include a solution that also load balances the Unified Access Gateway (UAG) with dedicated health checking. This ensures that if a UAG fails a deep health check, its corresponding servers are instantly marked "down" across all protocols (TCP and UDP) simultaneously.
If you’ve worked with VMware Horizon (now Omnissa Horizon), you know it’s a common way for enterprise users to connect to remote desktops. But for IT engineers and DevOps teams? It’s a whole different story. Horizon’s custom protocols and complex connection requirements make load balancing a bit tricky.
With its recent sale to Omnissa, the technology hasn’t changed—but neither has the headache of managing it effectively. Let’s break down the problem and explain why Horizon can be such a beast to work with… and how HAProxy can help.
What Is Omnissa Horizon?
Horizon is a remote desktop solution that provides users with secure access to their desktops and applications from virtually anywhere. It is known for its performance, flexibility, and enterprise-level capabilities. Here’s how a typical Horizon session works:
Client Authentication: The client initiates a TCP connection to the server for authentication.
Server Response: The server responds with details about which backend server the client should connect to.
Session Establishment: The client establishes one TCP connection and two UDP connections to the designated backend server.
The problem? In order to maintain session integrity, all three connections must be routed to the same backend server. But Horizon’s protocol doesn’t make this easy. The custom protocol relies on a mix of TCP and UDP, which have fundamentally different characteristics, creating unique challenges for load balancing.
Why Load Balancing Omnissa Horizon Is So Difficult
The Multi-Connection Challenge
Since these connections belong to the same client session, they must route to the same backend server. A single misrouted connection can disrupt the entire session. For a load balancer, this is easier said than done.
The Problem with UDP
UDP is stateless, which means it doesn’t maintain any session information between the client and server. This is in stark contrast to TCP, which ensures state through its connection-oriented protocol. Horizon’s use of UDP complicates things further because:
There’s no built-in mechanism to track sessions.
Load balancers can’t use traditional stateful methods to ensure all connections from a client go to the same server.
Maintaining session stickiness for UDP typically requires workarounds that add complexity (like an external data source).
Traditional Load Balancing Falls Short
Most load balancers rely on session stickiness (or affinity) to route traffic consistently. In TCP, this is often achieved with in-memory client-server mappings, such as with HAProxy's stick tables feature. However, since UDP is stateless and doesn't track sessions like TCP does, stick tables do not support UDP. Keeping everything coordinated without explicit session tracking feels like solving a puzzle without all the pieces—and that’s where the frustration starts.
This is why Omnissa (formerly VMware) suggests using their “Unified Access Gateway” (UAG) appliance to handle the connections. While this makes one problem easier, it adds another layer of cost and complexity to your network. You may still need the UAG for a more comprehensive Omnissa deployment, but it would be great if there were a simpler, cleaner, and more efficient solution.
This leaves engineers with a critical question: How do you achieve session stickiness for a stateless protocol? This is where HAProxy offers an elegant solution.
Enter HAProxy: A Stateless Approach to Stickiness
HAProxy’s balance-source algorithm is the key to solving the Horizon multi-protocol challenge. This approach uses consistent hashing to achieve session stickiness without relying on stateful mechanisms like stick tables. From the documentation:
“The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up.”
Here’s how it works:
Hashing Client IP: HAProxy computes a hash of the client’s source IP address.
Mapping to Backend Servers: The hash is mapped to a specific backend server in the pool.
Consistency Across Connections: The same client IP will always map to the same backend server.
This deterministic, stateless approach ensures that all connections from a client—whether TCP or UDP—are routed to the same server, preserving session integrity.
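The three steps above can be sketched in a few lines of Python. The hash and the pick_server helper below are invented for this illustration; HAProxy's real balance source implementation lives in its C core:

```python
# Illustrative sketch of stateless source-IP stickiness. The hash and
# the pick_server helper are invented for this example; HAProxy's real
# 'balance source' implementation differs.

def pick_server(client_ip: str, servers: list[str]) -> str:
    # Hash the client's source IP...
    h = 0
    for ch in client_ip:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    # ...then map the hash onto the server pool.
    return servers[h % len(servers)]

servers = ["192.168.1.101", "192.168.1.102"]

# All connections from one client (TCP 8443, UDP 22443, UDP 4172) land
# on the same backend, because only the source IP feeds the hash.
assert pick_server("10.0.0.7", servers) == pick_server("10.0.0.7", servers)
```

Because the mapping is pure computation, any load balancer running the same algorithm over the same server list produces the same answer, with nothing stored anywhere.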
Why Stateless Stickiness Works
The beauty of HAProxy’s solution lies in its simplicity and efficiency: it has low overhead, works for both protocols, and tolerates changes to the server pool. A pool change may cause some connections to rebalance, but clients are redirected consistently, as noted in the documentation:
“If the hash result changes due to the number of running servers changing, many clients will be directed to a different server.”
It is super efficient because there is no need for in-memory storage or synchronization between load balancers. The same algorithm works seamlessly for both TCP and UDP.
This stateless method doesn’t just solve the problem; it does so elegantly, reducing complexity and improving reliability.
Implementing HAProxy for Omnissa Horizon
While the configuration is relatively straightforward, UDP load balancing requires the HAProxy Enterprise UDP Module. This module is included in HAProxy Enterprise, which builds additional enterprise functionality and ultra-low-latency security layers on top of our open-source core.
HAProxy Enterprise provides high-performance load balancing for TCP, UDP, QUIC, and HTTP-based applications, high availability, an API/AI gateway, Kubernetes application routing, SSL processing, DDoS protection, bot management, global rate limiting, and a next-generation WAF.
It combines the performance, reliability, and flexibility of our open-source core (HAProxy, the most widely used software load balancer) with world-class support.
Implementation Overview
So, how easy is it to implement? Just a few lines of configuration will get you what you need. You start by defining your frontend and backend, and then add the “magic”:
Define Your Frontend and Backend: The frontend section handles incoming connections, while the backend defines how traffic is distributed to servers.
Enable Balance Source: The balance source directive ensures that HAProxy computes a hash of the client’s IP and maps it to a backend server.
Optimize Health Checks: Include the check keyword on backend servers to enable health checks. This ensures that only healthy servers receive traffic.
UDP Load Balancing: The UDP module in the enterprise edition is necessary for UDP load balancing and uses the udp-lb keyword.
Here’s what a basic configuration might look like for the custom “Blast” protocol:
```
# --- FRONTEND CONFIGURATION ---
frontend ft_horizon_tcp_blast
   bind *:8443
   default_backend bk_horizon_tcp_blast

# --- BACKEND CONFIGURATION ---
backend bk_horizon_tcp_blast
   balance source
   server srv1 192.168.1.101:22443 check
   server srv2 192.168.1.102:22443 check

# UDP Load Balancing
udp-lb horizon_udp_blast
   dgram-bind *:22443
   balance source
   server srv1 192.168.1.101:22443 check
   server srv2 192.168.1.102:22443 check

udp-lb horizon_udp_pcoip
   dgram-bind *:4172
   balance source
   server srv1 192.168.1.101:4172 check
   server srv2 192.168.1.102:4172 check
```
This setup ensures that all incoming connections, whether TCP or UDP, are mapped to the same backend server based on the client’s IP address. Adding the hash-type consistent option, shown in the refined configuration later in this guide, minimizes disruption during server pool changes.
This approach is elegant in its simplicity. We use minimal configuration, but we still get a solid approach to session stickiness. It is also incredibly performant, keeping memory usage and CPU demands low. Best of all, it is highly reliable, with consistent hashing ensuring stable session persistence, even when servers are added or removed.
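To see why consistent hashing limits disruption when the pool changes, here is a toy Python model of a hash ring. The ring construction and virtual-node count are illustrative assumptions, not HAProxy's internals:

```python
import hashlib
from bisect import bisect

# Toy consistent-hash ring (illustrative only; HAProxy's 'hash-type
# consistent' uses its own ring construction). Each server gets many
# virtual points on the ring; a client maps to the next point clockwise.

def _h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(servers, vnodes=100):
    return sorted((_h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))

def lookup(ring, client_ip):
    keys = [point for point, _ in ring]
    return ring[bisect(keys, _h(client_ip)) % len(ring)][1]

full    = build_ring(["srv1", "srv2", "srv3"])
reduced = build_ring(["srv1", "srv2"])       # srv3 removed from the pool

clients = [f"10.0.0.{i}" for i in range(1, 101)]
# Removing srv3 only remaps the clients that were on srv3; everyone who
# was on srv1 or srv2 keeps the same server, because those servers'
# ring points did not move.
for c in clients:
    if lookup(full, c) != "srv3":
        assert lookup(full, c) == lookup(reduced, c)
```

With a plain modulo mapping (hash % server_count), removing one server would reshuffle nearly every client; on the ring, only the departed server's clients move.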
Refined health tracking and UAG load balancing
While the basic configuration above works well, there are a few refinements and adjustments that can be added for a more comprehensive solution. In production-grade Omnissa Horizon environments, HAProxy is typically deployed in front of Unified Access Gateways (UAGs) rather than directly in front of internal Connection Servers.
This architecture places HAProxy at the edge to manage incoming external traffic before it enters the DMZ, ensuring that UAGs (which act as hardened proxies for internal VDI operations) remain secure and performant. There are a few key refinements we can add for this production-ready setup:
Synchronized health tracking
While basic port checks verify network connectivity, they do not guarantee that the underlying Horizon application services are healthy. To solve this, use a dedicated health check backend, such as be_uag_https below, that targets the /favicon.ico path over HTTP. With this check in place, HAProxy verifies that the relevant UAG and Connection Server services are fully functional, not just that the port is open.
Long-lived session persistence
Omnissa Horizon sessions are notably long-lived, with a default maximum duration of 10 hours. Standard load balancer timeouts are often too aggressive, potentially severing active virtual desktop connections during a typical workday. To ensure stability, HAProxy can be configured with extended timeout server and timeout client settings of 10 hours for all Blast and PCoIP backends. This aligns the load balancer’s persistence with the application’s session lifecycle, ensuring that even if a user is momentarily idle, their secondary protocols remain pinned to the correct UAG node.
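Since HAProxy interprets timeout values without a unit suffix as milliseconds, the 10-hour figure works out as follows (a quick sanity check):

```python
# HAProxy timeouts given without a unit suffix are in milliseconds,
# so a 10-hour Horizon session maximum becomes:
ten_hours_ms = 10 * 60 * 60 * 1000
assert ten_hours_ms == 36_000_000  # matches 'timeout server 36000000'
```

HAProxy also accepts explicit unit suffixes, so `timeout server 10h` is an equivalent and more readable spelling.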
Edge security and SSL bridging
For external-facing deployments, HAProxy should serve as the first line of defense using advanced security features like WAF (Web Application Firewall) and Brute Force Detection on the initial authentication endpoints. This protects the environment from credential-stuffing and application-layer attacks before they ever reach the UAG.
Furthermore, because UAGs require end-to-end encryption for security, HAProxy should be configured for SSL Bridging. It is important to use the same SSL certificate on both the HAProxy virtual service and the UAG nodes.
This is crucial because the UAGs use fingerprinting for the certificate used for incoming requests, meaning the certificate presented by the HAProxy load balancer and the certificate on the UAG's outside interface must be the same to prevent certificate mismatch errors during the session handoff between the primary authentication and secondary display protocols.
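One way to confirm that both sides present the same certificate is to compare fingerprints of the deployed PEM files. The helper below is a hypothetical sketch; the PEM bodies are dummy placeholders, not real certificates:

```python
import base64
import hashlib

# Hypothetical helper for checking that HAProxy and a UAG present the
# SAME certificate: compare SHA-256 fingerprints of the DER bytes
# inside each PEM. The PEM bodies below are dummy placeholders, not
# real certificates.

def pem_fingerprint(pem: str) -> str:
    body = "".join(
        line for line in pem.strip().splitlines()
        if not line.startswith("-----")
    )
    return hashlib.sha256(base64.b64decode(body)).hexdigest()

haproxy_pem = "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----"
uag_pem     = "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----"

# Identical certificates on both sides produce identical fingerprints.
assert pem_fingerprint(haproxy_pem) == pem_fingerprint(uag_pem)
```

In practice you would run this against the actual certificate files deployed to HAProxy and to each UAG node before bringing them into rotation.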
Sample configuration with UAG load balancing & advanced health tracking
In this refined setup, the be_uag_https backend does the heavy lifting. All other backends simply "watch" its status. See the Omnissa documentation for a full list of port requirements for the different services within Unified Access Gateway.
```
# --- FRONTEND CONFIGURATION ---
frontend ft_horizon_tcp_blast
   # Blast protocol is tunneled on UAGs, running on port 8443 (external) vs. internal
   bind *:8443
   default_backend bk_horizon_tcp_blast

# --- BACKEND CONFIGURATION ---
# 1. Dedicated Health Check Backend (The Source of Truth)
backend be_uag_https
   mode http
   option httpchk HEAD /favicon.ico
   http-check expect status 200
   # Note: The 'inter 30000' here controls the frequency for all tracking servers
   default-server fall 3 inter 30000 rise 2
   server srv1 192.168.1.101:443 check ssl verify none
   server srv2 192.168.1.102:443 check ssl verify none

# 2. Standard TCP Backend (Tracking Health)
backend bk_horizon_tcp_blast
   balance source
   hash-type consistent
   timeout server 36000000  # 10-hour timeout
   server srv1 192.168.1.101:22443 track be_uag_https/srv1
   server srv2 192.168.1.102:22443 track be_uag_https/srv2

# 3. UDP Load Balancing (Tracking Health)
udp-lb horizon_udp_blast
   dgram-bind *:22443
   balance source
   server srv1 192.168.1.101:22443 track be_uag_https/srv1
   server srv2 192.168.1.102:22443 track be_uag_https/srv2

udp-lb horizon_udp_pcoip
   dgram-bind *:4172
   balance source
   timeout server 36000000
   timeout client 36000000
   server srv1 192.168.1.101:4172 track be_uag_https/srv1
   server srv2 192.168.1.102:4172 track be_uag_https/srv2
```
Understanding the track Directive and Timing
When you use the track keyword, the secondary servers inherit the state of the tracked target. They don’t send their own health check packets, which keeps everything synchronized: if srv1 fails the favicon check, it is marked down for Blast TCP, Blast UDP, and PCoIP UDP at the same instant.
This prevents the "zombie session" issue. Without tracking, a user might be connected via TCP while their UDP media stream is hitting a dead server.
This centralized tracking approach transforms your health checks from a series of fragmented probes into a unified "source of truth" for your infrastructure. By anchoring every protocol to a single HTTP health check, you eliminate the risk of partial failures: a server can no longer appear healthy for UDP while its TCP services are failing, and the client's entire session remains synchronized.
It's a configuration that's both more robust and significantly lighter on your backend resources, providing the stability required for high-performance virtual desktop environments.
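The behavior of track can be modeled in a few lines of Python. The class names below are invented for illustration, but the shape matches the mechanism: one checker writes a status, and every tracking listener reads it:

```python
# Toy model of the 'track' directive: secondary listeners don't run
# their own checks; they all read one shared status written by a single
# health checker. Class names are invented for this illustration.

class HealthRegistry:
    def __init__(self):
        self.up: dict[str, bool] = {}

    def set_status(self, server: str, healthy: bool):
        self.up[server] = healthy  # one write flips every tracker at once

class TrackingListener:
    def __init__(self, name: str, registry: HealthRegistry):
        self.name, self.registry = name, registry

    def is_up(self, server: str) -> bool:
        return self.registry.up.get(server, False)

reg = HealthRegistry()
blast_tcp = TrackingListener("bk_horizon_tcp_blast", reg)
blast_udp = TrackingListener("horizon_udp_blast", reg)
pcoip_udp = TrackingListener("horizon_udp_pcoip", reg)

reg.set_status("srv1", True)
reg.set_status("srv1", False)   # the favicon check fails...
# ...and srv1 is down for TCP and both UDP services simultaneously.
assert not any(l.is_up("srv1") for l in (blast_tcp, blast_udp, pcoip_udp))
```

The design choice is the point: because there is exactly one writer, the per-protocol views can never disagree about a server's health.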
Advanced Options in HAProxy 3.0+
HAProxy 3.0 introduced enhancements that make this approach even better. It offers more granular control over consistent hashing through the hash-key directive, which selects how each server’s position on the hash ring is computed: by server ID, by address, or by address plus port. Hashing on the address keeps client-to-server mappings consistent even when different load balancers list the same servers in a different order, and addr-port distinguishes servers that share an IP address.
We can also include hash-balance-factor, which will help keep any individual server from being overloaded. From the documentation:
“Specifying a "hash-balance-factor" for a server with "hash-type consistent" enables an algorithm that prevents any one server from getting too many requests at once, even if some hash buckets receive many more requests than others.
[...]
If the first-choice server is disqualified, the algorithm will choose another server based on the request hash, until a server with additional capacity is found.”
Finally, we can adjust the hash function used with the hash-type consistent option. This defaults to sdbm, but four functions are available (sdbm, djb2, wt6, and crc32), plus an optional none if you want to supply a hash yourself. See the documentation for details on these functions.
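For the curious, sdbm is simple enough to sketch. This Python version follows the widely published recurrence and is shown for illustration only, not taken from HAProxy's source:

```python
# The sdbm recurrence, hash = c + (h << 6) + (h << 16) - h, written out
# in Python for illustration (not taken from HAProxy's source).

def sdbm(key: bytes) -> int:
    h = 0
    for c in key:
        h = (c + (h << 6) + (h << 16) - h) & 0xFFFFFFFF
    return h

# Deterministic: the same client address always yields the same hash,
# which is what makes hash-based stickiness possible in the first place.
assert sdbm(b"192.168.1.50") == sdbm(b"192.168.1.50")
assert sdbm(b"192.168.1.50") != sdbm(b"192.168.1.51")
```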
Sample configuration using advanced options:
```
backend bk_horizon_tcp_blast
   balance source
   hash-type consistent sdbm
   hash-key addr-port
   hash-balance-factor 150
   # Still tracking our central health check backend
   server srv1 192.168.1.101:22443 track be_uag_https/srv1
   server srv2 192.168.1.102:22443 track be_uag_https/srv2
```
These features improve flexibility and reduce the risk of uneven traffic distribution across backend servers.
Coordination Without Coordination
The genius of HAProxy’s solution lies in its statelessness. By relying on consistent hashing, it achieves elegantly what many would assume requires complex session tracking or external databases. This approach is not only efficient but also scalable.
The result? A system that feels like it’s maintaining state without actually doing so. It’s like a magician revealing their trick—it’s simpler than it looks, but still impressive.
Understanding Omnissa Horizon’s challenges is half the battle. Implementing a solution can be surprisingly straightforward with HAProxy. You can ensure reliable load balancing for even the most complex protocols by leveraging stateless stickiness through consistent hashing.
This setup not only solves the Horizon problem but also demonstrates the power of HAProxy as a versatile tool for DevOps and IT engineers. Whether you’re managing legacy applications or cutting-edge deployments, HAProxy has the features to make your life easier.
Frequently asked questions (FAQs)
Why can’t I use stick tables for Horizon session persistence?
Stick tables work well for TCP but aren’t compatible with Horizon’s UDP requirements. Since UDP is stateless, stick tables can’t track sessions effectively across multiple protocols.
Resources
Blog post: "Omnissa Horizon alternative: How HAProxy solves UDP load balancing"
Blog post: “Client IP Persistence or Source IP Hash Load Balancing”
Blog post: “Introducing HAProxy 3.0”
Blog post: “Load Balancing RADIUS with HAProxy Enterprise UDP Module”
Documentation: Balance Source
Documentation: Consistent Hashing
Documentation: Health Checks
Documentation: Hash Key
Documentation: Hash Balance Factor