HAProxy provides end-to-end proxying of HTTP/2 traffic. Use HAProxy to route, secure, and observe gRPC traffic over HTTP/2. Read on to learn more.
HAProxy 1.9 introduced the Native HTTP Representation (HTX). Not only does this allow you to use HTTP/2 end-to-end, it also paves the way for HAProxy to support newer versions of HTTP-based technologies and protocols at a faster pace.
Today, with the release of version 1.9.2, we’re excited to announce that HAProxy fully supports gRPC. This moment solidifies the vision we had when creating HTX. The gRPC protocol allows your services to communicate with low latency. HAProxy supports it with features such as bidirectional streaming of data, parsing and inspection of HTTP headers, and logging of gRPC traffic.
HAProxy is known for its high performance, low latency, and flexibility. It provides the building blocks you need to quickly and easily solve a vast array of problems. It brings increased observability that can help with troubleshooting, and built-in support for ACLs, which can be combined with stick tables to define rules that enable rate limiting for protection against bot threats and application-layer DDoS attacks.
In this blog post, you’ll learn how to set up an example project that uses gRPC and Protocol Buffers to stream messages between a client and a server with HAProxy in between. You’ll learn a bit of the history of how HAProxy came to support HTTP/2 and why it’s such a great choice as a load balancer for gRPC traffic.
The Return of RPC
If you’ve been writing services over the past ten years, you’ve seen the movement away from heavy, remote-procedure-call protocols like SOAP that passed around XML towards lighter, HTTP-friendly paradigms like REST. So complete was the industry’s move away from RPC that entire maturity models (see Richardson Maturity model) were developed that took us further into the land of using HTTP than anyone, I suspect, ever thought possible.
However, somewhere from here to there, we all settled on the notion that JSON was the best (only?) way to transfer data between our services. It made sense. JSON is flexible, easily parsed, and readily deserializes into objects in any given language.
This one-size-fits-all approach led many to implement backend services that communicate by passing JSON messages, even services that speak only among themselves within the same network. Even services that must send and receive a lot of data, or that communicate with half a dozen other services, relied on JSON.
In order to support services defined only by a collection of HTTP paths and methods, each with the potential to define how arguments should be sent differently (part of the URL? included in the JSON request?), implementers had to roll their own client libraries—a process that had to be repeated for every programming language used within the organization.
Then gRPC, an RPC-style framework that uses a compact, binary serialization format called Protocol Buffers, appeared on the scene. It allowed messages to be passed faster and more efficiently. Data between a client and server can even be streamed continuously. Using Protocol Buffers, gRPC allows client SDKs and service interfaces to be auto-generated. Clearly, the RPC paradigm is back in a big way.
The Case for gRPC
What is gRPC and what problems does it try to solve? Back in 2015, Google open-sourced gRPC, a new framework for connecting distributed programs via remote procedure calls that they’d developed in collaboration with Square and other organizations. Internally, Google had been migrating most of its public-facing services to gRPC already. The framework offered features that were necessary for the scale Google’s services had achieved.
However, gRPC solves problems that the rest of the industry is seeing too. Think about how service-oriented architectures have changed. Initially, a common pattern was for a client to make a request to a single backend service, get a JSON response, then disconnect. Today, applications often decompose business transactions into many more steps. A single transaction may involve communicating with half a dozen services.
The gRPC protocol is an alternative to sending text-based JSON messages over the wire. Instead, it serializes messages using Protocol Buffers and transmits them as binary data, making the messages smaller and faster. As you increase the number of your services, reducing the latency between them becomes more noticeable and important.
Another change in the industry is the rapid growth of data that services must send and receive. This data might come from always-on IoT devices, rich mobile applications, or even your own logging and metrics collection. The gRPC protocol handles this by using HTTP/2 under the hood in order to enable bidirectional streaming between a client and a service. This allows data to be piped back and forth over a long-lived connection, breaking free of the limitations of the request/response-per-message paradigm.
Protocol Buffers also provides code generation. Using protoc, the Protocol Buffers compiler, you can generate client SDKs and service interfaces in a number of programming languages. This makes it easier to keep clients and services in sync and reduces the time spent writing this boilerplate code yourself.
Similar to how earlier frameworks like SOAP used XML to connect heterogeneous programming languages, gRPC uses Protocol Buffers as a shared, but independent, service description language. With gRPC, interfaces and method stubs are generated from a shared .proto file that contains language-agnostic function signatures. However, the implementation of those functions isn’t directly attached. Clients can, in fact, swap mock services in place of the real implementations to do unit testing or point to a completely different implementation if the need arises.
HAProxy HTTP/2 Support
In order to support gRPC, support for HTTP/2 is required. With the release of HAProxy 1.9, you can load balance HTTP/2 traffic both between the client and HAProxy and between HAProxy and your backend service. This opens the door to utilizing gRPC as a message passing protocol. At this time, most browsers do not support gRPC. However, tools like the gRPC Gateway can be placed behind HAProxy to translate JSON to gRPC and you can, of course, load balance service-to-service gRPC communication within your own network.
For the rest of this section, you’ll get to know the history of how HAProxy came to offer these features. Then, we’ll demonstrate an application that uses bidirectional streaming over gRPC.
HTTP/2 between client and proxy
HAProxy added support for HTTP/2 between itself and the client (such as a browser) with the 1.8 release back at the end of 2017. This was a huge win for those using HAProxy because the latency you see is typically happening on the network segments that traverse the Internet between the server and browser. HTTP/2 allows for more efficient transfer of data due to its binary format (as opposed to the human-readable, text-based format of HTTP/1.1), header compression, and multiplexing of message frames over a single TCP connection.
Enabling this in HAProxy is simple. Ensure that you are binding over TLS and add an alpn parameter to the bind directive in a frontend.
frontend fe_mysite
    bind :443 ssl crt /path/to/cert.pem alpn h2,http/1.1
    default_backend be_servers
If you aren’t familiar with ALPN, here’s a short recap: When using TLS with HTTP/1.1, the convention is to listen on port 443. When HTTP/2 came along, the question became, why reinvent the wheel by listening on a different port than the one with which people are already familiar? However, there had to be a way to tell which version of HTTP the server and client would use. Of course, there could have been an entirely separate handshake that negotiated the protocol, but in the end it was decided to go ahead and encode this information into the TLS handshake, saving a round-trip.
The Application-Layer Protocol Negotiation (ALPN) extension, as described in RFC 7301, updated TLS to support a client and server agreeing on an application protocol. It was created to support HTTP/2 specifically, but will be handy for any other protocols that might need to be negotiated in the future.
ALPN allows a client to send a list of protocols, in preferred order, that it supports as a part of its TLS ClientHello message. The server can then return the protocol that it chooses as a part of its TLS ServerHello message. So, as you can see, being able to communicate which version of HTTP each side supports really does rely on an underlying TLS connection. In a way, it nudges us all towards a more secure web—at least if we want to support both HTTP/1.1 and HTTP/2 on the same port.
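You can watch this negotiation happen with a few lines of Go. The sketch below (an illustration, not part of the example project) performs a loopback TLS handshake in which both sides advertise h2 first, then prints the protocol that was agreed on. The self-signed certificate and InsecureSkipVerify are for the demo only:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// selfSignedCert builds a throwaway certificate so the handshake below
// is self-contained; a real deployment uses the certificate referenced
// on HAProxy's bind line.
func selfSignedCert() tls.Certificate {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}
}

func negotiate() string {
	// Server advertises h2 first, http/1.1 as a fallback -- the same
	// preference that "alpn h2,http/1.1" expresses in HAProxy.
	srvCfg := &tls.Config{
		Certificates: []tls.Certificate{selfSignedCert()},
		NextProtos:   []string{"h2", "http/1.1"},
	}
	ln, _ := tls.Listen("tcp", "127.0.0.1:0", srvCfg)
	defer ln.Close()
	go func() {
		conn, _ := ln.Accept()
		conn.(*tls.Conn).Handshake()
		conn.Close()
	}()
	// The client offers its protocol list in the ClientHello.
	cliCfg := &tls.Config{
		InsecureSkipVerify: true, // demo only: no CA for the throwaway cert
		NextProtos:         []string{"h2", "http/1.1"},
	}
	conn, _ := tls.Dial("tcp", ln.Addr().String(), cliCfg)
	defer conn.Close()
	return conn.ConnectionState().NegotiatedProtocol
}

func main() {
	fmt.Println("negotiated:", negotiate())
}
```

Because both sides list h2 first, the server selects it and `NegotiatedProtocol` comes back as `h2`, all within the TLS handshake itself.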
Adding HTTP/2 to the backend
After the release of version 1.8, users of HAProxy could already see performance gains simply by switching on HTTP/2 in a frontend. However, protocols like gRPC require that HTTP/2 be used for the backend services as well. The open-source community and engineers at HAProxy Technologies got to work on the problem.
During the process, it became apparent that the time was right to refactor core parts of how HAProxy parses and modifies HTTP messages. An entirely new engine for handling HTTP messages was developed, which was named the Native HTTP Representation, or HTX mode, and released with version 1.9. In HTX mode, HAProxy is able to more easily manipulate any representation of the HTTP protocol. Before you can use HTTP/2 to a backend, you must add option http-use-htx.
defaults
    option http-use-htx
Then, in your backend section, adding the alpn parameter to a server directive enables HAProxy to connect to the origin server using HTTP/2.
backend be_servers
    balance roundrobin
    server server1 192.168.3.10:3000 ssl verify none alpn h2,http/1.1 check maxconn 20
In the case of gRPC, which requires HTTP/2 and can’t fall back to HTTP/1.1, you can omit http/1.1 altogether. You can also use the proto parameter instead of alpn when specifying a single protocol. Here’s an example that uses proto on the bind and server lines:
frontend fe_mysite
    bind :443 ssl crt /path/to/cert.pem proto h2
    default_backend be_servers

backend be_servers
    balance roundrobin
    server server1 192.168.3.10:3000 ssl verify none proto h2 check maxconn 20
When using proto, enabling TLS via the ssl parameter becomes optional. When it is not used, HTTP traffic is transferred in the clear. Note that you can use alpn in the frontend and proto in the backend, and vice versa.
You could always do layer 4 proxying
It should be noted that you could always proxy HTTP/2 traffic using transport-layer (Layer 4) proxying (e.g. setting mode tcp). That’s because, in this mode, the data that’s sent over the connection is opaque to HAProxy. The exciting news is the ability, via HTX, to proxy traffic end-to-end at the application layer (Layer 7) when using mode http.
This means that you can inspect the contents of HTTP/2 messages including headers, the URL, and the request method. You can also set ACL rules to filter traffic or to route it to a specific backend. For example, you might inspect the content-type header to detect gRPC messages and route them specifically.
In the next section, you’ll see an example of proxying gRPC traffic with HAProxy.
HAProxy gRPC Support
Follow along by downloading the sample HAProxy gRPC project from GitHub. It spins up an environment using Docker Compose that demonstrates getting a new, random codename from the server (e.g. Bold Badger or Cheerful Coyote). It includes a simple gRPC request/response example and a more complex, bidirectional streaming example, with HAProxy in the middle.
The proto file
First, take a look at the sample/codenamecreator/codenamecreator.proto file. This is a Protocol Buffers file and lists the methods that our gRPC service will support.
syntax = "proto3";

option go_package = "codenamecreator";

message NameRequest {
  string category = 1;
}

message NameResult {
  string name = 1;
}

service CodenameCreator {
  rpc GetCodename(NameRequest) returns (NameResult) {}
  rpc KeepGettingCodenames(stream NameRequest) returns (stream NameResult) {}
}
At the top, we’ve defined a NameRequest message type and a NameResult message type. The former contains a string field called category and the latter a string field called name. A service called CodenameCreator is defined that has a function called GetCodename and another called KeepGettingCodenames. In this example project, GetCodename requests a single codename from the server and then exits. KeepGettingCodenames continuously receives codenames from the server in an endless stream.
When defining functions in a .proto file, adding stream before a parameter or return type makes it streamable, in which case gRPC leaves the connection open and allows requests and/or responses to continue to be sent on the same channel. It’s possible to define gRPC services with no streaming, streaming only from the client, streaming only from the server, and bidirectional streaming.
In order to generate client and server code from this .proto file, you’d use the protoc compiler. Code for different languages, including Golang, Java, C++, and C#, can be generated by downloading the appropriate plugin and passing it to protoc via an argument. In our example, we generate Golang .go files by installing the protoc-gen-go plugin and specifying it using the --go_out parameter. You’ll also need to install Protocol Buffers and the gRPC library for your language. Using the golang:alpine Docker container, the beginning of our client Dockerfile configures the environment like this:
FROM golang:alpine AS build
RUN apk add git protobuf
RUN go get -u google.golang.org/grpc
RUN go get -u github.com/golang/protobuf/protoc-gen-go

# Copy files to container
WORKDIR /go/src/app
COPY . .

# Build proto file
WORKDIR /go/src/app/codenamecreator
RUN protoc --go_out=plugins=grpc:. *.proto
A separate Dockerfile for our gRPC server is the same up to this point, since it also needs to generate code based off of the same .proto file. A file called codenamecreator.pb.go will be created for you. The rest of each Dockerfile (client and server) builds and runs the respective Go code that implements and calls the gRPC service.
In the next section, you’ll see how the server and client code is structured.
Server code
Our gRPC service’s server.go file implements the GetCodename function that was defined in the .proto file like this:
type codenameServer struct{}

func (s *codenameServer) GetCodename(ctx context.Context, request *creator.NameRequest) (*creator.NameResult, error) {
	generator := newCodenameGenerator()
	codename := generator.generate(request.Category)
	return &creator.NameResult{Name: codename}, nil
}
Here, some custom code is used to generate a new, random codename (not shown, but available in the GitHub repository) and this is returned as a NameResult. There’s a lot more going on in the streaming example, KeepGettingCodenames, so suffice it to say that it implements the interface that was generated in codenamecreator.pb.go:
func (s *codenameServer) KeepGettingCodenames(stream creator.CodenameCreator_KeepGettingCodenamesServer) error {
	// server implementation
}
To give you an idea, the server calls stream.Send to send data down the channel. In a separate goroutine, it calls stream.Recv() to receive messages from the client using the same stream object. The server begins listening for connections on port 3000. You’re able to use transport-layer security by providing a TLS public certificate and private key when creating the gRPC server, as shown:
address := ":3000"
crt := "server.crt"
key := "server.key"

lis, err := net.Listen("tcp", address)
if err != nil {
	log.Fatalf("Failed to listen: %v", err)
}

creds, err := credentials.NewServerTLSFromFile(crt, key)
if err != nil {
	log.Fatalf("Failed to load TLS keys")
}

grpcServer := grpc.NewServer(grpc.Creds(creds))
HAProxy is able to verify the server’s certificate by adding ca-file /path/to/server.crt to the backend server line. You can also disable TLS by calling grpc.NewServer without any arguments.
Client code
The protoc compiler generates a Golang interface that your service implements, as well as a client SDK that you’d use to invoke the service functions from the client. In the case of Golang, all of this is included within the single, generated .go file. You then write code that consumes this SDK.
The client configures a secure connection to the server by passing its address into the grpc.Dial function. In order for it to use TLS to the server, it must be able to verify the server’s public key certificate using the grpc.WithTransportCredentials function:
address := os.Getenv("SERVER_ADDRESS") // haproxy URL
crt := os.Getenv("TLS_CERT")           // haproxy.crt

creds, err := credentials.NewClientTLSFromFile(crt, "")
if err != nil {
	log.Fatalf("Failed to load TLS certificate")
}

conn, err := grpc.Dial(address, grpc.WithTransportCredentials(creds))
Since HAProxy sits between the client and server, the address should be the load balancer’s and the public key should be the certificate portion of the .pem file specified on the bind line in the HAProxy frontend. You can also choose to not use TLS at all and pass grpc.WithInsecure() as the second argument to grpc.Dial. In that case, you would change your HAProxy configuration to listen without TLS and use the proto argument to specify HTTP/2:
bind :3001 proto h2
The client.go file is able to call GetCodename and KeepGettingCodenames as though they were implemented in the same code. That’s the power of RPC services.
client := creator.NewCodenameCreatorClient(conn)
ctx := context.Background()

// simple, unary function call
result, err := client.GetCodename(ctx, &creator.NameRequest{Category: category})

// stream example, keeps connection open
fmt.Println("Generating codenames...")
stream, err := client.KeepGettingCodenames(ctx)
When calling a gRPC function that isn’t using streams, as with GetCodename, the function simply returns the result from the server and exits. This is probably how most of your services will operate.
For the streaming example, the client calls KeepGettingCodenames to get a stream object. From there, stream.Recv() is called in an endless loop to receive data from the server. At the same time, it calls stream.Send to send data back to the server every ten seconds, in this case a new category such as Science. In this way, both the client and server are sending and receiving data in parallel over the same connection.
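The essence of that pattern, receiving in one goroutine while sending from the main flow on the same stream object, can be sketched with plain channels. The codenameStream type below is a stand-in for the stream interface that protoc generates; the names and the fake server are illustrative, not taken from the project:

```go
package main

import "fmt"

// codenameStream is a stand-in for the generated client stream
// interface; it is backed by channels instead of an HTTP/2 connection
// so the pattern can run anywhere.
type codenameStream struct {
	requests chan string
	results  chan string
}

func (s *codenameStream) Send(category string) { s.requests <- category }
func (s *codenameStream) Recv() (string, bool) {
	name, ok := <-s.results
	return name, ok
}

// runStream sends each category on the stream while a separate
// goroutine receives results, mirroring how the real client calls
// stream.Send and stream.Recv concurrently.
func runStream(categories []string) []string {
	stream := &codenameStream{requests: make(chan string), results: make(chan string)}

	// Fake server: answer each request with a codename, then stop.
	go func() {
		for cat := range stream.requests {
			stream.results <- "Bold Badger (" + cat + ")"
		}
		close(stream.results)
	}()

	var received []string
	done := make(chan struct{})
	// Receive loop runs concurrently with the sends below.
	go func() {
		defer close(done)
		for {
			name, ok := stream.Recv()
			if !ok {
				return
			}
			received = append(received, name)
		}
	}()

	// The main flow keeps sending new categories on the same stream
	// (the real client does this every ten seconds).
	for _, cat := range categories {
		stream.Send(cat)
	}
	close(stream.requests)
	<-done
	return received
}

func main() {
	for _, name := range runStream([]string{"Science", "Animals"}) {
		fmt.Println("got:", name)
	}
}
```

The point is the concurrency shape: Send and Recv operate on the same stream object from different goroutines, which is exactly what a long-lived bidirectional gRPC connection permits.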
On the client side, you’ll see new, random codenames displayed.
Every ten seconds, the server will show that the client has requested a different category:
2019/01/15 14:39:36 ---Updating codename category to: Science---
In the next section, you’ll see how to configure HAProxy to proxy gRPC traffic at Layer 7.
HAProxy configuration
The HAProxy configuration for gRPC is really just an HTTP/2-compatible configuration.
global
    log stdout local0
    maxconn 50000
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options ssl-min-ver TLSv1.1

defaults
    log global
    maxconn 3000
    mode http
    timeout connect 10s
    timeout client 30s
    timeout server 30s
    option httplog
    option logasap
    option http-use-htx

frontend fe_proxy
    bind :3001 ssl crt /path/to/cert.pem alpn h2
    default_backend be_servers

backend be_servers
    balance roundrobin
    server server1 server:3000 check maxconn 20 ssl alpn h2 ca-file /usr/local/etc/haproxy/pem/server.crt
Within the frontend, the bind line uses the alpn parameter (or proto) to specify that HTTP/2 (h2) is supported. Likewise, an alpn parameter is added to the server line in the backend, giving you end-to-end HTTP/2. Note that option http-use-htx is necessary to make this work.
There are a few other caveats to note. The first concerns logging: HAProxy defaults to logging traffic only when the full request/response transaction has completed. When streaming data bidirectionally between the client and server, that may never happen, so use option logasap to tell HAProxy to log the connection right away. It will then log a message at the start of the request:
<134>Jan 15 14:38:46 haproxy[8]: 172.28.0.4:34366 [15/Jan/2019:14:38:46.988] fe_proxy~ be_servers/server1 0/0/2/0/+2 200 +79 - - ---- 1/1/1/1/0 0/0 "POST /CodenameCreator/KeepGettingCodenames HTTP/2.0"
You can also add debug to the global section to enable debug logging. Then you’ll see all of the HTTP/2 headers from the request and response.
POST /CodenameCreator/KeepGettingCodenames HTTP/2.0
content-type: application/grpc
user-agent: grpc-go/1.18.0-dev
te: trailers
host: haproxy:3001

HTTP/2.0 200
content-type: application/grpc
When streaming data from the client to the server, be sure not to set option http-buffer-request. This would pause HAProxy until it receives the full request body, which, when streaming, will be a long time in coming.
Inspecting headers and URL paths
To demonstrate some of the Layer 7 features of proxying gRPC traffic, consider the need to route traffic based on the application protocol. You might, for example, want to use the same frontend to serve both gRPC and non-gRPC traffic, sending each to the appropriate backend. You’d use an acl statement to determine the type of traffic and then choose the backend with use_backend, like so:
frontend fe_proxy
    bind :3001 ssl crt /path/to/cert.pem alpn h2
    acl isgrpc req.hdr(content-type) -m str "application/grpc"
    use_backend grp_servers if isgrpc
    default_backend be_servers
Another use for inspecting headers is the ability to operate on metadata. Metadata is extra information that you can include with a request. You might utilize it to send a JWT access token or a secret passphrase, denying all requests that don’t contain it or performing more complex checks. When sending metadata from your client, your gRPC code will look like this (where the metadata package is google.golang.org/grpc/metadata):
client := creator.NewCodenameCreatorClient(conn)
ctx := context.Background()

// Add some metadata to the context
ctx = metadata.AppendToOutgoingContext(ctx, "mysecretpassphrase", "abc123")
Here’s an example that uses http-request deny to refuse any requests that don’t send the secret passphrase:
frontend fe_proxy
    bind :3001 ssl crt /path/to/cert.pem alpn h2
    http-request deny unless { req.hdr(mysecretpassphrase) -m str "abc123" }
    default_backend be_servers
You can also record metadata in the HAProxy logs by adding a capture request header line to the frontend, like so:
capture request header mysecretpassphrase len 100
The mysecretpassphrase header will be added to the log, surrounded by curly braces:
<134>Jan 15 15:48:44 haproxy[8]: 172.30.0.4:35052 [15/Jan/2019:15:48:44.775] fe_proxy~ be_servers/server1 0/0/1/0/+1 200 +79 - - ---- 1/1/1/1/0 0/0 {abc123} "POST /CodenameCreator/KeepGettingCodenames HTTP/2.0"
HAProxy can also route to a different backend based upon the URL path. In gRPC, the path is a combination of the service name and function. Knowing that, you can declare an ACL rule that matches the expected path, /CodenameCreator/KeepGettingCodenames, and route traffic accordingly, as in this example:
frontend fe_proxy
    bind :3001 ssl crt /path/to/cert.pem alpn h2
    acl is_codename_path path /CodenameCreator/KeepGettingCodenames
    acl is_otherservice_path path /AnotherService/SomeFunction
    use_backend be_codenameservers if is_codename_path
    use_backend be_otherservers if is_otherservice_path
    default_backend be_servers
Conclusion
In this blog post, you learned how HAProxy provides full support for HTTP/2, which enables you to use gRPC for communicating between services. You can use HAProxy to route gRPC requests to the appropriate backend, load balance equally among servers, enforce security checks based on HTTP headers and gRPC metadata, and get observability into the traffic.
Want to stay up to date on the latest HAProxy news? Subscribe to this blog! You can also follow us on Twitter and join the conversation on Slack.
HAProxy Enterprise offers a suite of extra security-related modules and expert support. Contact us to learn more and get your HAProxy Enterprise free trial.