# Consul service mesh

HashiCorp Consul operates as a service mesh when you enable its Connect mode. In this mode, Consul agents integrate with HAProxy Enterprise to form an interconnected web of proxies. Whenever one of your services needs to call another, their communication is relayed through the web, or mesh, with HAProxy Enterprise nodes passing messages between all services.

An HAProxy Enterprise node runs next to each of your services, on both the caller and the callee end. When a caller makes a request, it directs the request to localhost, where HAProxy Enterprise is listening. HAProxy Enterprise then relays it transparently to the remote callee. From the caller's perspective, all services appear to be local, which simplifies the service's configuration.
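
For example, assuming the sidecar binds an upstream on local port 3000 (a placeholder here; you choose the port when you configure the upstream later in this guide), a call from the caller's container looks entirely local:

```bash
# Sketch: from inside a caller's pod, the remote service is reached
# through the local HAProxy Enterprise sidecar, not a remote address.
# Port 3000 stands in for whatever local_bind_port you configure.
curl http://localhost:3000/
```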

*Figure: Consul service mesh diagram*

## Deploy in Kubernetes

This section describes how to deploy the Consul service mesh with HAProxy Enterprise in Kubernetes.

### Deploy the Consul servers

Consul agents running in server mode watch over the cluster and send service discovery information to each Consul client in the service mesh.

  1. Deploy the Consul server nodes. In Kubernetes, you can install the Consul Helm chart.

    ```bash
    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm repo update
    helm install consul hashicorp/consul \
      --set global.name=consul \
      --set connect=true
    ```

    If you are using a single-node Kubernetes cluster, such as minikube, also set the server.replicas and server.bootstrapExpect values to 1, as described in the guide Consul Service Discovery and Mesh on Minikube.

    ```bash
    helm install consul hashicorp/consul \
      --set global.name=consul \
      --set connect=true \
      --set server.replicas=1 \
      --set server.bootstrapExpect=1
    ```
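
    After the release installs, the Consul server pods carry the label app=consul, which the -retry-join argument relies on later in this guide. As a quick check (pod names will vary with your release):

    ```bash
    kubectl get pods -l app=consul
    ```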
  2. Create a file named pod-reader-role.yaml and add the following contents to it.

    This manifest creates Role and RoleBinding resources in your Kubernetes cluster that grant the Consul agents permission to read pod labels.

    pod-reader-role.yaml

    ```yaml
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      namespace: default
      name: pod-reader
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "watch", "list"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: read-pods
      namespace: default
    subjects:
    - kind: User
      name: system:serviceaccount:default:default
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io
    ```
  3. Deploy it with kubectl apply:

    ```bash
    kubectl apply -f pod-reader-role.yaml
    ```
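
    As an optional sanity check, confirm that both resources exist:

    ```bash
    kubectl get role/pod-reader rolebinding/read-pods -n default
    ```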

### Deploy your application

For each service that you want to include in the service mesh, you must deploy two extra containers into the same pod.

  • container 1: your application
  • container 2: Consul agent, consul
  • container 3: HAProxy-Consul connector, hapee-plus-registry.haproxy.com/hapee-consul-connect

The three containers (application, Consul, Consul-HAProxy Enterprise connector) are defined inside a single pod.

  1. Use kubectl create secret to store your credentials for the private HAProxy Docker registry, replacing <KEY> with your HAProxy Enterprise license key. You will pull the hapee-consul-connect container image from this registry.

    ```bash
    kubectl create secret docker-registry regcred \
      --namespace=default \
      --docker-server=hapee-plus-registry.haproxy.com \
      --docker-username=<KEY> \
      --docker-password=<KEY>
    ```
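
    To confirm that the secret was created (the credentials remain base64-encoded):

    ```bash
    kubectl get secret regcred --namespace=default
    ```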
  2. Add the haproxy-enterprise-consul and consul containers to each of your Kubernetes Deployment manifests. In the example below, we deploy these two containers inside the same pod as a service named example-service.

    example-deployment.yaml

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-service
      labels:
        app: example-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-service
      template:
        metadata:
          labels:
            app: example-service
        spec:
          imagePullSecrets:
          - name: regcred
          containers:
          - name: example-service
            image: jmalloc/echo-server
          - name: haproxy-enterprise-consul
            image: hapee-plus-registry.haproxy.com/hapee-consul-connect
            args:
            - -sidecar-for=example-service
            - -enable-intentions
          - name: consul
            image: consul
            env:
            - name: CONSUL_LOCAL_CONFIG
              value: '{
                "service": {
                  "name": "example-service",
                  "port": 80,
                  "connect": {
                    "sidecar_service": {}
                  }
                }
              }'
            args: ["agent", "-bind=0.0.0.0", "-retry-join=provider=k8s label_selector=\"app=consul\""]
    ```

    Note the following arguments for the haproxy-enterprise-consul container:

    | Argument | Description |
    |----------|-------------|
    | `-sidecar-for=example-service` | The name of the service for which to create an HAProxy Enterprise proxy. |
    | `-enable-intentions` | Enables Consul intentions, which HAProxy Enterprise enforces. |

    Note the following arguments for the consul container:

    | Argument | Description |
    |----------|-------------|
    | `agent` | Runs the Consul agent. |
    | `-bind=0.0.0.0` | The address to bind for internal cluster communication. |
    | `-retry-join=provider=k8s label_selector="app=consul"` | Similar to `-join`, which specifies the address of another agent to join at startup (typically one of the Consul server agents), but retries the join until it succeeds. In Kubernetes, set this to `provider=k8s` with a label selector that finds the Consul servers; the Consul Helm chart adds the label `app=consul` to the Consul server pods. |

    We’ve registered the example-service with the Consul service mesh by setting an environment variable named CONSUL_LOCAL_CONFIG in the Consul container. This defines the Consul configuration and registration for the service. It indicates that the service receives requests on port 80.

    ```json
    {
      "service": {
        "name": "example-service",
        "port": 80,
        "connect": {
          "sidecar_service": {}
        }
      }
    }
    ```
  3. Deploy it with kubectl apply:

    ```bash
    kubectl apply -f example-deployment.yaml
    ```
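
    If the deployment is healthy, the pod reports three ready containers (the application, the connector, and the Consul agent), shown as 3/3 in the READY column:

    ```bash
    kubectl get pods -l app=example-service
    ```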

### Deploy a second application that calls the first

The example-service from the previous section is published to the service mesh where other services within the mesh can call it. To define a service that calls another, add a proxy section to the connect.sidecar_service section of the Consul container’s configuration.

In the example below, the service named app-ui adds the example-service as an upstream service, which makes it available at localhost at port 3000 inside the pod.

app-ui-deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-ui
  labels:
    app: app-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-ui
  template:
    metadata:
      labels:
        app: app-ui
    spec:
      imagePullSecrets:     # required to pull from the private HAProxy registry
      - name: regcred
      containers:
      - name: app-ui
        image: jmalloc/echo-server
      - name: haproxy-enterprise-consul
        image: hapee-plus-registry.haproxy.com/hapee-consul-connect
        args:
        - -sidecar-for=app-ui
        - -enable-intentions
      - name: consul
        image: consul
        env:
        - name: CONSUL_LOCAL_CONFIG
          value: '{
            "service": {
              "name": "app-ui",
              "port": 80,
              "connect": {
                "sidecar_service": {
                  "proxy": {
                    "upstreams": [
                      {
                        "destination_name": "example-service",
                        "local_bind_port": 3000
                      }
                    ]
                  }
                }
              }
            }
          }'
        args: ["agent", "-bind=0.0.0.0", "-retry-join=provider=k8s label_selector=\"app=consul\""]
```

We set the CONSUL_LOCAL_CONFIG environment variable in the Consul container to register this service with the service mesh. The configuration declares an upstream dependency on the example-service service.
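
Deploy it with kubectl apply as before, then verify the upstream wiring from inside the pod. The check below is a sketch: it runs wget from the pod's consul container, which shares the pod's network namespace and ships BusyBox wget, since the application image may not include a shell.

```bash
kubectl apply -f app-ui-deployment.yaml

# Call example-service through the local sidecar listener on port 3000.
kubectl exec deploy/app-ui -c consul -- wget -qO- http://localhost:3000/
```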

### Optional: Publish the web dashboard

The Helm chart creates a Kubernetes service named consul-server that exposes a web dashboard on port 8500. To make it available outside of the Kubernetes cluster, you can forward the port via the HAProxy Enterprise Kubernetes Ingress Controller:

  1. Deploy the HAProxy Enterprise Kubernetes Ingress Controller into your Kubernetes cluster.

  2. Create a file named consul-server-ingress.yaml that defines an Ingress resource for the Consul service.

    In this example, we define a host-based rule that routes all requests for consul.test.local to the consul-server service at port 8500.

    consul-server-ingress.yaml

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: consul-server-ingress
    spec:
      rules:
      - host: consul.test.local
        http:
          paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: consul-server
                port:
                  number: 8500
    ```
  3. Deploy it using kubectl apply:

    ```bash
    kubectl apply -f consul-server-ingress.yaml
    ```
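
    Optionally confirm that the Ingress resource was created:

    ```bash
    kubectl get ingress consul-server-ingress
    ```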
  4. Add an entry to your system’s /etc/hosts file that maps the consul.test.local hostname to the IP address of your Kubernetes cluster. If you are using minikube, you can get the IP address of the node with minikube ip. Below is an example /etc/hosts file:

    ```text
    192.168.99.120 consul.test.local
    ```
  5. Use kubectl get service to check which port the ingress controller has mapped to port 80. In the example below, port 80 is mapped to port 30624.

    ```bash
    minikube ip
    ```

    output:

    ```text
    192.168.99.120
    ```
    ```bash
    kubectl get service kubernetes-ingress
    ```

    output:

    ```text
    NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
    kubernetes-ingress   NodePort   10.110.104.60   <none>        80:30624/TCP,443:31147/TCP,1024:31940/TCP   7m40s
    ```

    Open a browser and go to consul.test.local at that port, for example http://consul.test.local:30624.
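
    Alternatively, if you only need temporary access from your workstation and don't want to configure an ingress rule, kubectl port-forward can publish the dashboard locally:

    ```bash
    # Browse to http://localhost:8500 while this command runs.
    kubectl port-forward service/consul-server 8500:8500
    ```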

### Optional: Enable Consul ACLs

In Consul, ACLs are a security measure that requires Consul agents to present an authentication token before they can join the cluster or call API methods.

  1. When installing Consul, set the global.acls.manageSystemACLs flag to true to enable ACLs.

    ```bash
    helm install consul hashicorp/consul \
      --set global.name=consul \
      --set connect=true \
      --set global.acls.manageSystemACLs=true
    ```
  2. Install jq, a command-line utility for filtering and transforming JSON data.

    ```bash
    # Debian/Ubuntu
    sudo apt install jq
    ```

    ```bash
    # RHEL/CentOS
    sudo yum install jq
    ```
  3. Use kubectl get secret to get the auto-generated bootstrap token, which is base64 encoded.

    ```bash
    kubectl get secret consul-bootstrap-acl-token -o json | jq -r '.data.token' | base64 -d
    ```

    output:

    ```text
    8f1c8c5e-d0fb-82ff-06f4-a4418be245dc
    ```

    Use this token to log into the Consul web UI.
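
    If you plan to reuse the token in later shell commands, you can capture it in a variable instead of copying it by hand (the same command as above, assigned to a variable):

    ```bash
    CONSUL_HTTP_TOKEN=$(kubectl get secret consul-bootstrap-acl-token -o json \
      | jq -r '.data.token' | base64 -d)
    echo "$CONSUL_HTTP_TOKEN"
    ```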

  4. In the Consul web UI, go to ACL > Policies and select the client-token row. Change the policy’s value so that the service_prefix section has a policy of write:

    ```hcl
    node_prefix "" {
      policy = "write"
    }
    service_prefix "" {
      policy = "write"
    }
    ```
  5. Go back to the ACL screen and select the client-token row. Copy this token value (e.g. f62a3058-e139-7e27-75a0-f47df9e2e4bd).

  6. For each of your services, update your Deployment manifest so that the haproxy-enterprise-consul container includes the -token argument, set to the client-token value.

    ```yaml
    - name: haproxy-enterprise-consul
      image: hapee-plus-registry.haproxy.com/hapee-consul-connect
      args:
      - -sidecar-for=app-ui
      - -enable-intentions
      - -token=f62a3058-e139-7e27-75a0-f47df9e2e4bd
    ```
  7. Update the consul container’s configuration to include an acl section where you will specify the same client-token value. Also, set primary_datacenter to dc1 (or to the value you’ve set for your primary datacenter, if you have changed it).

    ```yaml
    - name: consul
      image: consul
      env:
      - name: CONSUL_LOCAL_CONFIG
        value: '{
          "primary_datacenter": "dc1",
          "acl": {
            "enabled": true,
            "default_policy": "allow",
            "down_policy": "extend-cache",
            "tokens": {
              "default": "f62a3058-e139-7e27-75a0-f47df9e2e4bd"
            }
          },
          "service": {
            "name": "example-service",
            "port": 80,
            "connect": {
              "sidecar_service": {}
            }
          }
        }'
    ```
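
    Re-apply the updated manifests so that the pods restart with the token in place. For example, for the first service:

    ```bash
    kubectl apply -f example-deployment.yaml
    kubectl rollout status deployment/example-service
    ```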
