Direct (PROXY-protocol) mode

A walkthrough of setting up Envoy Gateway when your cluster’s load balancer uses PROXY protocol v2

This guide walks through setting up Envoy Gateway in a cluster where the OpenStack load balancer is configured in TCP mode with PROXY protocol v2. The load balancer prepends a PROXY v2 header, carrying the real client IP and port, to each incoming connection. Envoy parses that header and uses the address for access logs, rate limiting, and X-Forwarded-For.
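To make the mechanism concrete, here is an illustrative sketch (not part of the setup) of the preamble the load balancer prepends for a TCP-over-IPv4 connection; the IP addresses and ports are made-up examples:

```python
import socket
import struct

# 12-byte PROXY protocol v2 signature, fixed by the spec
SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def proxy_v2_header(src_ip, dst_ip, src_port, dst_port):
    """Build a PROXY v2 preamble for a TCP-over-IPv4 connection."""
    addr_block = (
        socket.inet_aton(src_ip)                  # original client IP
        + socket.inet_aton(dst_ip)                # load balancer VIP
        + struct.pack("!HH", src_port, dst_port)  # client port, VIP port
    )
    # 0x21 = version 2 + PROXY command; 0x11 = AF_INET + STREAM (TCP)
    return (SIGNATURE + bytes([0x21, 0x11])
            + struct.pack("!H", len(addr_block)) + addr_block)

# Example: client 203.0.113.7 reaching the VIP 198.51.100.10 on port 443
header = proxy_v2_header("203.0.113.7", "198.51.100.10", 51234, 443)
assert len(header) == 28  # 16-byte fixed part + 12-byte IPv4 address block
```

These 28 bytes arrive on the wire before the TLS or HTTP payload, which is why Envoy must be told to expect and strip them.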

Note: Your ClientTrafficPolicy must set proxyProtocol.optional: false. Without it Envoy is not told to expect the PROXY header the load balancer prepends, and every incoming connection fails.

If you are not sure which variant applies to your cluster, see the Envoy Gateway overview.

Prerequisites

  • A namespace you can use for your application. The examples below use team-a.
  • A DNS record pointing at the load balancer’s public IP. Replace every reference to team-a.example.com in the examples with your own domain.
  • cert-manager installed in the cluster.

Create a namespace

kubectl create namespace team-a

Create a Gateway

The Gateway describes your listeners. Put it in your own namespace and reference the cluster-wide GatewayClass named eg.

Create a file called gateway.yaml with the following content:

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: team-a
  namespace: team-a
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: Same
    - name: https
      port: 443
      protocol: HTTPS
      hostname: "*.team-a.example.com"
      allowedRoutes:
        namespaces:
          from: Same
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: team-a-tls

allowedRoutes.namespaces.from: Same restricts route attachment to the Gateway’s own namespace. Set it to Selector or All if you attach routes from other namespaces.
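For example, a Selector-based listener could look like the following fragment (the gateway-access label is a made-up example; pick your own):

```yaml
allowedRoutes:
  namespaces:
    from: Selector
    selector:
      matchLabels:
        gateway-access: team-a   # made-up label; namespaces carrying it may attach routes
```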

Apply it: kubectl apply -f gateway.yaml
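Before moving on, you can confirm that Envoy has programmed the listeners; this jsonpath assumes the standard Gateway API status conditions and should print True once the Gateway is ready:

```shell
kubectl -n team-a get gateway team-a \
  -o jsonpath='{.status.conditions[?(@.type=="Programmed")].status}'
```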

Configure proxy-protocol with ClientTrafficPolicy

The ClientTrafficPolicy attaches to the Gateway by name and tells Envoy to parse the PROXY-v2 header from the load balancer.

Create a file called client-traffic-policy.yaml:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: team-a
  namespace: team-a
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: team-a
  proxyProtocol:
    optional: false

Note: The policy must live in the same namespace as the Gateway. Envoy Gateway rejects cross-namespace policy targets.

Apply it: kubectl apply -f client-traffic-policy.yaml
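You can check that Envoy Gateway accepted the policy by inspecting its status; look for a condition of type Accepted with status True:

```shell
kubectl -n team-a get clienttrafficpolicy team-a -o yaml
```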

Issue a TLS certificate

We use cert-manager with HTTP-01 validation through the same Gateway.

Create a file called issuer.yaml:

---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: team-a
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@team-a.example.com
    privateKeySecretRef:
      name: letsencrypt-account
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
              - name: team-a
                namespace: team-a
                kind: Gateway
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: team-a-tls
  namespace: team-a
spec:
  secretName: team-a-tls
  issuerRef:
    name: letsencrypt
    kind: Issuer
  dnsNames:
    - echo.team-a.example.com
    - web.team-a.example.com

Replace the email and DNS names with your own. Then apply it: kubectl apply -f issuer.yaml
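Issuance can take a minute or two. Certificates expose a Ready condition, so you can block until the secret exists:

```shell
kubectl -n team-a wait --for=condition=Ready certificate/team-a-tls --timeout=5m
```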

cert-manager creates a short-lived HTTPRoute on your Gateway to solve the HTTP-01 challenge, then removes it. For wildcard certificates use DNS-01 instead; HTTP-01 cannot validate wildcards.
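A DNS-01 solver replaces the http01 block in the Issuer. As one illustration, with Cloudflare it might look like this (the provider and Secret names are examples; yours will depend on your DNS provider):

```yaml
solvers:
  - dns01:
      cloudflare:
        apiTokenSecretRef:
          name: cloudflare-api-token   # example Secret you create yourself
          key: api-token
```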

Route traffic to your app

Create a file called route.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: team-a
spec:
  parentRefs:
    - name: team-a
      sectionName: https
  hostnames:
    - echo.team-a.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: echo
          port: 80

Apply it: kubectl apply -f route.yaml
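As an aside, backendRefs also accepts weights, which you could use later for a canary split (echo-canary is a hypothetical second Service, not created in this guide):

```yaml
backendRefs:
  - name: echo
    port: 80
    weight: 90
  - name: echo-canary   # hypothetical second Service
    port: 80
    weight: 10
```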

Verify

Check that the Gateway got an external address and that traffic flows:

kubectl -n team-a get gateway team-a -o jsonpath='{.status.addresses[0].value}'
curl -v https://echo.team-a.example.com/

The backend should see the real client IP in X-Forwarded-For and X-Envoy-External-Address.

Common mistakes

  • Forgetting the ClientTrafficPolicy — you’ll get connection resets or 503s. The load balancer prepends PROXY-protocol bytes to every connection; Envoy must be told to expect them.
  • Putting ClientTrafficPolicy in another namespace — silently ignored. Must be colocated with the Gateway.
  • Setting proxyProtocol.optional: true — clients that omit the header can still connect, bypassing client-IP enforcement. Keep it false.
  • Testing with curl directly against Envoy, bypassing the load balancer — with optional: false Envoy requires a PROXY header, and plain curl does not send one. Always go through the load balancer’s VIP.
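If you do need to test against Envoy directly for debugging, curl can generate the preamble itself with --haproxy-protocol (it sends PROXY v1; Envoy’s listener filter detects v1 and v2 alike). ENVOY_IP below is a placeholder for the Envoy service address, assuming you can reach it from where you run curl:

```shell
curl --haproxy-protocol http://$ENVOY_IP/
```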

Advanced usage

For more advanced use cases please refer to the documentation provided by each project or contact our support: