Envoy Gateway
Overview of the Envoy Gateway ingress in Elastx Kubernetes CaaS
This section introduces Envoy Gateway as the ingress controller in our
Elastx Kubernetes CaaS service. We manage and upgrade the controller, the
Gateway API CRDs and the cluster-scoped
GatewayClass named eg. You create the Gateway API
objects that describe your own traffic in your own namespaces.
There are companion guides for the two ways traffic typically reaches
the cluster; pick the one that matches your setup (see "Which variant
fits your setup?" below).
What you create
In your own namespaces:
- Gateway — listeners, ports, protocols, TLS.
- HTTPRoute, TCPRoute, GRPCRoute, TLSRoute — routing rules.
- ClientTrafficPolicy — controls PROXY-protocol handling, TLS parameters, timeouts.
  Must live in the same namespace as your Gateway.
- BackendTrafficPolicy — retries, circuit breaking.
- SecurityPolicy — JWT, OIDC, CORS.
- BackendTLSPolicy — mTLS toward your backends.
- Certificate / Issuer (cert-manager) — TLS certificates, typically one per namespace.
You reference the cluster GatewayClass by its name eg from your
Gateway. You do not need to create or modify any cluster-scoped resources.
Which variant fits your setup?
The OpenStack load balancer in front of Envoy runs in TCP mode in both
cases. The variants differ in how the real client IP arrives at
Envoy, and your ClientTrafficPolicy has to match.
- Direct (PROXY-protocol) mode — clients connect straight to the
load balancer. The load balancer is configured with PROXY protocol v2
and prepends a PROXY header carrying the real client IP. Your
ClientTrafficPolicy must enable proxy-protocol parsing.
See Direct (PROXY-protocol) mode.
- Proxy (X-Forwarded-For) mode — you put your own upstream proxy
(CDN, WAF, edge proxy) in front of the load balancer. That upstream
injects the real client IP into
X-Forwarded-For; the load balancer
passes the request through unchanged. Your ClientTrafficPolicy must
trust that header with the right hop count.
See Proxy (X-Forwarded-For) mode.
TLS
Both walkthroughs use a per-namespace cert-manager Issuer. This gives you
full self-service for custom domains and supports both HTTP-01 and DNS-01
validation. If you need a guide for installing cert-manager, see
Install and upgrade cert-manager.
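For wildcard certificates you need DNS-01 validation. A hedged sketch of what the Issuer's solver section could look like, assuming a DNS provider that cert-manager supports (Cloudflare here; the secret name and key are placeholders you must create yourself):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-dns
  namespace: team-a
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@team-a.example.com
    privateKeySecretRef:
      name: letsencrypt-account
    solvers:
    - dns01:
        cloudflare:
          # Placeholder credentials; store your provider API token in a
          # Secret in the same namespace as the Issuer.
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
```

Other providers (Route 53, Cloud DNS, etc.) use the same pattern with a different solver block; see the cert-manager documentation for the exact fields.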
Advanced usage
For more advanced use cases, please refer to the documentation provided
by each project, or contact our support.
1 - Direct (PROXY-protocol) mode
A walkthrough of setting up Envoy Gateway when your cluster’s load balancer uses PROXY protocol v2
This guide walks through setting up Envoy Gateway in a cluster where the
OpenStack load balancer is configured in TCP mode with PROXY protocol v2.
The load balancer prepends a PROXY header to each incoming connection
carrying the real client IP. Envoy parses that header and uses it for
access logs, rate limiting and X-Forwarded-For.
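To make the wire format concrete, here is a minimal sketch of parsing a PROXY protocol v2 header for TCP over IPv4 — purely illustrative of what the load balancer prepends, not what Envoy runs internally:

```python
import socket
import struct

PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte magic

def parse_proxy_v2(data: bytes):
    """Return (client_ip, client_port, payload) for a TCP/IPv4 PROXY v2 header."""
    if data[:12] != PP2_SIGNATURE:
        raise ValueError("not a PROXY v2 header")
    ver_cmd, fam_proto, length = struct.unpack("!BBH", data[12:16])
    if ver_cmd != 0x21:       # version 2, command PROXY
        raise ValueError("unsupported version/command")
    if fam_proto != 0x11:     # AF_INET, SOCK_STREAM
        raise ValueError("only TCP over IPv4 handled in this sketch")
    src_ip, dst_ip, src_port, dst_port = struct.unpack("!4s4sHH", data[16:28])
    # Everything after the declared address block is the original stream.
    return socket.inet_ntoa(src_ip), src_port, data[16 + length:]

# Build a header announcing client 203.0.113.7:54321 -> 10.0.0.5:443,
# followed by the start of an ordinary HTTP request.
header = (PP2_SIGNATURE
          + struct.pack("!BBH", 0x21, 0x11, 12)
          + socket.inet_aton("203.0.113.7") + socket.inet_aton("10.0.0.5")
          + struct.pack("!HH", 54321, 443))
ip, port, rest = parse_proxy_v2(header + b"GET / HTTP/1.1\r\n")
```

This also shows why a plain HTTP client cannot talk to a PROXY-protocol listener: the binary preamble precedes the HTTP bytes.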
Note: Your ClientTrafficPolicy must set proxyProtocol.optional: false. Without the policy, Envoy does not expect the PROXY header the load balancer prepends, so incoming connections are rejected and requests fail with resets or 503s.
If you are not sure which variant applies to your cluster, see the
Envoy Gateway overview.
Prerequisites
- A namespace you can use for your application. The examples below use
team-a.
- A DNS record pointing at the load balancer’s public IP. In the examples all references to
team-a.example.com must be replaced by your own domain.
- cert-manager installed in the cluster.
Create a namespace
kubectl create namespace team-a
Create a Gateway
Gateway describes the listeners. Put it in your own namespace and
reference the cluster GatewayClass named eg.
Create a file called gateway.yaml with the following content:
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: team-a
  namespace: team-a
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
  - name: https
    port: 443
    protocol: HTTPS
    hostname: "*.team-a.example.com"
    allowedRoutes:
      namespaces:
        from: Same
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: team-a-tls
allowedRoutes.from: Same keeps route visibility inside your namespace.
Set it to Selector or All if you route from elsewhere.
Apply it:
kubectl apply -f gateway.yaml
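If you do need routes from other namespaces, a label selector can scope which ones may attach. A hedged sketch of the listener fragment, assuming you label the allowed namespaces yourself (the team=a label is a placeholder):

```yaml
allowedRoutes:
  namespaces:
    from: Selector
    selector:
      matchLabels:
        team: a
```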
The ClientTrafficPolicy attaches to the Gateway by name and tells Envoy
to parse the PROXY-v2 header from the load balancer.
Create a file called client-traffic-policy.yaml:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: team-a
  namespace: team-a
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: team-a
  proxyProtocol:
    optional: false
Note:
The policy must live in the same namespace as the Gateway. Envoy
Gateway rejects cross-namespace policy targets.
Apply it:
kubectl apply -f client-traffic-policy.yaml
Issue a TLS certificate
We use cert-manager with HTTP-01 validation through the same Gateway.
Create a file called issuer.yaml:
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: team-a
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@team-a.example.com
    privateKeySecretRef:
      name: letsencrypt-account
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: team-a
            namespace: team-a
            kind: Gateway
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: team-a-tls
  namespace: team-a
spec:
  secretName: team-a-tls
  issuerRef:
    name: letsencrypt
    kind: Issuer
  dnsNames:
  - echo.team-a.example.com
  - web.team-a.example.com
Replace the email and DNS names with your own. Then apply it:
kubectl apply -f issuer.yaml
cert-manager creates a short-lived HTTPRoute on your Gateway to solve
the HTTP-01 challenge, then removes it. Prefer DNS-01 for wildcard certs.
Route traffic to your app
Create a file called route.yaml:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: team-a
spec:
  parentRefs:
  - name: team-a
    sectionName: https
  hostnames:
  - echo.team-a.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: echo
      port: 80
Apply it:
kubectl apply -f route.yaml
Verify
Check that the Gateway got an external address and that traffic flows:
kubectl -n team-a get gateway team-a -o jsonpath='{.status.addresses[0].value}'
curl -v https://echo.team-a.example.com/
The backend should see the real client IP in X-Forwarded-For and
X-Envoy-External-Address.
Common mistakes
- Forgetting ClientTrafficPolicy — you’ll get connection resets or 503s. The load balancer is sending PROXY-protocol; Envoy must be told to expect it.
- Putting ClientTrafficPolicy in another namespace — silently ignored. Must be colocated with the Gateway.
- Setting proxyProtocol.optional: true — clients that don’t send the header can then bypass client-IP enforcement. Keep it false.
- Testing with curl from outside the load balancer — PROXY-protocol traffic isn’t valid HTTP. Always go through the load balancer’s VIP.
2 - Proxy (X-Forwarded-For) mode
A walkthrough of setting up Envoy Gateway when your traffic arrives via an upstream proxy that injects X-Forwarded-For
This guide walks through setting up Envoy Gateway in a cluster where you
front the OpenStack load balancer with your own upstream proxy — for
example a CDN, WAF, or edge proxy — that terminates the client connection
and injects the real client IP into the X-Forwarded-For header. The
OpenStack load balancer itself stays in TCP passthrough; the upstream
proxy is what carries the client IP for you.
Note: Your ClientTrafficPolicy must set clientIPDetection.xForwardedFor with numTrustedHops equal to the number of trusted proxies in front of Envoy. Without it, Envoy will not honour the incoming X-Forwarded-For header, and your access logs and rate limiting will see the load balancer’s internal IP instead of the real client.
If you are not sure which variant applies to your cluster, see the
Envoy Gateway overview.
Prerequisites
- A namespace you can use for your application. The examples below use
team-a.
- A DNS record pointing at the load balancer’s public IP. In the examples all references to
team-a.example.com must be replaced by your own domain.
- cert-manager installed in the cluster.
Create a namespace
kubectl create namespace team-a
Create a Gateway
Gateway describes the listeners. Put it in your own namespace and
reference the cluster GatewayClass named eg.
Create a file called gateway.yaml with the following content:
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: team-a
  namespace: team-a
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
  - name: https
    port: 443
    protocol: HTTPS
    hostname: "*.team-a.example.com"
    allowedRoutes:
      namespaces:
        from: Same
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: team-a-tls
allowedRoutes.from: Same keeps route visibility inside your namespace.
Set it to Selector or All if you route from elsewhere.
Apply it:
kubectl apply -f gateway.yaml
The ClientTrafficPolicy attaches to the Gateway by name and tells Envoy
how many trusted proxies sit in front of it.
Create a file called client-traffic-policy.yaml:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: team-a
  namespace: team-a
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: team-a
  clientIPDetection:
    xForwardedFor:
      numTrustedHops: 1
numTrustedHops tells Envoy how many trusted ingress proxy hops sit in
front of it. Set it to the number of upstream proxies that prepend
entries to X-Forwarded-For. For a single CDN/WAF/edge proxy in front
of the load balancer, 1 is the right value; raise it for chains of
multiple proxies.
The setting affects what Envoy itself treats as the client IP — used in
access logs, rate limiting, and the x-envoy-external-address header.
Backends always see the full X-Forwarded-For chain that arrived
plus the load balancer’s internal IP appended on the right; Envoy does
not trim entries before forwarding the request upstream. Backend code
that needs the real client IP should parse the chain itself, typically
taking the leftmost public IP.
Note:
The policy must live in the same namespace as the Gateway. Envoy
Gateway rejects cross-namespace policy targets.
Apply it:
kubectl apply -f client-traffic-policy.yaml
Some upstream proxies forward the client IP in a different header.
Use customHeader instead — it is mutually exclusive with xForwardedFor:
clientIPDetection:
  customHeader:
    name: X-Real-IP
Issue a TLS certificate
We use cert-manager with HTTP-01 validation through the same Gateway.
Create a file called issuer.yaml:
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: team-a
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@team-a.example.com
    privateKeySecretRef:
      name: letsencrypt-account
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: team-a
            namespace: team-a
            kind: Gateway
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: team-a-tls
  namespace: team-a
spec:
  secretName: team-a-tls
  issuerRef:
    name: letsencrypt
    kind: Issuer
  dnsNames:
  - echo.team-a.example.com
  - web.team-a.example.com
Replace the email and DNS names with your own. Then apply it:
kubectl apply -f issuer.yaml
cert-manager creates a short-lived HTTPRoute on your Gateway to solve
the HTTP-01 challenge, then removes it. Prefer DNS-01 for wildcard certs.
Route traffic to your app
Create a file called route.yaml:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: team-a
spec:
  parentRefs:
  - name: team-a
    sectionName: https
  hostnames:
  - echo.team-a.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: echo
      port: 80
Apply it:
kubectl apply -f route.yaml
Verify
Check that the Gateway got an external address and that traffic flows:
kubectl -n team-a get gateway team-a -o jsonpath='{.status.addresses[0].value}'
curl -v https://echo.team-a.example.com/
The backend should see the real client IP at the left of
X-Forwarded-For. Envoy forwards the full chain to the backend
(including the load balancer IP it appends on the right) — it does not
remove entries — so the backend application is responsible for parsing
the chain and picking the leftmost public IP.
Common mistakes
- No upstream proxy in front — this variant assumes a CDN, WAF, or other proxy injects X-Forwarded-For before traffic reaches the load balancer. Without one, no real client IP arrives and your backend only sees the LB’s internal IP. If you have no upstream proxy, use direct (PROXY-protocol) mode instead.
- Forgetting ClientTrafficPolicy — Envoy ignores the incoming X-Forwarded-For and treats the load balancer’s internal IP as the client. Rate limiting and access logs see the LB, not your real client.
- Putting ClientTrafficPolicy in another namespace — silently ignored. Must be colocated with the Gateway.
- Wrong numTrustedHops — too low and a caller can spoof the client IP by adding their own X-Forwarded-For entry; too high and Envoy trusts entries a client could have set. Count one per trusted upstream proxy.
- Mixing xForwardedFor and customHeader — they are mutually exclusive. Pick one.