Preserve Client Source IP in Kubernetes with externalTrafficPolicy: Local
When traffic enters a Kubernetes cluster through a LoadBalancer or NodePort Service, the original client IP is not always preserved.
That becomes a problem fast.
Your ingress logs show node addresses instead of real visitors. Rate limiting becomes unreliable. IP allowlists stop behaving the way you expect. Security rules, geo checks, and audit logs all become harder to trust.
The usual fix is simple:
```yaml
spec:
  externalTrafficPolicy: Local
```
But the real answer is more nuanced than “set it and forget it”. In a production cluster, Local changes traffic flow, health checks, rolling updates, and how evenly requests are spread across nodes.
In this guide, I’ll explain what externalTrafficPolicy: Local actually does, where to set it when you use ingress-nginx, what trade-offs come with it, and how to make it work well with a FastAPI application.
What externalTrafficPolicy Actually Controls
Kubernetes lets a Service choose how external traffic is forwarded.
There are two values:
| Mode | What happens | Source IP | Trade-off |
|---|---|---|---|
| Cluster | Any node can receive traffic and forward it to any ready pod in the cluster | Usually lost or replaced | Better balancing, but may add an extra hop |
| Local | A node only forwards traffic to pods running on that same node | Preserved | No cross-node forwarding, possible uneven traffic |
That means Local is not a logging feature. It is a routing decision.
When you choose Local, Kubernetes avoids forwarding external traffic from one node to another node for that Service. That is the reason the original source IP can stay intact.
`externalTrafficPolicy: Local` preserves client IP by restricting external traffic to node-local endpoints.
The Most Important Practical Detail
If you are using an Ingress controller such as ingress-nginx, the setting usually belongs on the Ingress controller Service, not on your application’s internal ClusterIP Service.
A common path looks like this:
```text
Client
  -> Cloud Load Balancer
  -> ingress-nginx Service (LoadBalancer)
  -> ingress-nginx Pod
  -> app Service (ClusterIP)
  -> FastAPI Pod
```
In that flow, the “external” boundary is the Service that exposes the ingress controller.
So this is the Service that typically needs:
```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
```
Your backend app Service often stays as a regular ClusterIP Service.
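Put together, a minimal sketch of the two Services might look like this. Names, labels, and ports here are illustrative, not taken from any particular chart:

```yaml
# Service exposing the ingress controller -- this is the external boundary,
# so this is where externalTrafficPolicy belongs.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
---
# Internal app Service -- a plain ClusterIP is usually fine here,
# because traffic reaching it is already inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: fastapi-app
spec:
  type: ClusterIP
  selector:
    app: fastapi-app
  ports:
    - port: 8000
      targetPort: 8000
```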
Example: ingress-nginx with Helm
If you install ingress-nginx with Helm, the setting is usually applied like this:
```yaml
controller:
  service:
    externalTrafficPolicy: Local
```
Or with helm upgrade:
```bash
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.externalTrafficPolicy=Local
```
You can verify it with:
```bash
kubectl get svc ingress-nginx-controller \
  -n ingress-nginx \
  -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
```
If Kubernetes allocates a health check port, you can inspect that too:
```bash
kubectl get svc ingress-nginx-controller \
  -n ingress-nginx \
  -o jsonpath='{.spec.healthCheckNodePort}{"\n"}'
```
What Changes When You Switch to Local
1. Source IP can be preserved
This is the reason most teams enable it.
Instead of seeing the node IP or an internal hop, the ingress layer can now see the original client IP more directly.
For IP-based rate limiting, country filtering, request tracing, and accurate access logs, this is usually the desired behavior.
2. Nodes without local endpoints stop forwarding traffic
This is the part people often miss.
With externalTrafficPolicy: Local, if a node receives traffic for the Service but has no local pod for that Service, kube-proxy will not forward the traffic to another node.
That is expected behavior.
This is why health checks matter so much with Local: the external load balancer should send traffic only to nodes that actually have a local endpoint.
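One way to see this contract in action, assuming your provider allocated a `healthCheckNodePort` on the Service, is to probe a node directly: kube-proxy answers on that port and reports whether the node currently has a local endpoint. `NODE_IP` below is a placeholder for any reachable worker-node address.

```bash
# Read the allocated health check port from the Service spec.
HC_PORT=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.healthCheckNodePort}')

# kube-proxy serves /healthz on this port: HTTP 200 when the node has a
# local endpoint for the Service, 503 when it does not.
curl -si "http://${NODE_IP}:${HC_PORT}/healthz" | head -n 1
```

A 503 here is exactly how the external load balancer learns to stop sending traffic to that node.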
3. Traffic can become uneven
Local preserves source IP by removing cross-node forwarding, but that also means balancing is only as good as your pod placement.
If one node runs more ingress pods than another, or if some nodes have none at all, request distribution may become less even.
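To judge how even (or uneven) the spread actually is, it helps to look at where the ingress pods landed and which endpoints back the Service:

```bash
# One line per pod, including the node each one is scheduled on.
kubectl get pods -n ingress-nginx -o wide

# EndpointSlices show the endpoints backing the Service, with node names.
kubectl get endpointslices -n ingress-nginx
```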
4. Rolling updates matter more
During rollouts, a node may temporarily lose its local ingress pod before the load balancer has fully stopped sending traffic there.
Modern Kubernetes behavior helps with draining, especially around terminating endpoints, but this is still something to think about in production.
Make Local Reliable in Production
To make externalTrafficPolicy: Local work well, focus on pod placement.
Spread ingress pods across nodes
A basic strategy is to distribute ingress pods as evenly as possible:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
```
If your cluster spans multiple zones, also spread across zones when appropriate.
Keep enough replicas
If you only run one ingress pod in a multi-node cluster, Local will work, but only one node can actually serve traffic correctly.
That is usually not what you want for production.
Consider a DaemonSet for specific patterns
On some bare-metal or NodePort setups, teams choose a DaemonSet so every node has an ingress pod. That can make Local more predictable, because every eligible node has a local endpoint.
It is not mandatory, but it is a useful design option.
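With the ingress-nginx Helm chart, that pattern is usually just a values change; the chart's `controller.kind` value accepts `DaemonSet` (verify against the chart version you actually deploy):

```yaml
controller:
  kind: DaemonSet          # run one ingress pod per eligible node
  service:
    externalTrafficPolicy: Local
```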
Local Solves One Layer, Not Every Layer
Even after you preserve the source IP at the Service layer, your app may still sit behind ingress-nginx, which means request metadata is often passed through headers.
For HTTP traffic, ingress-nginx commonly relies on forwarded headers such as X-Forwarded-For.
If there are extra proxies in front of the cluster, for example a cloud load balancer, CDN, WAF, or another reverse proxy, you should configure ingress-nginx to trust the right proxy CIDRs.
A typical configuration looks like this:
```yaml
controller:
  config:
    use-forwarded-headers: "true"
    proxy-real-ip-cidr: "10.0.0.0/8"
```
If your upstream load balancer supports the PROXY protocol, that is another option:
```yaml
controller:
  config:
    use-proxy-protocol: "true"
```
In more complex chains, both may be needed together.
FastAPI Example
If your app is behind ingress-nginx, you normally want FastAPI and Uvicorn to trust forwarded headers only from trusted proxies.
Run Uvicorn with trusted forwarded headers
```bash
uvicorn app.main:app \
  --host 0.0.0.0 \
  --port 8000 \
  --proxy-headers \
  --forwarded-allow-ips="10.0.0.0/8,127.0.0.1"
```
If your application is reachable only through your trusted ingress proxy, some teams use:
```bash
uvicorn app.main:app --proxy-headers --forwarded-allow-ips="*"
```
Use that only when direct client access to the app is not possible.
Inspect the client IP in FastAPI
```python
from fastapi import FastAPI, Request

app = FastAPI()


@app.get("/debug/ip")
async def debug_ip(request: Request):
    x_forwarded_for = request.headers.get("x-forwarded-for")
    real_ip = request.headers.get("x-real-ip")
    client_host = request.client.host if request.client else None
    return {
        "client_host": client_host,
        "x_forwarded_for": x_forwarded_for,
        "x_real_ip": real_ip,
    }
```
In practice:
- `request.client.host` is useful only if your ASGI server is configured to trust the proxy headers correctly.
- `X-Forwarded-For` is often the easiest debug signal when you are validating the full chain.
- For rate limiting and IP-based security, many teams enforce policy at the ingress layer first, not only in the application.
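When you do parse `X-Forwarded-For` yourself, a small helper keeps the logic explicit. This is an illustrative sketch (the function name and fallback behavior are my own, not part of FastAPI): it takes the left-most entry of the chain, which is the real client only if every proxy in front of the app is trusted.

```python
from typing import Optional


def client_ip_from_xff(xff: Optional[str], fallback: Optional[str] = None) -> Optional[str]:
    """Return the left-most address in an X-Forwarded-For chain.

    The left-most entry is the original client only when every upstream
    proxy is trusted and appends to the header honestly.
    """
    if not xff:
        return fallback
    first = xff.split(",")[0].strip()
    return first or fallback


# The left-most hop is the client; later hops are intermediate proxies.
print(client_ip_from_xff("203.0.113.7, 10.0.0.5, 10.0.0.9"))  # -> 203.0.113.7
print(client_ip_from_xff(None, fallback="10.0.0.5"))          # -> 10.0.0.5
```

If any hop in front of you is untrusted, the left-most value is attacker-controlled, which is exactly why Uvicorn's `--forwarded-allow-ips` restriction matters.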
How to Test That It Works
A simple way to validate the setup is:
- Expose your ingress controller with `externalTrafficPolicy: Local`.
- Deploy a tiny debug app that returns request headers and the client address.
- Send requests from outside the cluster.
- Compare what you see in:
- the cloud load balancer behavior,
- ingress-nginx access logs,
- the application response.
For ingress-nginx logs, the `remote_addr` variable is especially useful.
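Assuming the default controller Deployment name from the Helm chart, you can pull recent access-log lines and check the first field, which is `remote_addr` in the default log format:

```bash
# Show the 20 most recent access-log lines from the controller.
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=20
```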
If you still see internal node addresses instead of real client IPs, check these first:
- `externalTrafficPolicy` is set on the correct Service.
- the load balancer health checks are targeting the right nodes.
- ingress pods are spread across nodes.
- `proxy-real-ip-cidr` trusts the actual upstream proxy ranges.
- PROXY protocol settings match on both sides.
- FastAPI/Uvicorn trusts forwarded headers from the ingress layer.
When You Should Use Local
externalTrafficPolicy: Local is a good fit when:
- you need the real client IP for logs or security rules,
- you use ingress-level rate limiting or allowlists,
- you want to avoid an extra hop for external traffic,
- you can place ingress pods across nodes intentionally.
It is a weaker fit when:
- you care more about simple, even traffic spread than source IP visibility,
- your ingress pods are sparsely placed,
- your environment already preserves source IP another way,
- you have not planned for health-check behavior and rollout edge cases.
One Important Exception: Cilium Ingress
If you use Cilium’s ingress implementation, do not blindly copy the usual Local advice.
Cilium's documentation notes that its ingress path can keep source IP visibility without relying on externalTrafficPolicy: Local the way traditional kube-proxy-based setups do.
So if your cluster uses Cilium ingress, validate the actual behavior before assuming you need the same configuration as ingress-nginx.
Final Takeaway
externalTrafficPolicy: Local is one of those Kubernetes settings that looks tiny but changes a lot.
It can preserve the real client IP and remove an unnecessary cross-node hop, which is excellent for ingress logging, rate limiting, and security controls.
But it also means:
- nodes without local endpoints do not forward traffic,
- health checks become part of the routing contract,
- pod placement matters much more,
- rollouts and draining deserve attention.
If you are using ingress-nginx, the most common production answer is:
- set `externalTrafficPolicy: Local` on the ingress controller Service,
- spread ingress pods across nodes,
- configure forwarded headers or PROXY protocol correctly,
- make your FastAPI app trust only the proxies you actually control.
That combination is usually what turns “why do I only see node IPs?” into a setup you can rely on.
References
- OneUptime article that inspired the topic - oneuptime[.]com/blog/post/2026-02-09-externaltrafficpolicy-local-source-ip/view
- Kubernetes: Using Source IP - kubernetes[.]io/docs/tutorials/services/source-ip/
- Kubernetes: Create an External Load Balancer - kubernetes[.]io/docs/tasks/access-application-cluster/create-external-load-balancer/
- Kubernetes: Virtual IPs and Service Proxies - kubernetes[.]io/docs/reference/networking/virtual-ips/
- ingress-nginx Helm chart values - github[.]com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
- ingress-nginx source IP guidance - kubernetes.github[.]io/ingress-nginx/user-guide/miscellaneous/
- ingress-nginx log format variables - kubernetes.github[.]io/ingress-nginx/user-guide/nginx-configuration/log-format/
- FastAPI behind a proxy - fastapi.tiangolo[.]com/advanced/behind-a-proxy/
- Uvicorn deployment and proxy headers - uvicorn[.]dev/deployment/
- Cilium ingress source IP visibility - docs.cilium[.]io/en/stable/network/servicemesh/ingress/