Kubernetes Networking Explained — From Pod to Internet
Visual guide to Kubernetes networking layers. Understand Services, DNS resolution, CNI plugins, and Network Policies through animated diagrams and real cluster examples.
Kubernetes networking is where most people get lost. Not because the concepts are hard — but because there are four separate layers doing different jobs, and the docs explain each one in isolation. When your pod can’t reach another pod, you don’t know which layer broke.
Let’s stack them on top of each other so the full picture clicks.
1. The Four Networking Layers
Every single packet in your cluster passes through this stack. Debugging means figuring out which layer is broken. Is it DNS? Is it the service proxy? Is it the CNI routing?
Kubernetes Networking — 4 Layers Deep
Every packet traverses these layers. Understanding them is 90% of debugging connectivity issues.
The mental model that saves you: Kubernetes networking guarantees that every pod gets a unique IP and can reach every other pod without NAT. That’s the contract. The CNI plugin implementation decides HOW that contract is fulfilled — overlay networks, BGP, eBPF, whatever.
2. Service Types
Services are the stable front door to your pods. Pods come and go (they’re ephemeral). Services have stable IPs and DNS names. But there are four types, and most people only know LoadBalancer.
Service Types — When to Use What
ClusterIP (internal only)
Default. Only reachable inside the cluster. Perfect for service-to-service communication. Creates a virtual IP that kube-proxy maps to healthy pod endpoints.
NodePort (ports 30000-32767)
Opens a static port on every node in the cluster. Traffic to any node's IP on that port reaches your pod. Simple but exposes nodes directly.
LoadBalancer (cloud LB provisioned)
Provisions a cloud load balancer (an AWS NLB/ELB, a GCP load balancer). Gets a real public IP. Traffic flows: Internet → LB → NodePort → Pod. Most common for production apps.
ExternalName (CNAME redirect)
No proxying. Just creates a DNS CNAME record pointing to an external hostname. Useful for migrating external services into the cluster gradually.
The mistake I see in production: teams use LoadBalancer for everything, even internal services. You end up with 30 cloud load balancers ($$$) when ClusterIP would work fine. LoadBalancer is for public-facing endpoints. Everything else is ClusterIP.
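To make the default concrete, here's a minimal ClusterIP Service sketch. The names `api-service`, `payments`, and the `app: api` label are illustrative, not from a real cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service        # becomes the DNS name other pods use
  namespace: payments
spec:
  type: ClusterIP          # the default; omitting `type` gives you the same thing
  selector:
    app: api               # traffic is load-balanced across healthy pods with this label
  ports:
    - port: 8080           # port exposed on the Service's virtual IP
      targetPort: 8080     # port the container actually listens on
```

Changing `type: ClusterIP` to `type: LoadBalancer` (plus whatever annotations your cloud provider wants) is typically all it takes to promote a service to a public endpoint, which is exactly why it's so tempting to overuse.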
3. DNS — How Pods Find Each Other
You never hardcode pod IPs. You use service names. But under the hood, that name has to resolve to an IP, then that IP has to route to an actual pod. Here’s the full flow:
DNS Resolution Inside the Cluster
When Pod A calls `http://api-service:8080`, here's what actually happens:
`<service>.<namespace>.svc.cluster.local` is the full form. Same namespace? Just use the service name. Cross-namespace? Add the namespace: `api-service.payments`.
Common debugging gotcha: DNS works inside the pod but the connection times out. That means the DNS layer is fine; the problem is in kube-proxy (iptables rules) or the CNI layer (the packet can't reach the destination node). Check `kubectl get endpoints` to verify the service has backends.
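As a sketch of how that naming gets used in practice, here's a hypothetical Deployment in a `storefront` namespace calling `api-service` in `payments` via its cluster DNS name (all names are made up for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  namespace: storefront                 # a different namespace from the service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: example.com/checkout:1.0   # placeholder image
          env:
            - name: API_URL
              # cross-namespace call: service name + namespace resolves via cluster DNS
              value: "http://api-service.payments.svc.cluster.local:8080"
```

The short form `api-service.payments` would resolve too, thanks to the search domains in the pod's resolv.conf; the fully qualified name just skips those extra lookups.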
4. Network Policies
The default Kubernetes posture is “everything can talk to everything.” That’s a flat network with zero segmentation. If one pod gets compromised, the attacker has lateral movement to every other pod in the cluster.
Network Policies — Your Cluster's Firewall
By default, every pod can talk to every other pod. That's terrifying.
The principle is simple: default-deny everything, then whitelist specific paths. Start with deny all ingress for sensitive namespaces (databases, secrets vaults), then add policies like “only pods with label app=api can reach the database on port 5432.”
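Here's a minimal sketch of that pattern, assuming a `databases` namespace, `app: postgres` labels on the database pods, and `app: api` on the allowed clients (all illustrative names):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: databases
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is dropped
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-postgres
  namespace: databases
spec:
  podSelector:
    matchLabels:
      app: postgres          # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api       # only pods labeled app=api in this namespace get through
      ports:
        - protocol: TCP
          port: 5432
```

A bare `podSelector` in the `from` clause only matches pods in the same namespace; to admit `app=api` pods from another namespace, pair it with a `namespaceSelector`. And NetworkPolicies only take effect if your CNI plugin enforces them (Calico and Cilium do), which leads straight into the next section.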
5. CNI Plugins — The Network Brain
The Container Network Interface plugin is the thing that actually moves packets between nodes. Kubernetes doesn’t ship with one — you choose it at cluster creation. And the choice matters more than people think.
CNI Comparison — Picking Your Network Brain
The trend is clear: eBPF-based networking (Cilium) is replacing iptables-based networking (kube-proxy + Calico legacy mode). eBPF programs run in the kernel and resolve services with hash-map lookups instead of walking long iptables chains. At scale (1000+ services), the performance difference is dramatic: O(1) lookups vs O(n) chain traversal.