Networking
Manage Kubernetes networking resources including Services, Ingress, NetworkPolicies, and Endpoints for pod communication and external access.
Networking Resources
Services
Expose pods via stable DNS names and load-balance traffic across pod replicas.
Viewing Services
| Column | Description |
|---|---|
| Name | Service name |
| Namespace | Kubernetes namespace |
| Type | ClusterIP, NodePort, LoadBalancer |
| Cluster IP | Internal IP for service |
| External IP | Public IP (LoadBalancer only) |
| Ports | Port mappings (80:8080, etc.) |
| Selector | Label selector for pods |
| Age | Time since creation |
Service Types
ClusterIP (Default)
Exposes service on cluster-internal IP only.
Use Cases:
- Internal communication between services
- Database accessed only by application pods
- Backend APIs not exposed publicly
Access:
- From within cluster: `http://service-name.namespace.svc.cluster.local`
- Not accessible from outside cluster
Example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: database
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```
Result:
```text
NAME       TYPE        CLUSTER-IP     PORT(S)    AGE
database   ClusterIP   10.100.50.12   5432/TCP   5d
```
NodePort
Exposes service on each node's IP at a static port (30000-32767).
Use Cases:
- Development/testing access to services
- Legacy systems requiring specific port access
- Environments without LoadBalancer support
Access:
- From outside cluster: `http://<node-ip>:<node-port>`
- From within cluster: `http://service-name:port` (ClusterIP still works)
Example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30100
```
Result:
```text
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
web-app   NodePort   10.100.75.30   <none>        80:30100/TCP   2h
```
Access: `http://<any-node-ip>:30100`
LoadBalancer
Provisions external load balancer (AWS ELB/NLB, GCP LB, Azure LB) to distribute traffic.
Use Cases:
- Production applications needing public access
- High-availability services
- Applications requiring SSL termination
Access:
- External load balancer DNS/IP
- Load balancer distributes traffic across all matching pods
Example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
    - port: 443
      targetPort: 8443
      protocol: TCP
```
Result:
```text
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP                          PORT(S)
api-gateway   LoadBalancer   10.100.120.45   a1b2c3.us-east-1.elb.amazonaws.com   443:31234/TCP
```
Access: https://a1b2c3.us-east-1.elb.amazonaws.com
Cloud Provider Annotations:
AWS:
```yaml
annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: "nlb"                  # Use NLB instead of CLB
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."  # SSL certificate
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"             # Internal LB
```
GCP:
```yaml
annotations:
  cloud.google.com/load-balancer-type: "Internal"  # Internal LB
```
Azure:
```yaml
annotations:
  service.beta.kubernetes.io/azure-load-balancer-internal: "true"  # Internal LB
```
Service Endpoints
View which pods are receiving traffic from a service:
Navigate to Service Details
Click service name to open detail panel.
View Endpoints Tab
Shows all pod IPs and ports backing the service.
Endpoint Status
- Ready: Pod passes readiness checks, receives traffic
- Not Ready: Pod failing readiness checks, removed from rotation
Example Endpoints:
```text
Service: api-gateway
─────────────────────────────────────
Endpoints:
  10.244.1.5:8080   (pod: api-gateway-7d9f8c6b4d-2xkjp)   Ready
  10.244.2.8:8080   (pod: api-gateway-7d9f8c6b4d-9vwrt)   Ready
  10.244.3.12:8080  (pod: api-gateway-7d9f8c6b4d-kp3mn)   Not Ready
```
Session Affinity
Route requests from same client to same pod:
```yaml
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # 3 hours
```
Use Cases:
- Stateful applications with local session storage
- WebSocket connections
- Shopping carts with session state
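Putting the fragment above into a full manifest, a sketch of a ClusterIP service with client-IP affinity might look like this (the `cart-service` name and `app: cart` selector are illustrative, not from the examples above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cart-service        # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: cart                # hypothetical pod label
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # same client IP hits the same pod for 3 hours
  ports:
    - port: 80
      targetPort: 8080
```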
Ingress
HTTP/HTTPS traffic routing rules with host-based and path-based routing.
Viewing Ingress
| Column | Description |
|---|---|
| Name | Ingress name |
| Namespace | Kubernetes namespace |
| Hosts | Domain names (api.example.com) |
| Address | Load balancer IP/DNS |
| Ports | 80, 443 |
| Age | Time since creation |
Ingress Controllers
Ingress resources require an ingress controller to function:
| Controller | Cloud Support | Features |
|---|---|---|
| NGINX | All clouds, self-hosted | Most popular, wide feature support, SSL, rewrites |
| AWS ALB | AWS only | Native ALB integration, WAF support, TargetGroupBinding |
| GCE | GCP only | Native GCP LB integration, Cloud Armor |
| Traefik | All clouds | Auto SSL (Let's Encrypt), dynamic config, middleware |
| Istio | All clouds | Service mesh features, advanced routing, mTLS |
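The `ingressClassName` used in the examples below refers to an IngressClass resource that binds Ingress objects to a specific controller. A minimal sketch for the community NGINX controller (the `controller` string is the identifier that controller registers; other controllers use their own values):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx  # identifier registered by ingress-nginx
```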
Path-Based Routing
Route requests to different services based on URL path:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 3000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```
Routing Behavior:
```text
https://example.com/api/users   → api-service:8080
https://example.com/admin/login → admin-service:3000
https://example.com/            → frontend:80
https://example.com/about       → frontend:80
```
Host-Based Routing
Route traffic based on hostname (virtual hosting):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 3000
```
Routing Behavior:
```text
https://api.example.com/users    → api-service:8080
https://admin.example.com/login  → admin-service:3000
```
TLS/SSL Configuration
Configure HTTPS with TLS certificates:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
        - www.example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```
TLS Secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-com-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-cert>
  tls.key: <base64-encoded-key>
```
Ingress Annotations
Common annotations for NGINX ingress controller:
```yaml
metadata:
  annotations:
    # SSL redirect
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # Increase body size limit (file uploads)
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    # WebSocket support
    nginx.ingress.kubernetes.io/websocket-services: "chat-service"
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "100"
    # Custom timeout
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    # CORS headers
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
```
NetworkPolicies
Firewall rules controlling which pods can communicate with each other.
Viewing NetworkPolicies
| Column | Description |
|---|---|
| Name | NetworkPolicy name |
| Namespace | Kubernetes namespace |
| Pod Selector | Pods affected by this policy |
| Policy Types | Ingress, Egress, or both |
| Age | Time since creation |
How NetworkPolicies Work
NetworkPolicies require a CNI plugin that supports them (Calico, Cilium, Weave Net). AWS VPC CNI, by default, does not enforce NetworkPolicies.
Default Behavior (No Policies):
- All pods can communicate with all other pods
- All pods can communicate with external networks
- No restrictions
With NetworkPolicies:
- Pods are isolated based on policy rules
- Only explicitly allowed traffic is permitted
- Default deny all traffic not matching any rule
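A common starting point is a default-deny policy covering both directions, with the targeted allow rules in the sections below layered on top. A sketch for the `production` namespace used in the other examples:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:      # listing both types with no rules denies all traffic
    - Ingress
    - Egress
```

Note that a bare deny-all egress policy also blocks DNS; in practice it is usually paired with an allow rule for DNS like the one in the Egress Policies section below.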
Allow Traffic from Specific Pods
Allow only frontend pods to access backend API:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```
Effect:
- Backend API pods only accept traffic from frontend pods on port 8080
- All other traffic to backend pods is denied
- Frontend pods can still communicate with any other pod (no egress policy)
Deny All Traffic (Namespace Isolation)
Deny all ingress traffic to pods in namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}  # Applies to all pods in namespace
  policyTypes:
    - Ingress
```
Effect:
- Pods in the namespace stop accepting any inbound traffic
- Egress traffic is still allowed
- Useful as a baseline before adding specific allow rules
Allow Traffic from Specific Namespaces
Allow monitoring namespace to scrape metrics from all pods:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090
```
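Matching on a `name: monitoring` label only works if someone has applied that label to the namespace by hand. Since Kubernetes v1.21, every namespace automatically carries a `kubernetes.io/metadata.name` label, so a variant of the `ingress` rule that needs no manual labeling would be:

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring  # label set automatically by Kubernetes
    ports:
      - protocol: TCP
        port: 9090
```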
Egress Policies
Control outbound traffic from pods:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backup-service
      ports:
        - protocol: TCP
          port: 3306
    - to:  # Allow DNS
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
```
Effect:
- Database pods can only send traffic to:
- backup-service pods on port 3306
- DNS pods (kube-system namespace) on port 53
- All other outbound traffic denied
- Prevents compromised database from exfiltrating data
IP Block Rules
Allow/deny traffic based on CIDR blocks:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-ip
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # Allow this IP range
            except:
              - 203.0.113.10/32    # Except this specific IP
```
Endpoints
IP addresses and ports of pods backing a service, automatically managed by Kubernetes.
Viewing Endpoints
| Column | Description |
|---|---|
| Name | Endpoint name (matches service name) |
| Namespace | Kubernetes namespace |
| Addresses | Pod IP:Port pairs |
| Age | Time since creation |
How Endpoints Work
Service Created
Service defines selector matching pods.
Endpoints Auto-Created
Kubernetes creates Endpoints object with same name as service.
Pods Matched
Controller finds all pods matching service selector.
IPs Populated
Pod IPs and ports added to Endpoints object.
Dynamic Updates
As pods are created/deleted, Endpoints automatically updated.
Endpoints Example
Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 8080
```
Matching Pods:
```text
NAME                     IP            READY
nginx-7d9f8c6b4d-abc12   10.244.1.5    True
nginx-7d9f8c6b4d-def34   10.244.2.8    True
nginx-7d9f8c6b4d-ghi56   10.244.3.12   False
```
Resulting Endpoints:
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: web
subsets:
  - addresses:
      - ip: 10.244.1.5
        targetRef:
          kind: Pod
          name: nginx-7d9f8c6b4d-abc12
      - ip: 10.244.2.8
        targetRef:
          kind: Pod
          name: nginx-7d9f8c6b4d-def34
    ports:
      - port: 8080
        protocol: TCP
```
Note: Pod nginx-7d9f8c6b4d-ghi56 is not included because it is not Ready (failing its readiness probe).
Manual Endpoints
Create service pointing to external resource:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-database
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-database
subsets:
  - addresses:
      - ip: 192.168.1.100
    ports:
      - port: 3306
```
Use Cases:
- Legacy database outside Kubernetes
- External API service
- Migration from VMs to containers
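When the external resource is reachable by DNS name rather than a fixed IP, a simpler alternative to manual Endpoints is an ExternalName service, which returns a CNAME record instead of proxying traffic. A sketch (the hostname is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-api
spec:
  type: ExternalName
  externalName: api.partner.example.com  # hypothetical external hostname
```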
Troubleshooting Networking
Service Has No Endpoints
Cause: No pods match the service selector, or all matching pods are failing readiness checks.
Solution: Check that the service selector matches pod labels exactly (labels are case-sensitive). Verify pods are Ready with `kubectl get pods`. Check the pod readiness probe configuration.
Pods Cannot Reach a Service
Cause: A NetworkPolicy is blocking traffic, DNS resolution is failing, or the service is misconfigured.
Solution: Test DNS with `nslookup service-name` from inside a pod. Check NetworkPolicies with `kubectl get netpol`. Verify the service ClusterIP and ports.
LoadBalancer Stuck in Pending
Cause: Cloud provider load balancer provisioning failed, or the cloud controller manager is not running.
Solution: Check service events for errors. Verify cloud provider credentials. Ensure the cloud controller manager is running. Check AWS/GCP/Azure load balancer quotas.
Ingress Not Routing Traffic
Cause: Ingress controller not installed, wrong `ingressClassName`, or DNS not pointing to the ingress load balancer.
Solution: Verify the ingress controller is running. Check that `ingressClassName` matches the controller. Confirm DNS points to the ingress LoadBalancer address. Test with `curl` directly against the LB IP.