Kubernetes Core Concepts: Understanding Pod, Service, and Ingress
Kubernetes feels overwhelming at first. Pod, Node, Cluster, ReplicaSet, Deployment, Service, Ingress, ConfigMap, Secret — the terminology never ends. And surprisingly few articles explain how these things connect to each other.
This post explains K8s core concepts using a city infrastructure analogy — apartment buildings, streets, and information desks. Once the mental model clicks, YAML files start reading naturally.
What Problem Does Kubernetes Solve?
Creating containers with Docker is easy. But managing 100 containers in production? These problems emerge:
- Who restarts a container when it crashes?
- How do you scale out when traffic spikes?
- How do you spread containers across multiple servers?
- How do you deploy a new version without downtime?
Kubernetes (K8s) is a container orchestration platform that handles all of this automatically.
K8s Architecture Overview
┌─────────────────── Kubernetes Cluster ───────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Control Plane │ │
│ │ ┌─────────┐ ┌──────────┐ ┌────────────────────┐ │ │
│ │ │ API │ │ etcd │ │ Scheduler / │ │ │
│ │ │ Server │ │ (state) │ │ Controller Manager│ │ │
│ │ └─────────┘ └──────────┘ └────────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Worker │ │ Worker │ │ Worker │ │
│ │ Node 1 │ │ Node 2 │ │ Node 3 │ │
│ │ [Pod][Pod]│ │ [Pod][Pod]│ │ [Pod] │ │
│ └────────────┘ └────────────┘ └────────────┘ │
└───────────────────────────────────────────────────────────┘
Control Plane: The brain of the cluster. Decides where to place Pods, stores state in etcd, and continuously reconciles desired state with actual state.
Worker Nodes: The servers where containers actually run. A kubelet agent receives commands from the Control Plane and executes them.
Analogy: Control Plane is city hall. Worker Nodes are districts. City hall says "maintain 5 apartments (Pods) in District 3," and District 3 makes it happen.
Pod: The Smallest Deployable Unit
What Is a Pod?
A Pod is the smallest deployable unit in K8s. It's a wrapper for one or more containers.
Pod
├── Container 1: nginx (main app)
└── Container 2: log-shipper (sidecar for log collection)
Containers within the same Pod:
- Share the same network namespace → communicate via localhost
- Can share volumes
- Always co-located on the same Node
Analogy: A Pod is one apartment unit. Multiple residents (containers) share the same address (IP).
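The sidecar layout above can be sketched as a two-container Pod sharing an emptyDir volume; the names and images here are illustrative, not from the original:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger        # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper          # sidecar: reads what nginx writes
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch space, lives as long as the Pod
```

Both containers see the same files under /var/log/nginx, and because they share one network namespace, the sidecar could also have scraped nginx over localhost.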
Pod YAML
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: my-app-pod
labels:
app: my-app
version: v1
spec:
containers:
- name: app
image: my-registry/my-app:v1
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: production
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
readinessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 15
failureThreshold: 3
readinessProbe vs livenessProbe
| Probe | Purpose | On Failure |
|---|---|---|
| readinessProbe | "Ready to receive traffic?" | Removed from Service |
| livenessProbe | "Still alive?" | Pod restarted |
You Don't Create Pods Directly
In practice, you don't create Pods manually. A standalone Pod that dies stays dead, because no controller is watching to recreate it. Use Deployments instead.
ReplicaSet: Maintaining a Pod Count
A ReplicaSet ensures "keep N copies of this Pod running." If a Pod dies, it automatically creates a new one.
# replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: my-app-rs
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template: # this is the Pod template
metadata:
labels:
app: my-app
spec:
containers:
- name: app
image: my-registry/my-app:v1
You also don't use ReplicaSets directly — they don't support rolling updates on image changes. Deployments manage ReplicaSets for you.
Deployment: The Real Workhorse
Deployment handles Pods + ReplicaSet + rolling updates + rollback. This is what you use most of the time.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-service
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: api-service
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: api-service
version: v2
spec:
containers:
- name: api-service
image: my-registry/api-service:v2
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: api-config
- secretRef:
name: api-secrets
resources:
requests:
memory: "256Mi"
cpu: "200m"
limits:
memory: "512Mi"
cpu: "1000m"
readinessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3
# Deploy
kubectl apply -f deployment.yaml
# Check status
kubectl rollout status deployment/api-service
# Update image
kubectl set image deployment/api-service api-service=my-registry/api-service:v3
# Rollback
kubectl rollout undo deployment/api-service
# Scale out
kubectl scale deployment api-service --replicas=5
Analogy: A Deployment is a construction contract: "Maintain 3 apartments to this spec; renovate one at a time."
Service: Giving Pods a Stable Address
Pods get a new IP every time they're recreated. A Service provides a stable IP/DNS and load balancing in front of a group of Pods.
Analogy: A Service is the apartment complex front desk. Even when residents (Pods) change, the desk number (Service IP) stays the same.
Service Types
1. ClusterIP (Default)
Assigns an IP only reachable inside the cluster. Used for service-to-service communication.
# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
name: api-service
spec:
type: ClusterIP # default, can be omitted
selector:
app: api-service # routes to Pods with this label
ports:
- port: 80
targetPort: 3000
# Access from inside the cluster
curl http://api-service:80
curl http://api-service.production.svc.cluster.local:80 # full DNS: <service>.<namespace>.svc.cluster.local
2. NodePort
Exposes the service on each Node's IP at a static port. Good for dev/test.
apiVersion: v1
kind: Service
metadata:
name: api-service-nodeport
spec:
type: NodePort
selector:
app: api-service
ports:
- port: 80
targetPort: 3000
nodePort: 30080 # 30000-32767 range; auto-assigned if omitted
curl http://[NODE_IP]:30080
3. LoadBalancer
Provisions a cloud provider load balancer (AWS, GCP, Azure). Use for external production traffic.
apiVersion: v1
kind: Service
metadata:
name: api-service-lb
spec:
type: LoadBalancer
selector:
app: api-service
ports:
- port: 80
targetPort: 3000
Service Type Comparison
| Type | Accessible From | Use Case |
|---|---|---|
| ClusterIP | Inside cluster only | Service-to-service calls |
| NodePort | External (via Node IP) | Dev/test access |
| LoadBalancer | External (via LB IP) | Production external exposure |
| ExternalName | DNS alias | Reference external services |
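ExternalName, listed above but not shown earlier, has no selector at all; it simply maps the Service name to an external DNS name. A minimal sketch (the hostname is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # cluster DNS returns a CNAME to this host
```

Pods can then call `external-db` like any in-cluster service; cluster DNS answers with a CNAME for `db.example.com`.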
Ingress: Routing Traffic Through a Single Entry Point
Creating a LoadBalancer per service multiplies cloud LB costs. Ingress provides path-based routing through one LB to many services.
Analogy: Ingress is the building lobby directory and elevator. /api routes to backend; / routes to frontend.
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: main-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"  # requests per second per client IP
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
- hosts:
- api.example.com
secretName: tls-secret
rules:
- host: api.example.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: api-service
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: frontend-service
port:
number: 80
- host: admin.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: admin-service
port:
number: 80
Ingress defines routing rules. The Ingress Controller (typically nginx-ingress or traefik) does the actual traffic handling.
# Install nginx Ingress Controller (prefer pinning a versioned release tag over main in production)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
ConfigMap: Separating Configuration from Code
With config external to the image, you don't need to rebuild containers per environment. ConfigMaps store non-sensitive configuration.
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: api-config
data:
NODE_ENV: "production"
LOG_LEVEL: "info"
DB_HOST: "postgres-service"
DB_PORT: "5432"
DB_NAME: "myapp"
app.config.json: |
{
"featureFlags": {
"newDashboard": true,
"betaSearch": false
}
}
Usage Patterns
# Inject individual keys
env:
- name: NODE_ENV
valueFrom:
configMapKeyRef:
name: api-config
key: NODE_ENV
# Inject all keys as env vars
envFrom:
- configMapRef:
name: api-config
# Mount as files
volumeMounts:
- name: config-volume
mountPath: /app/config
volumes:
- name: config-volume
configMap:
name: api-config
Secret: Managing Sensitive Data
Secrets resemble ConfigMaps, but their values are Base64-encoded and access to them can be locked down more tightly with RBAC. Keep in mind that Base64 is encoding, not encryption: anyone who can read the Secret can decode it.
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: api-secrets
type: Opaque
data:
# Base64-encoded values (echo -n "mypassword" | base64)
DB_PASSWORD: bXlwYXNzd29yZA==
JWT_SECRET: c3VwZXJzZWNyZXRrZXk=
REDIS_PASSWORD: cmVkaXNwYXNz
# Create from literals
kubectl create secret generic api-secrets \
--from-literal=DB_PASSWORD=mypassword \
--from-literal=JWT_SECRET=supersecretkey
# Or from a .env file
kubectl create secret generic api-secrets --from-env-file=.env.production
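Because the `data:` values must be Base64-encoded by hand, it is worth verifying the round trip (shown with GNU coreutils `base64`):

```shell
# Encode a value for the data: field of a Secret
echo -n "mypassword" | base64
# → bXlwYXNzd29yZA==

# Decode to double-check (and to see why Base64 is not encryption)
echo -n "bXlwYXNzd29yZA==" | base64 --decode
# → mypassword
```

The `-n` flag matters: without it, `echo` appends a newline that gets encoded into the value and breaks the password.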
In real production, don't manage Secrets in YAML files. Use tools like Vault, AWS Secrets Manager, or the External Secrets Operator for dynamic injection.
Namespace: Logical Isolation
Namespaces logically divide cluster resources — useful for splitting by team or environment.
apiVersion: v1
kind: Namespace
metadata:
name: production
---
apiVersion: v1
kind: Namespace
metadata:
name: staging
kubectl get namespaces
kubectl get pods -n production
kubectl config set-context --current --namespace=production
The Full Picture
External Traffic
↓
[Ingress]
/api → api-service
/ → frontend-service
↓
[Service: api-service] (ClusterIP: 10.0.0.1:80)
- selector: app=api-service
- round-robin load balancing
↓
[Pod] [Pod] [Pod]
(app=api-service, v2)
↑
[Deployment: api-service]
- replicas: 3
- RollingUpdate strategy
- ConfigMap + Secret injected
The glue between all resources is labels and selectors. Deployments stamp labels onto Pods. Services find those Pods via selectors.
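That glue is easiest to see with the two halves side by side: the Service's `selector` must match the labels the Deployment stamps into its Pod template. A trimmed sketch reusing the names from above:

```yaml
# The Deployment stamps this label onto every Pod it creates...
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: api-service
---
# ...and the Service finds those Pods by the same label
kind: Service
spec:
  selector:
    app: api-service
```

If the two ever drift apart, the Service's endpoint list goes empty and traffic silently stops; `kubectl describe service` showing no Endpoints is the classic symptom.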
Local Development: minikube and kind
minikube
# Install (macOS)
brew install minikube
# Start cluster
minikube start --driver=docker --cpus=4 --memory=8192
# Open dashboard
minikube dashboard
# Enable Ingress addon
minikube addons enable ingress
# Expose service (NodePort)
minikube service api-service
# Delete cluster
minikube delete
kind (Kubernetes in Docker)
brew install kind
# Single-node cluster
kind create cluster --name dev-cluster
# Multi-node cluster
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --name dev-cluster --config kind-config.yaml
# Load local image into kind
kind load docker-image my-registry/api-service:v1 --name dev-cluster
kind delete cluster --name dev-cluster
| Tool | Characteristics | Best For |
|---|---|---|
| minikube | VM/Docker-based, rich addons | Personal learning, quick start |
| kind | K8s inside Docker containers | CI pipelines, multi-node testing |
Essential kubectl Commands
# Query resources
kubectl get pods,services,deployments -n production
kubectl get all -n production
# Detailed info (including events)
kubectl describe pod [pod-name] -n production
kubectl describe deployment api-service -n production
# Logs
kubectl logs [pod-name] -n production
kubectl logs -f deployment/api-service -n production # streaming
kubectl logs [pod-name] --previous -n production # previous container
# Port forwarding (access Pod directly from local)
kubectl port-forward deployment/api-service 3000:3000 -n production
# Edit a resource
kubectl edit deployment api-service -n production
# Force restart
kubectl rollout restart deployment/api-service -n production
# Resource usage
kubectl top nodes
kubectl top pods -n production
Wrap-Up
The relationships between K8s resources:
Namespace
└── Deployment (deployment definition + rolling updates)
└── ReplicaSet (maintains Pod count)
└── Pod (container execution unit)
├── ConfigMap (non-sensitive config)
└── Secret (sensitive data)
Service (load balancing + stable address for Pods)
└── finds Pods via selector labels
Ingress (routes external traffic → Services)
└── Ingress Controller does the actual processing
The YAML looks intimidating at first, but patterns emerge quickly once you start hands-on. The "Deployment + Service + Ingress" combination covers 90% of cases. Trying it yourself with minikube or kind is ten times faster than reading theory alone.