Containerization has changed how we deploy applications, and Kubernetes is the de facto standard for container orchestration. This article takes you from core concepts to hands-on practice, covering the building blocks you need to run Kubernetes with confidence.
Why Choose Kubernetes?
Container Orchestration Comparison
Container Orchestration Ecosystem:
┌─────────────────────────────────────────────────────┐
│ │
│ Kubernetes (K8s) │
│ ├── Industry standard, huge community │
│ ├── Comprehensive features, highly extensible │
│ ├── Native cloud provider support │
│ └── Steep learning curve │
│ │
│ Other Options: │
│ ├── Docker Swarm → Simple, good for small scale │
│ ├── Nomad → HashiCorp, lightweight │
│ ├── ECS/Fargate → AWS native, managed service │
│ └── Cloud Run → GCP serverless containers │
│ │
└─────────────────────────────────────────────────────┘
| Feature | Kubernetes | Docker Swarm | ECS |
|---|---|---|---|
| Complexity | High | Low | Medium |
| Scalability | Very High | Medium | High |
| Community | Massive | Smaller | AWS Ecosystem |
| Learning Curve | Steep | Gentle | Moderate |
| Managed Options | EKS/GKE/AKS | - | Native |
Core Concepts
Architecture Overview
Kubernetes Cluster Architecture:
┌─────────────────────────────────────────────────────┐
│ Control Plane │
│ ┌─────────────┐ ┌─────────────┐ ┌───────────────┐ │
│ │ API Server │ │ Scheduler │ │ Controller │ │
│ │ │ │ │ │ Manager │ │
│ └─────────────┘ └─────────────┘ └───────────────┘ │
│ ┌─────────────────────────────────────────────────┐│
│ │ etcd ││
│ └─────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────┘
│
┌────────────────────┼────────────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Node │ │ Node │ │ Node │
│ ┌─────┐ │ │ ┌─────┐ │ │ ┌─────┐ │
│ │Pod │ │ │ │Pod │ │ │ │Pod │ │
│ │ ┌─┐ │ │ │ │ ┌─┐ │ │ │ │ ┌─┐ │ │
│ │ │C│ │ │ │ │ │C│ │ │ │ │ │C│ │ │
│ │ └─┘ │ │ │ │ └─┘ │ │ │ │ └─┘ │ │
│ └─────┘ │ │ └─────┘ │ │ └─────┘ │
│ kubelet │ │ kubelet │ │ kubelet │
└─────────┘ └─────────┘ └─────────┘
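Before moving on, it helps to see where these components actually live. On self-managed clusters (kubeadm and similar) the control plane runs as static Pods in the kube-system namespace, so you can inspect it directly; managed offerings such as EKS, GKE, and AKS hide the control plane from you. A quick check:
# List control plane and system components (self-managed clusters)
kubectl get pods -n kube-system -o wide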
Core Resources
# Pod - Smallest deployable unit
apiVersion: v1
kind: Pod
metadata:
name: my-app
labels:
app: my-app
spec:
containers:
- name: app
image: my-app:1.0.0
ports:
- containerPort: 8080
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 3
Deployment
# Deployment - Declarative management of replicated Pods and rolling updates
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: app
image: my-app:1.0.0
ports:
- containerPort: 8080
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-secret
key: url
envFrom:
- configMapRef:
name: app-config
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
Service
# ClusterIP Service - Internal cluster access
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
type: ClusterIP
selector:
app: my-app
ports:
- port: 80
targetPort: 8080
---
# NodePort Service - Exposes the Service on a static port on every node
apiVersion: v1
kind: Service
metadata:
name: my-app-nodeport
spec:
type: NodePort
selector:
app: my-app
ports:
- port: 80
targetPort: 8080
nodePort: 30080
---
# LoadBalancer Service - Cloud load balancer
apiVersion: v1
kind: Service
metadata:
name: my-app-lb
spec:
type: LoadBalancer
selector:
app: my-app
ports:
- port: 80
targetPort: 8080
Configuration Management
ConfigMap
# ConfigMap - Non-sensitive configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
# Simple key-value pairs
LOG_LEVEL: "info"
CACHE_TTL: "3600"
# Configuration files
config.yaml: |
server:
host: 0.0.0.0
port: 8080
features:
enableMetrics: true
enableTracing: true
Secret
# Secret - Sensitive information
apiVersion: v1
kind: Secret
metadata:
name: db-secret
type: Opaque
data:
# Base64-encoded values (encoding, not encryption)
username: YWRtaW4=
password: cGFzc3dvcmQxMjM=
---
# Using stringData to avoid manual encoding
apiVersion: v1
kind: Secret
metadata:
name: api-secret
type: Opaque
stringData:
api-key: "your-secret-api-key"
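If you would rather not hand-encode values at all, kubectl can generate the Secret manifest for you. A quick sketch using the same illustrative credentials as above:
# Generate a Secret manifest without manual base64 encoding
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password=password123 \
  --dry-run=client -o yaml
# Decode a value to verify it
echo "YWRtaW4=" | base64 -d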
Using Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
containers:
- name: app
image: my-app:1.0.0
# Environment variable approach
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: password
# Bulk injection
envFrom:
- configMapRef:
name: app-config
- secretRef:
name: api-secret
# Mount as files
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: app-config
Ingress Gateway
Nginx Ingress
# Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
ingressClassName: nginx
tls:
- hosts:
- app.example.com
- api.example.com
secretName: tls-secret
rules:
- host: app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- host: api.example.com
http:
paths:
- path: /v1
pathType: Prefix
backend:
service:
name: api-v1
port:
number: 80
- path: /v2
pathType: Prefix
backend:
service:
name: api-v2
port:
number: 80
Advanced Routing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: advanced-ingress
annotations:
# Rate limiting
nginx.ingress.kubernetes.io/limit-rps: "100"
# Request body size limit
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
# Timeout settings
nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.example.com"
spec:
ingressClassName: nginx
rules:
- host: api.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api
port:
number: 80
Storage Management
PersistentVolume
# PersistentVolume - Persistent storage
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-data
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: standard
hostPath:
path: /data/pv
---
# PersistentVolumeClaim - Storage claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
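A claim does nothing until a workload mounts it. A minimal sketch that consumes the pvc-data claim above (the Pod name and mount path are illustrative):
# Mounting the PersistentVolumeClaim in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: data-consumer
spec:
  containers:
    - name: app
      image: my-app:1.0.0
      volumeMounts:
        - name: data
          mountPath: /var/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-data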
StatefulSet
# StatefulSet - Stateful applications
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres
replicas: 3
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:15
ports:
- containerPort: 5432
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
# Independent storage per Pod
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: fast-ssd
resources:
requests:
storage: 20Gi
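The serviceName field above points at a headless Service, which gives each replica a stable DNS identity (postgres-0.postgres, postgres-1.postgres, and so on). The StatefulSet needs that Service to exist; a minimal sketch:
# Headless Service backing the StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None   # headless: DNS resolves directly to Pod IPs
  selector:
    app: postgres
  ports:
    - port: 5432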
Deployment Strategies
Rolling Update
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2 # Max 2 extra Pods
maxUnavailable: 1 # Max 1 unavailable
template:
spec:
containers:
- name: app
image: my-app:2.0.0
Blue-Green Deployment
# Blue version (current production)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-blue
labels:
app: my-app
version: blue
spec:
replicas: 3
selector:
matchLabels:
app: my-app
version: blue
template:
metadata:
labels:
app: my-app
version: blue
spec:
containers:
- name: app
image: my-app:1.0.0
---
# Green version (new version)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-green
labels:
app: my-app
version: green
spec:
replicas: 3
selector:
matchLabels:
app: my-app
version: green
template:
metadata:
labels:
app: my-app
version: green
spec:
containers:
- name: app
image: my-app:2.0.0
---
# Switch Service target
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
selector:
app: my-app
version: green # Switch to green version
ports:
- port: 80
targetPort: 8080
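The cutover itself is just a selector change, so it can be scripted. Assuming the Service above, a patch along these lines flips traffic to green, and pointing the selector back at blue rolls it back:
# Switch live traffic to the green version
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'
# Roll back by switching the selector to blue
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'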
Canary Release
# Using Istio for canary releases
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: my-app
spec:
hosts:
- my-app
http:
- match:
- headers:
x-canary:
exact: "true"
route:
- destination:
host: my-app-canary
port:
number: 80
- route:
- destination:
host: my-app-stable
port:
number: 80
weight: 90
- destination:
host: my-app-canary
port:
number: 80
weight: 10
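The my-app-stable and my-app-canary hosts above are assumed to be two ordinary Services in front of the stable and canary Deployments (an alternative is a single host plus an Istio DestinationRule with subsets). A rough sketch of those Services, with illustrative track labels:
# Backing Services assumed by the VirtualService
apiVersion: v1
kind: Service
metadata:
  name: my-app-stable
spec:
  selector:
    app: my-app
    track: stable
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-canary
spec:
  selector:
    app: my-app
    track: canary
  ports:
    - port: 80
      targetPort: 8080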
Auto Scaling
HPA (Horizontal Pod Autoscaler)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-app-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
minReplicas: 2
maxReplicas: 10
metrics:
# CPU utilization
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
# Memory utilization
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
# Custom metrics
- type: Pods
pods:
metric:
name: requests_per_second
target:
type: AverageValue
averageValue: "1000"
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 10
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
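Note that the CPU and memory targets require metrics-server in the cluster, and the requests_per_second metric assumes a custom-metrics adapter such as prometheus-adapter is installed. Once applied, you can watch the autoscaler's decisions:
# Observe the autoscaler
kubectl get hpa my-app-hpa --watch
kubectl describe hpa my-app-hpa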
VPA (Vertical Pod Autoscaler)
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: my-app-vpa
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
updatePolicy:
updateMode: "Auto" # Auto, Recreate, Initial, Off
resourcePolicy:
containerPolicies:
- containerName: app
minAllowed:
cpu: 100m
memory: 128Mi
maxAllowed:
cpu: 2
memory: 4Gi
controlledResources: ["cpu", "memory"]
Security Best Practices
Pod Security Context
apiVersion: v1
kind: Pod
metadata:
name: secure-pod
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
seccompProfile:
type: RuntimeDefault
containers:
- name: app
image: my-app:1.0.0
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: api-network-policy
namespace: production
spec:
podSelector:
matchLabels:
app: api
policyTypes:
- Ingress
- Egress
ingress:
# Only allow traffic from frontend
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
# Allow from Ingress Controller
- from:
- namespaceSelector:
matchLabels:
name: ingress-nginx
ports:
- protocol: TCP
port: 8080
egress:
# Allow database access
- to:
- podSelector:
matchLabels:
app: postgres
ports:
- protocol: TCP
port: 5432
# Allow DNS queries
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
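Allow rules like these work best on top of a default-deny baseline; that is also why the DNS egress rule above is needed, since denying egress blocks DNS too. A common companion policy (the name is illustrative):
# Default-deny all ingress and egress in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress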
RBAC
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-sa
namespace: production
---
# Role - Namespace-level permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: app-role
namespace: production
rules:
- apiGroups: [""]
resources: ["configmaps", "secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
---
# RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: app-role-binding
namespace: production
subjects:
- kind: ServiceAccount
name: app-sa
namespace: production
roleRef:
kind: Role
name: app-role
apiGroup: rbac.authorization.k8s.io
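You can verify the resulting permissions without deploying anything by impersonating the ServiceAccount:
# Check what the ServiceAccount may (and may not) do
kubectl auth can-i get secrets --as=system:serviceaccount:production:app-sa -n production
kubectl auth can-i delete pods --as=system:serviceaccount:production:app-sa -n production   # should print "no"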
Monitoring and Logging
Prometheus Monitoring
# ServiceMonitor - Scrape configuration for the Prometheus Operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: my-app
labels:
release: prometheus
spec:
selector:
matchLabels:
app: my-app
endpoints:
- port: metrics
interval: 30s
path: /metrics
---
# PrometheusRule - Alert rules
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: my-app-alerts
spec:
groups:
- name: my-app
rules:
- alert: HighErrorRate
expr: |
sum(rate(http_requests_total{status=~"5.."}[5m]))
/ sum(rate(http_requests_total[5m])) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate detected"
description: "Error rate is {{ $value | humanizePercentage }}"
- alert: PodCrashLooping
expr: |
rate(kube_pod_container_status_restarts_total[15m]) > 0
for: 5m
labels:
severity: warning
Log Collection
# Fluent Bit DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: logging
spec:
selector:
matchLabels:
app: fluent-bit
template:
metadata:
labels:
app: fluent-bit
spec:
serviceAccountName: fluent-bit
containers:
- name: fluent-bit
image: fluent/fluent-bit:2.1
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: config
mountPath: /fluent-bit/etc/
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: config
configMap:
name: fluent-bit-config
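The DaemonSet mounts a fluent-bit-config ConfigMap that is not shown here. A minimal, illustrative sketch of what it might contain, tailing container logs and shipping them to an Elasticsearch Service (the output host and the parser are assumptions; containerd-based clusters typically use a cri parser instead of docker):
# Illustrative Fluent Bit configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         5
        Log_Level     info
        Parsers_File  parsers.conf
    [INPUT]
        Name    tail
        Path    /var/log/containers/*.log
        Parser  docker
        Tag     kube.*
    [FILTER]
        Name    kubernetes
        Match   kube.*
    [OUTPUT]
        Name    es
        Match   *
        Host    elasticsearch.logging.svc
        Port    9200
  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L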
Helm Package Management
Chart Structure
my-app/
├── Chart.yaml
├── values.yaml
├── templates/
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ ├── configmap.yaml
│ ├── secret.yaml
│ └── hpa.yaml
└── charts/ # Sub-charts
values.yaml
# values.yaml
replicaCount: 3
image:
repository: my-app
tag: "1.0.0"
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
ingress:
enabled: true
className: nginx
hosts:
- host: app.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: app-tls
hosts:
- app.example.com
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
Template Example
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "my-app.fullname" . }}
labels:
{{- include "my-app.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "my-app.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "my-app.selectorLabels" . | nindent 8 }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8080
resources:
{{- toYaml .Values.resources | nindent 12 }}
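With the chart in place, day-to-day operations come down to a few commands (release and namespace names here are examples):
# Validate, render, and install the chart
helm lint ./my-app
helm template my-app ./my-app
helm upgrade --install my-app ./my-app -n production --create-namespace
# Inspect and roll back releases
helm history my-app -n production
helm rollback my-app -n production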
Production Checklist
Kubernetes Production Readiness:
┌─────────────────────────────────────────────────────┐
│ │
│ Resource Configuration │
│ ├── Set requests and limits │
│ ├── Configure liveness and readiness probes │
│ ├── Use PodDisruptionBudget │
│ └── Configure anti-affinity for Pod spreading │
│ │
│ Security │
│ ├── Run as non-root user │
│ ├── Enable read-only filesystem │
│ ├── Configure NetworkPolicy │
│ ├── Use RBAC with least privilege │
│ └── Regularly update images for vulnerabilities │
│ │
│ Observability │
│ ├── Configure Prometheus metrics │
│ ├── Centralized log collection │
│ ├── Set up alerting rules │
│ └── Configure distributed tracing │
│ │
│ High Availability │
│ ├── Multi-replica deployment │
│ ├── Cross-availability zone distribution │
│ ├── Configure HPA for auto-scaling │
│ └── Regular disaster recovery drills │
│ │
└─────────────────────────────────────────────────────┘
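Two of the checklist items, PodDisruptionBudget and Pod anti-affinity, have not appeared in the examples so far. A minimal sketch of both for the my-app Deployment (the numbers are illustrative):
# PodDisruptionBudget - keep capacity during voluntary disruptions (e.g. node drains)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
# Anti-affinity fragment for the Deployment's Pod template spec - spread replicas across nodes
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname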
Common Commands
# Cluster info
kubectl cluster-info
kubectl get nodes -o wide
# Resource operations
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl get pods -n production -o wide
kubectl describe pod my-pod -n production
# Logs and debugging
kubectl logs my-pod -c container-name -f
kubectl exec -it my-pod -- /bin/sh
kubectl port-forward svc/my-service 8080:80
# Scaling
kubectl scale deployment my-app --replicas=5
kubectl rollout status deployment my-app
kubectl rollout undo deployment my-app
# Resource usage
kubectl top nodes
kubectl top pods -n production
Summary
| Scenario | Recommended Approach |
|---|---|
| Stateless apps | Deployment + HPA |
| Stateful apps | StatefulSet + PVC |
| Batch processing | Job / CronJob |
| Daemons | DaemonSet |
| Configuration | ConfigMap + Secret |
| Traffic ingress | Ingress + TLS |
Kubernetes has a steep learning curve, but once you have mastered it, you can run containerized applications at virtually any scale. Start with simple Deployments and work your way deeper into each area.
Container orchestration is not the destination; it is the starting point of your cloud-native journey. Make Kubernetes the foundation of your infrastructure as code.