Docker Containerization Guide: From Development to Deployment

Master Docker core concepts and practical techniques to build portable, scalable application deployment solutions

Docker has fundamentally changed how software is developed, tested, and deployed. “It works on my machine” is no longer an excuse—containers make environment consistency a reality. This article will take you from zero to mastering Docker containerization.

Why Do We Need Docker?

Traditional Deployment Pain Points

Traditional Deployment Problems:
┌─────────────────────────────────────────────────────┐
│ Development Env                 Production Env      │
│ ┌─────────────┐                ┌─────────────┐     │
│ │ Node 18.x   │       →        │ Node 16.x   │ ❌  │
│ │ Ubuntu 22   │                │ CentOS 7    │ ❌  │
│ │ npm 9.x     │                │ npm 8.x     │ ❌  │
│ └─────────────┘                └─────────────┘     │
│                                                     │
│ Common Issues:                                      │
│ • Dependency version mismatches                     │
│ • System library differences                        │
│ • Configuration file differences                    │
│ • Port conflicts between apps                       │
└─────────────────────────────────────────────────────┘

Docker’s Solution

Docker Containerization:
┌─────────────────────────────────────────────────────┐
│ Development Env                 Production Env      │
│ ┌─────────────┐                ┌─────────────┐     │
│ │  Container  │       =        │  Container  │ ✅  │
│ │ ┌─────────┐ │                │ ┌─────────┐ │     │
│ │ │  App    │ │                │ │  App    │ │     │
│ │ │ Node 18 │ │                │ │ Node 18 │ │     │
│ │ │ Alpine  │ │                │ │ Alpine  │ │     │
│ │ └─────────┘ │                │ └─────────┘ │     │
│ └─────────────┘                └─────────────┘     │
│                                                     │
│ Advantage: Identical environments, build once       │
│            run anywhere                             │
└─────────────────────────────────────────────────────┘

Core Concepts

Images vs Containers

Relationship Between Images and Containers:
┌─────────────────────────────────────────────────────┐
│                                                     │
│   Image (Read-only)            Container (Instance) │
│   ┌─────────────┐              ┌─────────────┐     │
│   │   Layer 4   │  docker run  │   Layer 4   │     │
│   │   Layer 3   │  ─────────→  │   Layer 3   │     │
│   │   Layer 2   │              │   Layer 2   │     │
│   │   Layer 1   │              │   Layer 1   │     │
│   │  Base Image │              │  Base Image │     │
│   └─────────────┘              ├─────────────┤     │
│                                │ Writable R/W│     │
│   Analogy: Class               └─────────────┘     │
│                                Analogy: Object      │
│                                                     │
│   One image can create multiple containers          │
└─────────────────────────────────────────────────────┘

Layered Filesystem

# Each instruction creates a layer
FROM node:18-alpine        # Base layer
WORKDIR /app               # New layer
COPY package*.json ./      # New layer
RUN npm install            # New layer (largest)
COPY . .                   # New layer
CMD ["npm", "start"]       # Metadata, no layer

Layer Reuse Mechanism:
┌─────────────────────────────────────────────────────┐
│ Building App A         Building App B              │
│ ┌─────────┐           ┌─────────┐                  │
│ │ App A   │           │ App B   │  ← Different     │
│ ├─────────┤           ├─────────┤                  │
│ │npm deps │           │npm deps │  ← Different     │
│ ├─────────┤           ├─────────┤                  │
│ │node:18  │ ════════  │node:18  │  ← Reused!       │
│ └─────────┘           └─────────┘                  │
│                                                     │
│ Same layers stored once, saving space and time      │
└─────────────────────────────────────────────────────┘

Writing Dockerfiles

Basic Structure

# syntax=docker/dockerfile:1

# Base image
FROM node:18-alpine

# Metadata
LABEL maintainer="dev@example.com"
LABEL version="1.0"

# Set working directory
WORKDIR /app

# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000

# Copy dependency files
COPY package*.json ./

# Install dependencies
RUN npm ci --omit=dev

# Copy source code
COPY . .

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -q --spider http://localhost:3000/health || exit 1

# Startup command
CMD ["node", "server.js"]

Multi-stage Builds

Multi-stage builds can significantly reduce final image size:

# Stage 1: Build
FROM node:18-alpine AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:18-alpine AS production

WORKDIR /app

# Copy only necessary files
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

# Use non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001
USER nextjs

EXPOSE 3000
CMD ["node", "dist/server.js"]

Optimization Tips

# ✅ Leverage caching: put rarely changing layers first
COPY package*.json ./
RUN npm ci
COPY . .   # Source changes often, put last

# ✅ Merge RUN commands to reduce layers
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
      curl \
      git \
    && rm -rf /var/lib/apt/lists/*

# ✅ Use .dockerignore to exclude unnecessary files
# .dockerignore
node_modules
.git
*.md
.env
dist

# ✅ Use specific version tags
FROM node:18.19.0-alpine3.19  # Don't use latest

# ✅ Minimize base image
FROM node:18-alpine  # Much smaller than node:18
# Or use distroless
FROM gcr.io/distroless/nodejs18-debian11

Common Commands

Image Operations

# Build image
docker build -t myapp:1.0 .
docker build -t myapp:1.0 -f Dockerfile.prod .

# View images
docker images
docker image ls

# Remove image
docker rmi myapp:1.0
docker image prune  # Remove dangling images

# Tag image
docker tag myapp:1.0 registry.example.com/myapp:1.0

# Push/pull images
docker push registry.example.com/myapp:1.0
docker pull nginx:alpine

Container Operations

# Run container
docker run -d --name myapp -p 3000:3000 myapp:1.0
docker run -it --rm node:18-alpine sh  # Interactive, remove on exit

# View containers
docker ps           # Running containers
docker ps -a        # All containers

# Container logs
docker logs myapp
docker logs -f myapp  # Follow logs

# Enter container
docker exec -it myapp sh
docker exec -it myapp /bin/bash

# Stop/start/restart
docker stop myapp
docker start myapp
docker restart myapp

# Remove container
docker rm myapp
docker rm -f myapp  # Force remove running container

# View resource usage
docker stats

Data Persistence

# Volume (recommended)
docker volume create mydata
docker run -v mydata:/app/data myapp:1.0

# Bind Mount (for development)
docker run -v $(pwd)/src:/app/src myapp:1.0

# View volumes
docker volume ls
docker volume inspect mydata
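
A common pattern for backing up a named volume is to mount it read-only into a throwaway container alongside a host directory; a sketch using the mydata volume from above:

# Archive the contents of the "mydata" volume into the current directory
docker run --rm \
  -v mydata:/data:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/mydata-backup.tar.gz -C /data .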

Docker Compose

Docker Compose is used to define and run multi-container applications.

Basic Configuration

# docker-compose.yml
version: '3.8'

services:
  # Web application
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Database
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  # Cache
  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    driver: bridge

Compose Commands

# Start services
docker compose up -d

# View service status
docker compose ps

# View logs
docker compose logs -f app

# Stop services
docker compose down

# Stop and remove volumes
docker compose down -v

# Rebuild and start
docker compose up -d --build

# Scale services (the scaled service must not publish a fixed host port)
docker compose up -d --scale app=3

Development Environment Config

# docker-compose.override.yml (auto-merged)
version: '3.8'

services:
  app:
    build:
      target: development
    volumes:
      - .:/app
      - /app/node_modules  # Anonymous volume so the bind mount doesn't hide the container's node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev

  # Development tools
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"
      - "8025:8025"

Production Best Practices

Security Hardening

# 1. Use non-root user
FROM node:18-alpine

RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

WORKDIR /app
COPY --chown=appuser:appgroup . .

USER appuser

# 2. Read-only filesystem
docker run --read-only --tmpfs /tmp myapp:1.0

# 3. Limit capabilities
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp:1.0

# 4. Scan for vulnerabilities
docker scout cves myapp:1.0
trivy image myapp:1.0

Resource Limits

# docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
# Or via command line
docker run --cpus=0.5 --memory=512m myapp:1.0

Log Management

services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
# Use external logging system
docker run --log-driver=fluentd myapp:1.0

Health Checks and Restart Policies

services:
  app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    restart: unless-stopped
    # Or: always, on-failure, no
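
Once a health check is defined, you can query its current state from the CLI; the container name below is illustrative:

# Show the latest health status (starting, healthy, or unhealthy)
docker inspect --format '{{.State.Health.Status}}' myapp

# Show the full health log, including probe output and exit codes
docker inspect --format '{{json .State.Health}}' myapp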

Practical: Deploying Full Application Stack

Project Structure

myapp/
├── docker-compose.yml
├── docker-compose.prod.yml
├── .env
├── .env.example
├── app/
│   ├── Dockerfile
│   └── ...
├── nginx/
│   ├── nginx.conf
│   └── Dockerfile
└── scripts/
    ├── deploy.sh
    └── backup.sh

Production Compose

# docker-compose.prod.yml
version: '3.8'

services:
  app:
    image: registry.example.com/myapp:${VERSION:-latest}
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
    environment:
      - NODE_ENV=production
    env_file:
      - .env.production
    networks:
      - internal
      - web

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro
    depends_on:
      - app
    networks:
      - web

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./backups:/backups
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    networks:
      - internal

volumes:
  postgres_data:

networks:
  internal:
    internal: true
  web:

secrets:
  db_password:
    file: ./secrets/db_password.txt
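
Before deploying, it can help to render and validate the merged configuration; docker compose config resolves variables and overrides into the final effective file:

# Print the fully resolved production configuration (or fail on errors)
docker compose -f docker-compose.prod.yml config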

Deployment Script

#!/bin/bash
# scripts/deploy.sh

set -e

VERSION=${1:-latest}
REGISTRY="registry.example.com"

echo "Deploying version: $VERSION"

# Build and push image
docker build -t $REGISTRY/myapp:$VERSION ./app
docker push $REGISTRY/myapp:$VERSION

# Deploy
VERSION=$VERSION docker compose -f docker-compose.prod.yml up -d

# Wait for health checks
sleep 30
docker compose -f docker-compose.prod.yml ps

echo "Deployment complete!"

Debugging Tips

Debugging Inside Containers

# Enter running container
docker exec -it myapp sh

# Enter as root user
docker exec -u root -it myapp sh

# View container processes
docker top myapp

# View container details
docker inspect myapp

Build Debugging

# View build process
docker build --progress=plain -t myapp .

# Stop at specific stage
docker build --target builder -t myapp:builder .

# Build without cache
docker build --no-cache -t myapp .

Network Debugging

# View networks
docker network ls
docker network inspect bridge

# Test connectivity between containers
docker exec app ping db

# View port mappings
docker port myapp

Summary

Docker containerization makes application deployment predictable and repeatable:

| Aspect             | Traditional        | Docker Containers            |
| ------------------ | ------------------ | ---------------------------- |
| Env Consistency    | Hard to guarantee  | Fully consistent             |
| Deploy Speed       | Slow (config env)  | Fast (start container)       |
| Resource Isolation | Requires VMs       | Lightweight isolation        |
| Scalability        | Complex            | Simple (more containers)     |
| Rollback           | Difficult          | Easy (switch image version)  |

Key Takeaways:

  1. Images are read-only templates, containers are running instances
  2. Multi-stage builds significantly reduce image size
  3. Docker Compose simplifies multi-container application management
  4. Production environments need security hardening and resource limits
  5. Properly leverage layer caching to speed up builds

Containerization is not just a technical upgrade—it’s a core practice of DevOps culture.


Containerization: Making “works on my machine” a thing of the past.