Containerization with Docker and orchestration via Docker Compose simplify application deployment by packaging software and its dependencies into isolated, portable containers.
Docker ensures consistent behavior across development, testing, and production environments, while Docker Compose allows multiple related containers, such as application services, databases, and caches, to be defined and managed together using a single configuration file.
What is Containerization and Why Docker?
Containerization virtualizes the operating system, bundling an app with just what's needed to run it—unlike bulky VMs that emulate full hardware.
Docker leads the field with over 13 million developers using it in 2025, per Docker's annual report, thanks to its open-source engine and vast image registry.
Core Concepts of Docker
Containers share the host OS kernel for efficiency, starting in seconds and using mere megabytes.

Practical Example: Imagine your FastAPI weather API. Without Docker, pip install varies by OS. With Docker, one docker run spins it up identically.
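To make that concrete, here is a dependency-free stand-in for such a weather service, written with only the standard library so the same file runs identically inside or outside a container (the endpoint and payload mirror the example output later in this piece; all names are illustrative, and a real version would use FastAPI):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory "weather data" for the example city.
WEATHER = {"delhi": {"temp": 28, "condition": "sunny"}}

class WeatherHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        city = self.path.rsplit("/", 1)[-1].lower()
        if self.path.startswith("/weather/") and city in WEATHER:
            body = json.dumps(WEATHER[city]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port: int = 8000) -> HTTPServer:
    """Bind the server; call .serve_forever() on the result to run it."""
    return HTTPServer(("0.0.0.0", port), WeatherHandler)
```

In a container you would call serve().serve_forever() behind a `__main__` guard; docker run -p 8000:8000 then serves the same responses on any host.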
Installing Docker and Getting Started
Start with Docker Desktop for local dev—it bundles Docker Engine, CLI, and Compose.
Download from docker.com (free for personal use); supports Windows, macOS, Linux with auto-updates to v27+ in 2026.
Step-by-Step Installation and First Run
Follow these numbered steps for a smooth setup.
1. Download and Install: Grab Docker Desktop; during setup, enable Kubernetes if planning orchestration later.
2. Verify Installation: Open terminal and run docker --version (expect 27.x) and docker compose version (v2.29+).
3. Test with Hello World: Run docker run hello-world—it pulls a tiny image and prints a success message.
4. Prune Unused Resources: Use docker system prune weekly to free space from stopped containers.
Pro Tip: On Linux, add your user to the docker group so you can run Docker without sudo: sudo usermod -aG docker $USER, then log out and back in.
Building Your First Docker Image for a Web API
A Dockerfile is your recipe; use multi-stage builds for slimmer production images (best practice per Docker's 2025 guidelines).
For our FastAPI example, assume a project with main.py and a requirements.txt that lists fastapi and uvicorn.
Writing a Production-Ready Dockerfile
Here's a sample for a FastAPI app—copy-paste ready.
```dockerfile
# Stage 1: Builder (install deps)
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: Runtime (copy only essentials)
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
# Also copy console-script entry points (the uvicorn executable lives here)
COPY --from=builder /usr/local/bin /usr/local/bin
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Two key benefits: it can cut the image size dramatically (roughly 1 GB down to ~150 MB here) and it improves security by keeping build-time tooling out of the production image.
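Because the runtime stage runs COPY . ., everything in the build context lands in the image. A .dockerignore keeps clutter and local secrets out and speeds up builds; a minimal sketch (entries illustrative, adjust to your project):

```
# .dockerignore
__pycache__/
*.pyc
.git/
.env
venv/
```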
Building, Running, and Debugging Containers
Build and test iteratively.
1. Build Image: In project root, docker build -t my-fastapi-api .
2. Run Container: docker run -p 8000:8000 my-fastapi-api—visit localhost:8000/docs.
3. Inspect Logs: docker logs <container-id> for troubleshooting.
4. Exec into Container: docker exec -it <container-id> bash to poke around.
5. Stop and Remove: docker stop <id> then docker rm <id>.
Example Output
```
> curl localhost:8000/weather/delhi
{"temp": 28, "condition": "sunny"}
```

Use docker ps to list running containers and docker images to list locally stored images.
Persistent Data and Networking Basics
Containers are ephemeral by default: anything written to the container's writable layer is lost when the container is removed.
Volumes mount host directories; networks enable service communication.
Managing Volumes and Networks
1. Volumes: docker run -v /host/path:/container/path my-api persists DB files.
2. Named Volumes: docker volume create api-data for managed storage.
3. Networks: docker network create api-net isolates traffic.
Docker Compose: Orchestrating Multi-Service APIs
Docker Compose (now docker compose CLI) defines apps as YAML—perfect for full-stack: API + PostgreSQL + Redis.
It's declarative: describe the desired state and Compose builds and runs it. The modern Compose Specification no longer requires a top-level version key and supports healthchecks and depends_on conditions.
Creating a docker-compose.yml for Full-Stack API
Intro: This file orchestrates your FastAPI backend, Postgres DB, and pgAdmin for management.
Sample docker-compose.yml
```yaml
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-net
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      retries: 5
    networks:
      - app-net
  pgadmin:
    image: dpage/pgadmin4:latest
    ports:
      - "8080:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: admin
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-net
volumes:
  db-data:
networks:
  app-net:
```

Running and Scaling with Compose
1. Startup: docker compose up -d (detached mode).
2. View Logs: docker compose logs api or docker compose logs -f for live tail.
3. Scale Services: docker compose up --scale api=3 -d runs three API replicas; drop the fixed host port (or front the replicas with a reverse proxy), since multiple containers cannot all bind host port 8000.
4. Tear Down: docker compose down -v (also removes volumes; omit -v to keep your database data).
5. Production Tweaks: Add profiles: ["prod"] for env-specific services.
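Step 5 can be sketched as a fragment of the compose file above; the profile name here is illustrative:

```yaml
services:
  pgadmin:
    profiles: ["dev"]
```

Services with a profiles key are skipped unless that profile is activated, e.g. docker compose --profile dev up -d.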
Real-World Win: Your API queries Postgres via db hostname—no hardcoded IPs. Visit pgadmin at localhost:8080.
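The DATABASE_URL that Compose injects can be split into connection parameters with the standard library alone; a minimal sketch (helper name hypothetical):

```python
import os
from urllib.parse import urlparse

def parse_database_url(url: str) -> dict:
    """Split a postgresql:// URL into connection parameters."""
    parts = urlparse(url)
    return {
        "host": parts.hostname,      # "db": the Compose service name
        "port": parts.port or 5432,  # default Postgres port if omitted
        "user": parts.username,
        "password": parts.password,
        "dbname": parts.path.lstrip("/"),
    }

cfg = parse_database_url(
    os.environ.get("DATABASE_URL", "postgresql://user:pass@db:5432/mydb")
)
```

The returned host is the service name, which Docker's embedded DNS resolves to the db container on app-net.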
Best Practices and Production Deployment
Leverage multi-stage builds, non-root users, and vulnerability scans (e.g., docker scout).
Security and Optimization Checklist
1. Scan Images: docker scout cves my-image flags CVEs.
2. Run as Non-Root: Create the user first (RUN useradd -m appuser), then add USER appuser in the Dockerfile.
3. Layer Caching: COPY requirements.txt and RUN pip install before COPY . ., so the dependency layers stay cached when only application code changes.
4. Secrets Management: Use .env files or Docker Secrets in Swarm mode.
5. Deploy to Cloud: Push to ECR/GCR, then Kubernetes for auto-scaling.
Monitoring Tip: Integrate Prometheus: Add a service exposing /metrics endpoint.
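The /metrics endpoint just returns plain text in Prometheus's exposition format; a minimal sketch of the payload (metric names hypothetical; a real service would use the official prometheus_client library):

```python
def render_metrics(request_count: int, up: bool = True) -> str:
    """Render a tiny Prometheus text-format metrics payload."""
    lines = [
        "# HELP api_requests_total Total HTTP requests served.",
        "# TYPE api_requests_total counter",
        f"api_requests_total {request_count}",
        "# HELP api_up 1 if the service is healthy.",
        "# TYPE api_up gauge",
        f"api_up {1 if up else 0}",
    ]
    return "\n".join(lines) + "\n"
```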
For full-stack, expose API via Traefik reverse proxy in Compose for HTTPS.
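Traefik is typically wired in through container labels on the service it should route to; a hedged fragment (router and host names are placeholders, and a traefik service with the Docker provider and a websecure entrypoint is assumed):

```yaml
services:
  api:
    labels:
      - traefik.enable=true
      - traefik.http.routers.api.rule=Host(`api.example.com`)
      - traefik.http.routers.api.entrypoints=websecure
      - traefik.http.routers.api.tls.certresolver=letsencrypt
```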