
Docker Compose: One Command to Set Up Your Entire Dev Environment
Tired of writing setup docs for every new team member? docker compose up spins up your DB, Redis, and app server in one command.

I still remember the afternoon a new developer joined our tiny team. I was excited — another person to share the load. I sent over the setup doc I'd been maintaining for months. Thirty steps. PostgreSQL installation, Node.js version pinning with nvm, Redis setup, environment variable configuration, a note about a specific macOS System Preferences setting that nobody ever remembers to check.
Two hours later: "Hey, PostgreSQL won't install. I'm on an M1 Mac — is that why?"
An hour after that: "Node version looks right but npm install keeps failing with some native module error."
I spent the rest of that afternoon doing nothing but debugging another person's laptop. And I remember thinking: there has to be a better way to do this.
The core problem was clear once I saw it. Everyone's machine is slightly different. Different chip architecture. Different OS version. Different global packages installed for previous projects. Different system paths. When you write "install PostgreSQL 15" in a setup doc, you're quietly assuming everyone starts from the same place. They don't.
That's when I started looking seriously at Docker Compose. And it fixed the problem in a way that actually stuck.
My first instinct when someone said "Docker" was to think: oh, a virtual machine. I'd used VMs before. Slow to start. Ate memory like it was free. Required a full OS image sitting on disk.
I was wrong about Docker, and the distinction matters.
A virtual machine runs a complete, separate operating system. It virtualizes all the hardware — CPU, memory, storage, network — and boots an entire OS on top of that. You're basically running a computer inside your computer. The overhead is significant. A PostgreSQL VM might need 2GB of RAM minimum just to run the OS, before the database does anything.
A container is different. Containers share the host machine's operating system kernel. They don't each run their own OS — they just isolate the filesystem, processes, and networking for one application. This means a PostgreSQL container starts in under a second and uses maybe 50MB of RAM instead of 2GB.
The analogy that clicked for me: a VM is like renting an entire apartment (full kitchen, living room, bedroom, bathroom) when you just want a place to sleep. A container is like renting a room in a shared house. You get your private space, but you share the infrastructure.
Containers are fast, lightweight, and consistent. The "consistent" part is the key word. The same container image runs identically on my MacBook, on a Linux server, in a CI pipeline. The environment is baked into the image.
But running containers individually is still a mess. docker run postgres, then docker run redis, then docker run the app with a wall of flags for ports, volumes, environment variables, network configuration... and if you mistype one flag, you're hunting for a subtle bug. This is where Docker Compose comes in.
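For a sense of scale, here's roughly what just the database piece looks like as raw docker run commands (a sketch; the names mirror the Compose file shown later):

docker network create devnet
docker run -d --name dev-postgres --network devnet \
  -e POSTGRES_USER=devuser -e POSTGRES_PASSWORD=devpass -e POSTGRES_DB=myapp \
  -p 5432:5432 -v postgres-data:/var/lib/postgresql/data \
  postgres:15-alpine
# ...then the same dance again for Redis, and again for the app, in the right order, every time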
Docker Compose lets you define your entire multi-service environment in a single YAML file. Instead of remembering and typing complex docker run commands for each service, you declare what you want and Compose handles the rest.
The mental model: if Docker is a recipe for building one dish, Docker Compose is a full menu that coordinates multiple dishes to arrive at the table together.
Here's the structure of a docker-compose.yml file:
version: '3.8'

services:
  # Each service is a container

volumes:
  # Persistent storage that survives container restarts

networks:
  # How containers talk to each other
Services are the containers you want to run. Each service is an isolated process, but they're all connected on the same network and can talk to each other by service name.
Volumes are persistent storage. Containers are ephemeral by default — delete a container and all data inside it disappears. Volumes live on the host machine and survive container restarts and deletions.
Networks let containers find each other. Instead of localhost:5432, your app can connect to postgres:5432 using the service name as a hostname.
Let me show the actual docker-compose.yml I use for a standard web app stack:
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    container_name: dev-postgres
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U devuser"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: dev-redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes

  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    container_name: dev-app
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      DATABASE_URL: postgresql://devuser:devpass@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    command: npm run dev

volumes:
  postgres-data:
  redis-data:

networks:
  default:
    driver: bridge
One file. Three services. Run docker compose up and the whole stack comes up together.
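A few companion commands worth knowing from day one (all standard Compose CLI):

docker compose up -d      # start everything in the background
docker compose ps         # list the running services
docker compose down       # stop and remove the containers
docker compose down -v    # same, but also delete the volumes (wipes DB data)

Now let me break down the parts of the file that aren't immediately obvious.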
volumes:
  - .:/app
  - /app/node_modules
The first line mounts the current directory into the container at /app. When I edit a file on my laptop, the container sees the change immediately — this is why hot reload works. The container runs the dev server, but it's reading my actual source files.
The second line is the trick I had to learn the hard way. If you just mount .:/app, then the container's /app/node_modules gets overwritten by whatever is in your local directory — which might be compiled for macOS when the container needs Linux binaries. The anonymous volume /app/node_modules tells Docker: keep a separate container-specific copy of node_modules, don't overwrite it with my host version.
Think of it as a shared office space with personal lockers. Everyone works in the same open floor plan, but your specific tools stay in your locker.
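One practical consequence worth knowing before it bites: the anonymous volume caches node_modules across restarts, so after package.json changes the cached copy can go stale. A minimal fix, using standard Compose flags and assuming the service is named app as above:

docker compose up -d --build --renew-anon-volumes app
# --renew-anon-volumes (-V) recreates anonymous volumes instead of reusing
# the stale copies from the previous container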
The example above has a hardcoded password. In practice, you never do that. Here's the right pattern.
Create a .env file (add it to .gitignore immediately):
# .env
POSTGRES_USER=devuser
POSTGRES_PASSWORD=devpass
POSTGRES_DB=myapp
Reference it in docker-compose.yml:
environment:
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  POSTGRES_DB: ${POSTGRES_DB}
Or even simpler with env_file:
env_file:
  - .env
Docker Compose automatically reads a .env file from the project directory for ${VAR} substitution, so the references above work without passing any extra flags. Note the difference in scope, though: ${VAR} fills values into the YAML itself, while env_file injects the variables into the container's environment.
The reason this matters: environment variables are configuration. Code should be environment-agnostic. The same codebase should be able to connect to a local dev database, a staging database, or a production database by changing the environment, not the code.
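In Compose terms, that switch can be as small as pointing at a different variable file. A sketch, where .env.staging is a hypothetical file holding staging credentials (--env-file is the standard flag for overriding the default .env):

docker compose --env-file .env.staging up -d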
This one bit me several times before I understood it properly.
depends_on:
  postgres:
    condition: service_healthy
depends_on controls startup order. The app service won't start until postgres has satisfied its condition. The condition matters: service_started means the container process is running, while service_healthy means the container has passed its healthcheck. (A third option, service_completed_successfully, exists for one-shot tasks like migrations, but these two cover most setups.)
I learned this the hard way. I had condition: service_started for PostgreSQL. The container started fast — the Docker logs showed it was up. But the app would crash immediately with "connection refused". Why? Because PostgreSQL needs a few seconds after the container starts to actually initialize and begin accepting connections. The container process was running, but the database wasn't ready.
The healthcheck:
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U devuser"]
  interval: 5s
  timeout: 5s
  retries: 5
This runs pg_isready every 5 seconds. The service isn't considered healthy until that command succeeds. Switching depends_on to service_healthy means the app won't start until PostgreSQL is actually ready to accept connections.
The analogy: a restaurant opens its doors (started) before it's ready to seat customers (healthy). The staff needs time to set tables, prep the kitchen, get the coffee going. You want to wait until the tables are set before you walk in.
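The same guarantee is cheap to add for Redis, which the file above only gates on service_started. A minimal sketch, using the redis-cli binary that ships in the official image:

redis:
  image: redis:7-alpine
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 5s
    timeout: 3s
    retries: 5

With that in place, the app's depends_on entry for redis can be upgraded to condition: service_healthy as well.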
Things will go wrong. A container will crash immediately on startup. A service won't connect to another. An environment variable will be missing. Here's my debugging toolkit in order of frequency:
Logs first. Always start here.
docker compose logs app
docker compose logs -f postgres # follow mode, like tail -f
docker compose logs --tail=50 app # last 50 lines only
Most failures announce themselves clearly in the logs if you know to look.
Shell into the container. When logs aren't enough, get inside.
docker compose exec app sh
docker compose exec postgres psql -U devuser -d myapp
From inside the app container you can test connectivity directly (nc ships with BusyBox, so it's present in most Alpine-based images):
# Can we reach postgres from inside the app container?
nc -z postgres 5432 && echo "reachable" || echo "not reachable"
Check environment variables. Many bugs are just missing or wrong env vars.
docker compose exec app env
docker compose exec app env | grep DATABASE
Inspect the full container configuration. When you really need to dig:
docker inspect dev-postgres
This dumps a huge JSON blob with everything Docker knows about the container — network settings, volume mounts, environment variables, the works.
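The --format flag (standard on docker inspect, using Go templates) pulls out a single field instead of making you scroll:

docker inspect -f '{{json .Config.Env}}' dev-postgres    # just the environment variables
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dev-postgres    # container IP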
One thing I got wrong early on: using the same Compose file for development and production. Development needs source code mounted, hot reload, debug ports exposed. Production needs none of that — and some of it is actively dangerous.
The right pattern is file composition. You have a base file with shared configuration, then override files for each environment:
# docker-compose.yml — base config, shared across environments
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:

# docker-compose.dev.yml — development overrides
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
      - /app/node_modules
    command: npm run dev
    environment:
      NODE_ENV: development

# docker-compose.prod.yml — production overrides
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start
    environment:
      NODE_ENV: production
Run them by combining files:
# Development
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
The -f flags layer the files — later files override values from earlier ones. This means your base file stays clean and each environment only specifies what's different.
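To sanity-check what a given combination actually resolves to, docker compose config (a standard subcommand) prints the merged, variable-substituted result:

docker compose -f docker-compose.yml -f docker-compose.prod.yml config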
If you see docker-compose (with a hyphen) in old tutorials, that's v1: a standalone Python tool, installed separately from Docker itself. In 2021, Docker rewrote Compose in Go and integrated it directly into the Docker CLI as docker compose (with a space).
The differences that matter in practice: the command itself (docker-compose up becomes docker compose up), and the top-level version: key in the YAML, which v2 treats as informational and now flags as obsolete (you can omit it). Docker officially deprecated v1 in July 2023. If you're starting fresh, use docker compose with a space. If you see old scripts using docker-compose, they'll usually still work because most systems have a compatibility shim, but you should update them.
I've pushed Docker Compose into places it doesn't belong, and I've learned to recognize when it's overkill.
Don't use it for static sites with no backend. Don't use it when your app is a single process with no external dependencies — just use docker run directly or don't use Docker at all. Don't use it for learning exercises where the added complexity will obscure what you're trying to understand. And definitely don't use it as a production orchestrator for multi-server deployments — that's what Kubernetes, ECS, Nomad, and similar tools are built for.
Do use it for any project with multiple services (database, cache, message queue, app server). Use it when you're working on a team and want to eliminate "works on my machine" conversations. Use it in CI pipelines to spin up a real database for integration tests. Use it any time you catch yourself writing a long shell script to start multiple services in the right order.
The honest test: if your project runs with one process and has no external services, Docker Compose adds complexity without benefit. If you have two or more services that need to run together and communicate, Compose pays for itself immediately.
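For the CI use case above, a minimal sketch of a pipeline step (assuming the project's tests run via npm test; --wait and exec -T are standard Compose flags):

docker compose up -d --wait           # --wait blocks until healthchecks pass
docker compose exec -T app npm test   # -T disables the pseudo-TTY, needed in CI
docker compose down -v                # clean slate for the next run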
The onboarding time for new developers on my project dropped from two hours to about five minutes. The docker compose up command is in the README, and that's the entire setup section. No more OS-specific instructions, no more "if you're on M1 do this instead", no more tracking down which system library version is incompatible with which npm package.
"Works on my machine" stopped being a sentence anyone said. When a bug appears in production, I can reproduce the environment locally in seconds. When I switch between projects, I don't have conflicting database versions or Redis configurations fighting each other.
The deeper thing I understood was something about what infrastructure-as-code actually means. The docker-compose.yml file is documentation that runs. It's not a README that gets out of date — it's a specification that Docker executes. When you update it, everyone who pulls the latest code gets the updated environment automatically on their next docker compose up.
That's the shift that made everything click. The environment is part of the codebase now. It's version-controlled, reviewable, and reproducible. The thirty-step setup doc is gone. There's one command, and it works.
docker compose up
Go get a coffee. Everything will be ready when you get back.