Docker Compose for PostgreSQL generator
Generate complete docker-compose.yml configurations for PostgreSQL with AI assistance
For developers and DevOps engineers who need a Docker Compose file to run PostgreSQL — whether for local development, CI pipelines, or production-like environments — and want correct volume mounts, healthchecks, environment variables, and multi-service setups.
About this tool
Setting up PostgreSQL in Docker Compose seems straightforward until you encounter the subtleties: data disappearing because you forgot a volume mount, containers starting before PostgreSQL is ready to accept connections, init scripts running in the wrong order, or configuration files silently ignored because of incorrect mount paths. This tool generates complete, tested docker-compose.yml files that handle these details correctly from the start.
The generator understands the official postgres Docker image deeply — its entrypoint behavior, the /docker-entrypoint-initdb.d/ initialization directory (which runs .sql, .sql.gz, and .sh files in filename-sorted order, but only on first container creation when the data directory is empty), the environment variables that control database creation (POSTGRES_PASSWORD, POSTGRES_USER, POSTGRES_DB, POSTGRES_INITDB_ARGS), and the signal handling that ensures clean shutdown with stop_grace_period. It knows the difference between the Debian-based (postgres:17-bookworm) and Alpine-based (postgres:17-alpine) image variants, including the trade-offs in image size, available system packages, locale support, and debugging tools.
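For instance, initialization-time options such as data checksums or a locale can be forwarded through POSTGRES_INITDB_ARGS; the sketch below uses illustrative values and, like the init scripts, applies only when the data directory is created for the first time:

services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
      # Forwarded to initdb; applied only when the data directory is empty
      POSTGRES_INITDB_ARGS: "--data-checksums --locale=C"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: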
Beyond single-container setups, the tool generates multi-service compositions: PostgreSQL with PgBouncer for connection pooling (in session, transaction, or statement mode), PostgreSQL with pgAdmin for web-based management, primary-replica streaming replication pairs with proper pg_hba.conf replication entries, and PostgreSQL paired with application services that depend on database readiness. Each generated configuration includes proper healthcheck definitions using pg_isready with the correct user and database arguments, dependency ordering with depends_on conditions using service_healthy (not just service_started), named volumes for data persistence, and network isolation via custom bridge networks.
The tool also handles the operational concerns that distinguish development configurations from CI and production setups. Development configurations prioritize convenience — exposed ports bound to 127.0.0.1, relaxed authentication, auto-created databases, and mounted init scripts that seed test data. CI configurations emphasize speed and isolation — tmpfs mounts for ephemeral data that bypass disk I/O entirely, minimal logging with log_min_messages=warning, aggressive resource settings like fsync=off and synchronous_commit=off (safe only when data loss is acceptable), and parallelism-friendly port allocation using ${PGPORT:-5432} variable substitution. Production-like configurations focus on resilience — deploy.resources.limits for memory and CPU, restart: unless-stopped policies, Docker logging drivers with max-size and max-file rotation, custom postgresql.conf and pg_hba.conf mounting via the command: postgres -c config_file=... pattern, and backup integration with pg_dump cron sidecars or continuous archiving via pg_basebackup containers.
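A minimal CI-oriented sketch along these lines (the service name, user, and database are illustrative, not required values):

services:
  postgres-ci:
    image: postgres:17-alpine
    environment:
      POSTGRES_USER: ci
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
      POSTGRES_DB: ci_test
    # RAM-backed data directory: fast, but discarded when the container stops
    tmpfs:
      - /var/lib/postgresql/data
    # Durability disabled: acceptable only for throwaway test databases
    command: postgres -c fsync=off -c synchronous_commit=off -c log_min_messages=warning
    ports:
      - "127.0.0.1:${PGPORT:-5432}:5432"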
Security is handled throughout: passwords are never hardcoded but referenced via ${POSTGRES_PASSWORD:?error} syntax that fails fast if the .env file is missing, sensitive environment variables can be sourced from Docker secrets in Swarm-compatible configurations, and port mappings include comments warning about host-network exposure. The generated .env.example template documents every required variable without containing actual secrets.
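For secrets-based setups, the official postgres image also accepts _FILE variants of variables like POSTGRES_PASSWORD, which pairs with Compose file-backed secrets; a sketch, with an illustrative secret file path:

services:
  postgres:
    image: postgres:17
    environment:
      # Read the password from a mounted secret instead of the environment
      POSTGRES_PASSWORD_FILE: /run/secrets/pg_password
    secrets:
      - pg_password
    volumes:
      - pgdata:/var/lib/postgresql/data

secrets:
  pg_password:
    file: ./secrets/pg_password.txt   # illustrative path; never commit real secrets

volumes:
  pgdata: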
Every generated file uses the Compose Specification format (the modern standard that supersedes the versioned v2/v3 formats), includes inline comments explaining non-obvious choices, and checks for common mistakes like using POSTGRES_PASSWORD with an empty value, mounting a configuration file as a directory, or bind-mounting the data directory on macOS, where Docker Desktop's filesystem translation layer makes it perform poorly. The output is designed to be copy-pasted into a project repository and used immediately — no manual fixups required. Whether you are containerizing PostgreSQL for the first time or migrating a fragile docker run one-liner into a maintainable, version-controlled Compose file, this tool produces configurations that work correctly on the first docker compose up.
Examples
services:
  postgres:
    image: postgres:17
    container_name: my_postgres
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d:ro
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s
    restart: unless-stopped

volumes:
  pgdata:
A foundational single-service PostgreSQL setup with a named volume for persistence, init script directory for schema seeding, healthcheck for readiness, and an environment variable reference that fails fast if the password is not set.
services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./custom-postgresql.conf:/etc/postgresql/postgresql.conf:ro
      - ./init-scripts:/docker-entrypoint-initdb.d:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 2g
          cpus: "2.0"
    restart: unless-stopped

  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      POSTGRESQL_HOST: postgres
      POSTGRESQL_PORT: "5432"
      POSTGRESQL_USERNAME: appuser
      POSTGRESQL_PASSWORD: ${POSTGRES_PASSWORD:?error}
      POSTGRESQL_DATABASE: appdb
      PGBOUNCER_POOL_MODE: transaction
      PGBOUNCER_MAX_CLIENT_CONN: "200"
      PGBOUNCER_DEFAULT_POOL_SIZE: "20"
    ports:
      - "6432:6432"
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped

volumes:
  pgdata:
A production-oriented setup with PostgreSQL behind PgBouncer in transaction pooling mode. The custom postgresql.conf is mounted as a file (not a directory) and passed via the command override. Resource limits constrain memory and CPU usage. PgBouncer only starts after PostgreSQL passes its healthcheck.
Inputs and outputs
What you provide
- Target environment (development, CI, production-like)
- PostgreSQL version preference
- Additional services needed (PgBouncer, pgAdmin, replicas, app containers)
- Custom configuration requirements (postgresql.conf, init scripts, resource limits)
What you get
- Complete docker-compose.yml with inline comments
- Supporting file templates (.env, init scripts, custom postgresql.conf)
- Getting started commands and common operations reference
Use cases
- Creating a local development environment with PostgreSQL, seeded with initial schema and test data via init scripts
- Setting up CI pipeline databases with ephemeral storage and health-gated service dependencies
- Building a multi-service stack with PostgreSQL, PgBouncer connection pooling, and pgAdmin for visual management
- Configuring PostgreSQL streaming replication with a primary and one or more read replicas for testing HA scenarios
- Generating production-like configurations with custom postgresql.conf, resource limits, log rotation, and backup sidecars
- Migrating from a manual docker run command to a reproducible, version-controlled Compose file
Features
- Generates valid Compose Specification YAML with inline comments explaining each configuration choice
- Configures named volumes for data persistence and correctly handles the postgres image init-on-empty-datadir behavior
- Includes pg_isready healthchecks with proper interval, timeout, and retries so dependent services wait for real readiness
- Supports multi-service setups: PgBouncer, pgAdmin, replication, application containers with depends_on conditions
- Mounts custom postgresql.conf and pg_hba.conf correctly (as files, not directories) with appropriate container paths
- Adapts output for development, CI, and production contexts with environment-appropriate defaults
- Handles init script ordering, extension installation, and locale configuration via POSTGRES_INITDB_ARGS
Frequently asked questions
Why does my PostgreSQL data disappear when I recreate the container?
The PostgreSQL data directory inside the container is at /var/lib/postgresql/data. Without a volume mount, this directory lives in the container's writable layer and is destroyed when the container is removed (docker compose down without -v, or docker rm). To persist data, mount a named volume: volumes: [pgdata:/var/lib/postgresql/data] and declare it in the top-level volumes: section. Named volumes survive container removal — only docker compose down -v or docker volume rm explicitly deletes them. A common mistake is using a bind mount for the data directory (like ./data:/var/lib/postgresql/data) on macOS or Windows, which works but is significantly slower than a named volume because of the filesystem translation layer in Docker Desktop. For development on macOS, named volumes give substantially better I/O performance because they are stored in the Linux VM rather than translated through VirtioFS or gRPC-FUSE. For CI, consider tmpfs mounts (tmpfs: [/var/lib/postgresql/data]) to avoid disk I/O entirely and speed up test runs — but understand the data is gone when the container stops.
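A side-by-side sketch of the two storage choices discussed above:

services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume: survives docker compose down
    # CI alternative: replace the volume above with a RAM-backed mount
    # tmpfs:
    #   - /var/lib/postgresql/data

volumes:
  pgdata: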
How do init scripts in /docker-entrypoint-initdb.d/ work, and why are they not running?
The official postgres image runs files in /docker-entrypoint-initdb.d/ only once — on the very first startup when the data directory (/var/lib/postgresql/data) is empty. It processes .sql, .sql.gz, and .sh files in filename-sorted order (locale-dependent, so prefix with numbers like 01-schema.sql, 02-seed.sql). The most common reason scripts do not run is that the data directory already contains data from a previous run, so the entrypoint skips initialization entirely. To re-run init scripts, you must remove the volume: docker compose down -v && docker compose up. Other causes: the mount path is wrong (e.g., mounting a single file instead of a directory, or a typo in the path), a .sql file has a syntax error that causes the entrypoint to abort, or a .sh script lacks execute permission (the entrypoint sources non-executable .sh files instead of executing them, which can behave differently than intended). Check docker compose logs postgres for error messages during initialization. For scripts that should run on every startup (not just the first), use the command override or a custom entrypoint wrapper instead.
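A sketch of the mount with explicit numeric ordering (the file names are illustrative):

services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
    volumes:
      - pgdata:/var/lib/postgresql/data
      # Executed only on first initialization, in sorted order:
      #   ./init-scripts/01-schema.sql
      #   ./init-scripts/02-seed.sql
      - ./init-scripts:/docker-entrypoint-initdb.d:ro

volumes:
  pgdata: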
How do I mount a custom postgresql.conf in Docker Compose?
There are two reliable approaches. The first and most common is to mount the file as a read-only bind mount to a custom path and tell PostgreSQL to use it via the command override: volumes: [./postgresql.conf:/etc/postgresql/postgresql.conf:ro] with command: postgres -c config_file=/etc/postgresql/postgresql.conf. The :ro flag prevents accidental writes. The second approach is to use command to pass individual settings directly: command: postgres -c shared_buffers=256MB -c work_mem=16MB -c max_connections=100. This is simpler for a few settings but becomes unwieldy for many. A critical mistake is mounting the file directly to /var/lib/postgresql/data/postgresql.conf — this fails because the data directory is initialized by the entrypoint, and the mount conflicts with the initialization process. Similarly, do not mount a directory to a file path or vice versa; Docker will create a directory if the source does not exist, and PostgreSQL will fail to read a directory as a configuration file. For pg_hba.conf, mount it to a custom path and reference it via command: postgres -c hba_file=/etc/postgresql/pg_hba.conf.
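Both approaches from the answer above, sketched (the setting values are illustrative):

services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
    volumes:
      - pgdata:/var/lib/postgresql/data
      # Approach 1: mount the file (not a directory) read-only at a custom path
      - ./postgresql.conf:/etc/postgresql/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    # Approach 2 (alternative): pass individual settings instead of a whole file
    # command: postgres -c shared_buffers=256MB -c work_mem=16MB -c max_connections=100

volumes:
  pgdata: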
Should I use the version key in my docker-compose.yml file?
No. The top-level version key (e.g., version: "3.8") is obsolete and ignored by modern Docker Compose (v2.x CLI, which replaced the older v1 Python-based docker-compose). The Compose Specification — the current standard — does not use a version key. Including it triggers a deprecation warning in recent Docker Compose versions and has no effect on behavior. The historical v2/v3 versioning caused significant confusion because version: "3" actually removed features from v2 (like mem_limit and condition in depends_on) in favor of Docker Swarm's deploy.resources syntax. The modern Compose Specification merges the best of both: deploy.resources.limits works for both local and Swarm deployments, depends_on supports condition: service_healthy, and features like profiles, extends, variable interpolation, and YAML anchors are all available without worrying about version compatibility. Simply start your file with services: as the top-level key. If you are following an older tutorial that includes version: "3.x", remove that line and everything else should work as-is with current Docker Compose. The Compose CLI v2 (invoked as docker compose with a space, not the hyphenated docker-compose) is the actively maintained version and ships with Docker Desktop by default.
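A sketch showing the once-conflicting settings living together in one version-less file (the application image name is illustrative):

services:
  app:
    image: my-app:latest               # illustrative application image
    depends_on:
      postgres:
        condition: service_healthy     # v2-era condition, valid in the Compose Specification
  postgres:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?error}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          memory: 1g                   # v3/Swarm-era limits, also applied locally

volumes:
  pgdata: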
Ready to try it?
Use this tool for free — powered by PostgresAI.