postgresql.conf tuner
Get optimized PostgreSQL configuration for your specific hardware and workload
For DBAs and developers who want to tune postgresql.conf for their specific server hardware, workload pattern (OLTP, OLAP, or mixed), and PostgreSQL version, moving beyond default settings to production-grade configuration.
About this tool
PostgreSQL ships with conservative default settings designed to run on minimal hardware without crashing. These defaults — 128 MB shared_buffers, 4 MB work_mem, 100 max_connections — are intentionally safe but leave enormous performance on the table for production workloads. Tuning postgresql.conf correctly for your specific hardware and workload can deliver 2x–10x throughput improvements without changing a single query.
The challenge is that PostgreSQL has over 300 configuration parameters, and many of them interact in non-obvious ways. Setting shared_buffers too high on Linux can cause double-buffering with the OS page cache — PostgreSQL and the kernel end up caching the same 8 kB pages independently, wasting memory that could serve your working set. Increasing work_mem helps sort-heavy analytical queries but can cause out-of-memory kills when multiplied across max_connections and parallel workers. Aggressive checkpoint settings improve write throughput but risk longer recovery times after a crash. Every setting involves tradeoffs that depend on your hardware profile, workload characteristics, and reliability requirements.
Memory tuning alone involves a careful balancing act. shared_buffers typically performs best at 25% of RAM on Linux, but the right value depends on your active data set size and access patterns. effective_cache_size should reflect the total memory available for caching (shared_buffers plus OS page cache) so the query planner makes accurate cost estimates for index scans. work_mem must be sized conservatively because a single complex query can allocate multiple work_mem-sized buffers for each sort and hash operation, and hundreds of concurrent connections multiply that further. maintenance_work_mem affects the speed of VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY — setting it higher (1–4 GB) dramatically speeds these operations without the concurrency risk of work_mem. On systems with large shared_buffers (8 GB+), enabling huge_pages eliminates thousands of TLB misses per second and measurably reduces CPU overhead for memory-intensive workloads.
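Before applying any of these values, it helps to record what the server currently runs with. A query against the standard pg_settings catalog lists the memory parameters discussed above:

```sql
-- Current values of the key memory parameters; "unit" shows how to interpret
-- "setting" (e.g. shared_buffers is reported in 8kB blocks)
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'maintenance_work_mem', 'huge_pages');
```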
WAL and checkpoint configuration is equally critical for write-heavy systems. The default max_wal_size of 1 GB forces frequent checkpoints that create I/O spikes, stalling concurrent queries. Increasing it to 4–16 GB and setting checkpoint_completion_target to 0.9 spreads the write load evenly. PostgreSQL 15 introduced zstd WAL compression, which can cut WAL volume in half with negligible CPU cost — this benefits both local I/O and streaming replication bandwidth. For storage-level tuning, random_page_cost should drop from the default 4.0 to 1.1 on SSDs, and effective_io_concurrency should increase to 200 for NVMe drives, telling the planner and bitmap heap scans to issue many concurrent prefetch requests.
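One way to see whether checkpoints are being forced by WAL volume is the cumulative statistics system. The sketch below reads pg_stat_bgwriter, which carries these counters through PostgreSQL 16; on PostgreSQL 17 the checkpoint columns moved to pg_stat_checkpointer:

```sql
-- If checkpoints_req (triggered by hitting max_wal_size) dominates
-- checkpoints_timed, max_wal_size is likely too small for the write load
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
FROM pg_stat_bgwriter;
```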
This tool generates a tuned postgresql.conf based on your specific environment. Provide your total RAM, CPU cores, storage type (SSD vs. spinning disk), workload pattern (OLTP, OLAP, or mixed), expected connection count, and PostgreSQL version. The AI produces concrete parameter values with detailed explanations of why each value was chosen, what tradeoff it represents, and how to validate the change in production. It covers memory allocation, connection handling, WAL and checkpoint tuning, query planner cost parameters, parallelism, autovacuum settings for high-churn tables, and logging configuration for production observability.
Unlike static configuration calculators that apply fixed formulas, this tool adapts its recommendations to your workload. An OLTP system serving short transactions through a connection pooler gets very different advice than a data warehouse running complex analytical queries with few concurrent users. The tool also accounts for version-specific features: huge_pages behavior changes across versions, new parameters introduced in PostgreSQL 14 through 17, deprecated settings that should be removed during upgrades, and changed defaults that may silently alter behavior after a major version update. Every recommendation includes the rationale and a safety note so you understand the risk profile before applying changes to production.
Examples
# Hardware: 64 GB RAM, 16 cores, NVMe SSD
# Workload: OLTP (web application backend)
# PostgreSQL 16, pgBouncer in front, ~50 active queries max
# Memory
shared_buffers = 16GB
effective_cache_size = 48GB
work_mem = 64MB
maintenance_work_mem = 2GB
huge_pages = try
# WAL
wal_buffers = 64MB
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
wal_compression = zstd
# Planner
random_page_cost = 1.1
effective_io_concurrency = 200
seq_page_cost = 1.0
# Connections
max_connections = 200
# Parallelism
max_parallel_workers_per_gather = 2
max_parallel_workers = 8
max_parallel_maintenance_workers = 4
# Autovacuum
autovacuum_max_workers = 4
autovacuum_vacuum_cost_limit = 2000
autovacuum_vacuum_scale_factor = 0.02
autovacuum_analyze_scale_factor = 0.01
Sample tuned configuration for a 64 GB OLTP server on NVMe. shared_buffers at 25% of RAM, effective_cache_size at 75%, work_mem conservative because OLTP means many concurrent sessions. random_page_cost lowered to 1.1 for SSD. wal_compression set to zstd (PostgreSQL 15+) to reduce WAL volume.
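After writing a configuration like this and running SELECT pg_reload_conf() (or pg_ctl reload), a query such as the following confirms which values took effect and which still require a restart:

```sql
-- Settings read from postgresql.conf; pending_restart = true means the new
-- value will not apply until the server restarts (e.g. shared_buffers)
SELECT name, setting, pending_restart
FROM pg_settings
WHERE source = 'configuration file'
ORDER BY name;
```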
# Hardware: 256 GB RAM, 64 cores, NVMe SSD array
# Workload: OLAP (data warehouse, few concurrent users)
# PostgreSQL 17
# Memory
shared_buffers = 64GB
effective_cache_size = 200GB
work_mem = 1GB
maintenance_work_mem = 4GB
huge_pages = on
hash_mem_multiplier = 2.0
# WAL
wal_buffers = 64MB
max_wal_size = 16GB
checkpoint_completion_target = 0.9
# Planner
random_page_cost = 1.1
effective_io_concurrency = 200
max_parallel_workers_per_gather = 8
max_parallel_workers = 32
max_parallel_maintenance_workers = 8
# JIT
jit = on
jit_above_cost = 100000
# Autovacuum
autovacuum_max_workers = 6
autovacuum_vacuum_cost_limit = 4000
Sample tuned configuration for a 256 GB OLAP server. work_mem is much higher (1 GB) because few concurrent users run complex queries with large sorts and hash joins. Parallelism is aggressive with 8 workers per gather. hash_mem_multiplier (PG 13+) allows hash operations to use 2x work_mem. huge_pages set to "on" (not "try") because the server should be configured with sufficient huge pages for 64 GB shared_buffers.
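Before setting huge_pages = on — which, unlike try, refuses to start if the OS reservation is insufficient — check how many huge pages the configuration actually needs. PostgreSQL 15 and later computes this for you:

```sql
-- PostgreSQL 15+: number of huge pages this configuration requires; reserve at
-- least this many via vm.nr_hugepages before restarting with huge_pages = on
SHOW shared_memory_size_in_huge_pages;
```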
Inputs and outputs
What you provide
- Total RAM, CPU cores, and storage type (SSD/HDD/NVMe)
- Workload pattern (OLTP, OLAP, or mixed)
- PostgreSQL version
- Expected connection count and whether a connection pooler is used
- Optionally, current postgresql.conf for review
What you get
- Complete tuned postgresql.conf parameter set grouped by category
- Explanation of rationale and tradeoffs for each setting
- Restart vs. reload classification for each parameter
- Validation queries to confirm settings are active
Use cases
- Configuring a new PostgreSQL server for production, moving beyond default settings to hardware-appropriate values
- Re-tuning an existing server after a hardware upgrade (more RAM, migration from HDD to NVMe SSD, additional CPU cores)
- Optimizing postgresql.conf for a specific workload pattern — high-throughput OLTP, heavy analytical OLAP, or a mixed workload
- Preparing postgresql.conf changes for a major version upgrade, adopting new parameters and removing deprecated ones
- Tuning autovacuum for high-write workloads where default autovacuum settings cannot keep up with dead tuple accumulation
- Right-sizing connection limits and memory parameters to prevent out-of-memory conditions under peak load
Features
- Generates concrete parameter values for 30+ settings based on your RAM, CPU, storage, and workload type
- Distinguishes OLTP, OLAP, and mixed workload patterns with fundamentally different tuning strategies
- Explains the rationale and tradeoffs for every recommended setting, not just the value
- Accounts for version-specific parameters and behavior changes across PostgreSQL 13 through 17
- Includes autovacuum tuning for high-churn tables with per-table override recommendations
- Warns about dangerous combinations (e.g., high work_mem with high max_connections) and OOM risk
- Provides validation queries to confirm settings are taking effect and performing as expected
Frequently asked questions
How should I set shared_buffers and why not use all available RAM?
The common starting point is 25% of total RAM for shared_buffers, but the optimal value depends on your workload and operating system. PostgreSQL relies heavily on the OS page cache for reading data, so allocating too much to shared_buffers starves the OS cache and can actually hurt performance through double-buffering — the same pages end up cached in both shared_buffers and the kernel page cache. On a 64 GB server, 16 GB for shared_buffers is a solid default, leaving ~48 GB for the OS to cache the rest of your data files, WAL, temp files, and sort spills. For very large databases (multiple TB), some workloads benefit from going higher — up to 40% of RAM — especially if the active working set fits in shared_buffers. On Windows, keep shared_buffers lower (around 512 MB to 1 GB) due to how Windows manages large memory allocations. After changing shared_buffers, set effective_cache_size to approximately 75% of total RAM — this does not allocate memory but tells the planner how much total cache (shared_buffers + OS cache) is likely available, which affects its cost estimates for index scans vs. sequential scans. Always enable huge_pages (set to try or on) when shared_buffers exceeds 8 GB on Linux to reduce TLB pressure and improve memory access performance.
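A rough way to sanity-check shared_buffers sizing after a change is the buffer cache hit ratio from pg_stat_database — a persistently low ratio under steady load suggests the working set is not staying cached (treat it as a signal, not a target to chase blindly):

```sql
-- Per-database cache hit percentage since statistics were last reset
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE blks_hit + blks_read > 0;
```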
How do I tune work_mem without causing out-of-memory problems?
work_mem controls the amount of memory each sort or hash operation can use before spilling to disk. The danger is that this limit applies per operation, per query, per connection — a single complex query with four sort nodes and two hash joins might allocate up to 6x work_mem, and if 100 connections run similar queries simultaneously, total memory usage can reach 600x work_mem. This is why the default of 4 MB is so conservative. For OLTP workloads with many concurrent connections, keep work_mem between 16 MB and 64 MB and use SET work_mem at the session or transaction level for known expensive queries. For OLAP workloads with few concurrent users running complex analytical queries, work_mem can be 256 MB to 2 GB because the connection count is low. A practical upper bound is (Total RAM - shared_buffers) / (max_connections * 3), reduced by a further 50% for safety. Monitor with EXPLAIN (ANALYZE, BUFFERS) — a Sort Method of external merge, or a hash node reporting more than one batch, indicates work_mem is too low for that query. PostgreSQL 13 introduced hash_mem_multiplier, which lets hash operations use a multiple of work_mem without affecting sort allocations (the default rose from 1.0 to 2.0 in PG 15), giving you finer control.
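To see whether a specific query is spilling, run it under EXPLAIN (ANALYZE, BUFFERS) and inspect the sort and hash nodes. The table below is hypothetical; the SET shows the session-level override mentioned above:

```sql
-- "Sort Method: external merge  Disk: ...kB" in the output means the sort
-- spilled; "Batches: 2" or more on a hash node means the same for hashing
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, sum(amount)
FROM orders              -- hypothetical table
GROUP BY customer_id
ORDER BY sum(amount) DESC;

-- Raise the limit for this session only, then re-run and compare
SET work_mem = '256MB';
```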
What WAL and checkpoint settings matter most for write-heavy workloads?
For write-heavy workloads, the most impactful settings are max_wal_size, checkpoint_completion_target, wal_buffers, and wal_compression. max_wal_size (default 1 GB) controls how much WAL accumulates before a checkpoint is triggered — setting it to 4–16 GB reduces checkpoint frequency, which smooths out I/O spikes but increases crash recovery time. checkpoint_completion_target (default 0.9 since PG 14, was 0.5 before) spreads dirty page writes across the checkpoint interval rather than flushing them all at once, dramatically reducing I/O stalls. wal_buffers auto-sizes to 1/32 of shared_buffers, capped at 16 MB; raising it explicitly to 64 MB can help systems with bursts of concurrent commits. PostgreSQL 15 introduced wal_compression = zstd, which can reduce WAL volume by 50–70% for workloads with large tuple modifications — this saves disk I/O and speeds up replication with minimal CPU overhead. For replicated setups, also consider max_wal_senders, wal_keep_size, and wal_sender_timeout. Monitor checkpoint behavior with log_checkpoints = on — if the log warns that checkpoints are occurring too frequently, increase max_wal_size.
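To size max_wal_size from real data, measure how much WAL the workload actually generates over an interval. The LSN values below are placeholders — substitute the two positions you capture:

```sql
-- Capture the current WAL position, wait through a representative busy period,
-- then capture it again
SELECT pg_current_wal_lsn();

-- Compute the WAL bytes written between the two captured positions
-- (hypothetical LSN values shown)
SELECT pg_size_pretty(pg_wal_lsn_diff('1/A0000000', '1/40000000'));
```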
How should I configure PostgreSQL differently for OLTP vs. OLAP workloads?
OLTP and OLAP workloads have fundamentally different resource profiles, and tuning them identically wastes capacity. OLTP workloads (web apps, transactional systems) have many concurrent short-lived queries touching small numbers of rows. They need: lower work_mem (16–64 MB) because hundreds of sessions multiply it, higher max_connections (or better, connection pooling with PgBouncer), conservative parallelism (max_parallel_workers_per_gather = 2 or 0 for latency-sensitive queries), and aggressive autovacuum to keep tables bloat-free for fast index lookups. OLAP workloads (data warehouses, analytics) have few concurrent users running complex queries scanning millions of rows. They need: high work_mem (256 MB–2 GB) for large sorts and hash joins, aggressive parallelism (max_parallel_workers_per_gather = 4–8), larger maintenance_work_mem for index builds and VACUUM, JIT compilation enabled for long-running queries (jit_above_cost tuned to your typical query cost), and max_wal_size set high (8–16 GB) because bulk loads generate enormous WAL. Mixed workloads are the hardest — use connection pooling, set conservative global defaults, and apply aggressive settings per-session for analytical queries using SET LOCAL.
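For the mixed-workload case, the per-session override pattern looks like this — conservative globals stay in postgresql.conf, and an analytical job opts into OLAP-style settings for a single transaction (table and values are illustrative):

```sql
BEGIN;
-- These overrides apply only inside this transaction and revert automatically
-- at COMMIT or ROLLBACK
SET LOCAL work_mem = '512MB';
SET LOCAL max_parallel_workers_per_gather = 4;

SELECT region, date_trunc('month', created_at) AS month, sum(revenue)
FROM sales               -- hypothetical table
GROUP BY 1, 2;
COMMIT;
```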
What version-specific postgresql.conf changes should I know about for PostgreSQL 14 through 17?
Each major version introduces new parameters and changes defaults. PostgreSQL 14 added idle_session_timeout for killing abandoned idle sessions (complementing idle_in_transaction_session_timeout, which dates back to 9.6), added compute_query_id so query IDs appear in pg_stat_activity and logs without loading an extension, added lz4 as a wal_compression method, and changed the default checkpoint_completion_target from 0.5 to 0.9 — if you carried an explicit 0.5 forward from PG 13 or earlier, remove or update it. PostgreSQL 15 added zstd support to wal_compression, turned log_checkpoints on by default, and removed stats_temp_directory entirely — delete it from your config if present, or the server will not start. PostgreSQL 16 introduced reserved_connections, granting connection slots to members of the pg_use_reserved_connections role (complementing superuser_reserved_connections). PostgreSQL 17 added summarize_wal for incremental backups, io_combine_limit for controlling how many blocks are combined into a single read, allow_alter_system to lock down ALTER SYSTEM, and the read-only huge_pages_status parameter for verifying huge pages are actually in use. Always review the release notes for your target version and remove deprecated parameters — leftover settings from older versions can prevent PostgreSQL from starting after an upgrade.
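Before a major upgrade, it helps to inventory every non-default setting and where it was set, then check each name against the target version's release notes (the sourcefile column requires superuser or pg_read_all_settings membership):

```sql
-- All settings that deviate from compiled-in defaults, with their origin
SELECT name, setting, source, sourcefile, sourceline
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY source, name;
```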
Ready to try it?
Use this tool for free — powered by PostgresAI.