
Lock contention analyzer

Find what is blocking your queries and resolve lock contention in PostgreSQL

For engineers facing lock contention, blocked queries, deadlocks, or hanging DDL operations who need to diagnose which sessions hold conflicting locks and resolve the situation safely.

About this tool

Lock contention is one of the most disruptive problems in production PostgreSQL. A single long-running transaction holding an AccessExclusiveLock can block every other session trying to access the table — and those blocked sessions queue up behind it, cascading into a full application outage. Understanding PostgreSQL's lock system is essential for anyone operating a database under concurrent workloads.

PostgreSQL uses a multi-granularity locking system with eight table-level lock modes, from the permissive AccessShareLock (acquired by select queries) to the exclusive AccessExclusiveLock (acquired by DROP TABLE, TRUNCATE, VACUUM FULL, and most ALTER TABLE variants). Between these extremes lie RowShareLock, RowExclusiveLock, ShareUpdateExclusiveLock, ShareLock, ShareRowExclusiveLock, and ExclusiveLock, each with a specific conflict matrix that determines which operations can proceed concurrently and which must wait. The conflict matrix is symmetric but far from intuitive — for instance, RowExclusiveLock (held by any insert, update, or delete) does not conflict with itself, which is why multiple sessions can write concurrently, but it does conflict with ShareLock (held by non-concurrent CREATE INDEX), which is why index creation blocks writes unless you use the CONCURRENTLY option.
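
To see these modes on a live system, a minimal sketch like the following lists the lock modes currently held or awaited on a single relation (public.orders is a placeholder table name — substitute your own):

-- Lock modes currently held or awaited on one table
select
  l.pid,
  l.mode,                                -- e.g. AccessShareLock, RowExclusiveLock
  l.granted,                             -- false means the session is still waiting
  a.state,
  left(a.query, 60) as query_snippet
from pg_locks l
join pg_stat_activity a on a.pid = l.pid
where l.relation = 'public.orders'::regclass
order by l.granted desc, l.pid;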

This tool helps you diagnose lock contention by analyzing the output of pg_locks, pg_stat_activity, and related system views. It identifies blocking and blocked process chains, explains which lock types conflict and why, and provides concrete steps to resolve the contention — whether that means canceling a query, terminating a session, restructuring a migration, or adjusting lock_timeout and deadlock_timeout settings. The diagnostic process starts with querying pg_locks to see all currently held and awaited locks, joining with pg_stat_activity to correlate locks with the queries and sessions that hold them, and using pg_blocking_pids() (PostgreSQL 9.6+) to quickly identify which processes are blocking which.

Beyond simple two-session blocking scenarios, real-world lock problems often involve multi-level wait chains: session A holds a lock, session B waits on A, and sessions C through Z queue behind B. DDL operations are especially dangerous because they require AccessExclusiveLock, and even a brief wait for that lock blocks all new queries behind the DDL in the lock queue — this is the notorious "lock queue stampede" problem. A common scenario: an ALTER TABLE waits for a long-running select to finish, and while it waits, every new select also queues behind the ALTER TABLE. Within seconds, your connection pool is exhausted. This tool traces these chains and recommends the safest resolution path.
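
A hypothetical three-session timeline makes the stampede concrete (the orders table and session labels are purely illustrative):

-- Session A: long-running read, holds AccessShareLock until it finishes
begin;
select count(*) from orders;               -- still running

-- Session B: migration needs AccessExclusiveLock, queues behind session A
alter table orders add column note text;   -- waits

-- Sessions C..Z: new application reads queue behind B, not behind A
select * from orders where id = 42;        -- blocked; the pool fills up within seconds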

The tool also covers advisory locks (pg_advisory_lock, pg_try_advisory_lock), which applications use for distributed coordination, and deadlocks, where PostgreSQL's deadlock detector checks for a lock cycle once a session has waited for deadlock_timeout (default 1 second) and aborts one of the participants. It explains common deadlock patterns — such as concurrent updates that touch the same rows in different order, or foreign key checks that acquire locks on the referenced table — how to read the deadlock log messages, and how to restructure transactions to prevent recurrence.
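
A sketch for spotting advisory locks that were never released, using only the standard pg_locks and pg_stat_activity views:

-- Advisory locks currently held, with the sessions holding them
select
  l.pid,
  l.classid,                 -- high 32 bits of a bigint key (or key1 of the two-int form)
  l.objid,                   -- low 32 bits of the key (or key2)
  a.usename,
  a.state,
  now() - a.state_change as in_state_for
from pg_locks l
join pg_stat_activity a on a.pid = l.pid
where l.locktype = 'advisory'
order by in_state_for desc;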

For common DDL-heavy operations — creating indexes, adding columns with defaults, changing column types — the tool recommends lock-safe patterns such as CREATE INDEX CONCURRENTLY, using short lock_timeout values with retries, and splitting migrations into lock-friendly steps that minimize the window of exclusive locking. It also explains the interaction between lock_timeout, statement_timeout, and deadlock_timeout, and how to configure each for different operational scenarios.
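
A minimal sketch of the fail-fast pattern (the table, index name, and timeout values are illustrative, not recommendations):

-- Fail-fast DDL: give up on the lock instead of queueing behind it
set lock_timeout = '3s';                    -- abort if the lock is not acquired in 3 seconds
set statement_timeout = '60s';              -- bound the total runtime of the statement
alter table orders add column note text;    -- retried by the deploy script if it fails
reset lock_timeout;
reset statement_timeout;

-- Index creation that does not block writes (cannot run inside a transaction block)
create index concurrently if not exists orders_note_idx on orders (note);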

Safety notes:
  • pg_terminate_backend forcibly closes a connection and rolls back its open transaction — any in-flight work is lost
  • pg_cancel_backend only cancels the current query and is the safer first option; it does not close the connection
  • Never use OS-level kill -9 on PostgreSQL backends — the postmaster treats the killed backend as a crash that may have corrupted shared memory, and it terminates every other connection while the server goes through crash recovery
  • Terminating a session running a DDL migration may leave the schema in a partially migrated state depending on the operation
  • Setting lock_timeout too low on application connections can cause legitimate queries to fail under normal contention

Examples

-- Find all blocked processes and what is blocking them
select
  blocked_locks.pid as blocked_pid,
  blocked_activity.usename as blocked_user,
  blocked_activity.query as blocked_query,
  blocked_activity.wait_event_type,
  blocked_activity.wait_event,
  blocking_locks.pid as blocking_pid,
  blocking_activity.usename as blocking_user,
  blocking_activity.query as blocking_query,
  blocking_activity.state as blocking_state,
  now() - blocking_activity.query_start as blocking_duration
from pg_catalog.pg_locks blocked_locks
join pg_catalog.pg_stat_activity blocked_activity
  on blocked_activity.pid = blocked_locks.pid
join pg_catalog.pg_locks blocking_locks
  on blocking_locks.locktype = blocked_locks.locktype
  and blocking_locks.database is not distinct from blocked_locks.database
  and blocking_locks.relation is not distinct from blocked_locks.relation
  and blocking_locks.page is not distinct from blocked_locks.page
  and blocking_locks.tuple is not distinct from blocked_locks.tuple
  and blocking_locks.virtualxid is not distinct from blocked_locks.virtualxid
  and blocking_locks.transactionid is not distinct from blocked_locks.transactionid
  and blocking_locks.classid is not distinct from blocked_locks.classid
  and blocking_locks.objid is not distinct from blocked_locks.objid
  and blocking_locks.objsubid is not distinct from blocked_locks.objsubid
  and blocking_locks.pid != blocked_locks.pid
join pg_catalog.pg_stat_activity blocking_activity
  on blocking_activity.pid = blocking_locks.pid
where not blocked_locks.granted
order by blocking_duration desc;

The standard blocking query that joins pg_locks with pg_stat_activity to show every blocked session alongside the session that blocks it, including the exact queries and how long the blocker has been running. Paste the output and the AI will trace the lock chain and recommend a resolution.

-- Simpler approach using pg_blocking_pids() (PostgreSQL 9.6+)
select
  pid,
  usename,
  pg_blocking_pids(pid) as blocked_by,
  query as blocked_query,
  wait_event_type,
  wait_event,
  state,
  now() - query_start as duration
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0
order by duration desc;

A simpler alternative using the pg_blocking_pids() function (available since PostgreSQL 9.6) that returns an array of PIDs blocking each waiting session. Faster to type during an incident but provides less detail about the specific lock types involved.

Inputs and outputs

What you provide

  • Output of pg_locks and pg_stat_activity queries showing current lock state
  • PostgreSQL log entries for deadlock events (with DETAIL and CONTEXT lines)
  • Description of the DDL operation or migration that is blocked or being planned
  • PostgreSQL version and approximate table sizes for DDL planning

What you get

  • Lock wait chain showing blocker-to-blocked relationships
  • Identification of conflicting lock types with explanation from the conflict matrix
  • Safe resolution steps (pg_cancel_backend / pg_terminate_backend with appropriate warnings)
  • Preventive recommendations: lock_timeout, transaction design, DDL patterns

Use cases

  • Diagnosing production incidents where queries are piling up and the application is unresponsive due to lock contention
  • Identifying which session holds the lock that is blocking a DDL migration (ALTER TABLE, CREATE INDEX) from proceeding
  • Analyzing deadlock log entries from PostgreSQL to understand why two transactions deadlocked and how to prevent recurrence
  • Planning safe DDL operations on high-traffic tables by understanding lock types, queue behavior, and lock_timeout strategies
  • Investigating advisory lock leaks where application-level pg_advisory_lock calls are not properly released
  • Tracing multi-level lock wait chains where a single long-running transaction cascades into dozens of blocked sessions

Features

  • Parses pg_locks and pg_stat_activity output to build a visual lock wait chain showing blocker-to-blocked relationships
  • Explains the PostgreSQL lock conflict matrix — which lock modes conflict with which, and why a specific operation is waiting
  • Identifies safe resolution paths: pg_cancel_backend for query cancellation vs. pg_terminate_backend for session termination, with clear guidance on when to use each
  • Recommends lock-safe DDL patterns including CREATE INDEX CONCURRENTLY, short lock_timeout with retries, and phased ALTER TABLE
  • Analyzes deadlock detector output and explains the cycle of locks that caused the deadlock
  • Covers advisory lock diagnostics — finding unreleased advisory locks and identifying the sessions that hold them

Frequently asked questions

What are the PostgreSQL lock modes and which operations acquire them?

PostgreSQL has eight table-level lock modes arranged from least to most restrictive. AccessShareLock is acquired by select and conflicts only with AccessExclusiveLock. RowShareLock is acquired by select for update and select for share. RowExclusiveLock is acquired by insert, update, delete, and merge. ShareUpdateExclusiveLock is acquired by VACUUM (non-FULL), ANALYZE, CREATE INDEX CONCURRENTLY, CREATE STATISTICS, and ALTER TABLE VALIDATE CONSTRAINT — it conflicts with itself, preventing two concurrent VACUUM operations on the same table. ShareLock is acquired by CREATE INDEX (non-concurrently) and conflicts with RowExclusiveLock, which is why non-concurrent index creation blocks writes. ShareRowExclusiveLock is acquired by CREATE TRIGGER and some ALTER TABLE forms. ExclusiveLock is acquired by REFRESH MATERIALIZED VIEW CONCURRENTLY. AccessExclusiveLock is the most restrictive — acquired by DROP TABLE, TRUNCATE, REINDEX, VACUUM FULL, and most ALTER TABLE commands — and it conflicts with every other lock mode, including AccessShareLock, meaning even select queries are blocked. Understanding this hierarchy and the full conflict matrix is critical for planning migrations on live systems.
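
As a hypothetical two-session demonstration of the most restrictive case (orders is a placeholder table):

-- Session 1: TRUNCATE takes AccessExclusiveLock and holds it until commit
begin;
truncate orders;
-- transaction intentionally left open

-- Session 2: a plain select needs AccessShareLock, which conflicts
select * from orders limit 1;   -- blocks until session 1 commits or rolls back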

How do I safely resolve lock contention during a production incident?

First, identify the head of the lock chain — the session that is not itself waiting on anything but is blocking others. Use pg_blocking_pids() or the full pg_locks join query to trace the chain. Then decide between pg_cancel_backend(pid) and pg_terminate_backend(pid). pg_cancel_backend sends a query cancellation signal: if the blocking session is actively running a query, it will cancel that query but keep the session connected, preserving the transaction context. This is the safer option. pg_terminate_backend forcibly closes the entire connection, rolling back any open transaction — use it when the blocking session is idle in transaction or when pg_cancel_backend does not resolve the situation. Before terminating, check the blocking query: if it is a critical data migration, terminating it may leave your data in an inconsistent state. Always set statement_timeout and lock_timeout on DDL sessions to prevent future incidents. A pattern like set lock_timeout = '3s'; ALTER TABLE ...; reset lock_timeout; ensures your DDL will fail fast rather than queue up and block the world.
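
For example, with 12345 standing in for the blocking PID found by the chain query:

-- Step 1: cancel the running query but keep the session connected
select pg_cancel_backend(12345);

-- Step 2: if the blocker is idle in transaction, there is no query to cancel — escalate
select pg_terminate_backend(12345);   -- closes the connection and rolls back its transaction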

Why does my ALTER TABLE hang even though no other queries are running?

The most common cause is a forgotten open transaction. A session that ran a select on the table and never committed or rolled back still holds an AccessShareLock, which conflicts with the AccessExclusiveLock that ALTER TABLE needs. The session may appear idle in pg_stat_activity with state = 'idle in transaction', and because it is not actively running a query, it is easy to overlook. Check for such sessions with select * from pg_stat_activity where state = 'idle in transaction' and now() - state_change > interval '1 minute'. What makes this worse is PostgreSQL's lock queue fairness: even if the idle-in-transaction session would finish soon, once your ALTER TABLE starts waiting for the AccessExclusiveLock, all new queries that need AccessShareLock queue behind the ALTER TABLE, not behind the idle session. This snowballs until the entire table is inaccessible. The solution is to either terminate the idle-in-transaction session or set idle_in_transaction_session_timeout so abandoned transactions are rolled back automatically. For the ALTER TABLE itself, always use set lock_timeout = '5s' so it fails fast and you can retry rather than blocking everything.
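
A sketch of the cleanup and prevention steps (the one-minute and five-minute thresholds are illustrative):

-- Find abandoned idle-in-transaction sessions
select pid, usename, now() - state_change as idle_for, left(query, 60) as last_query
from pg_stat_activity
where state = 'idle in transaction'
  and now() - state_change > interval '1 minute';

-- Prevent recurrence: automatically roll back abandoned transactions
alter system set idle_in_transaction_session_timeout = '5min';
select pg_reload_conf();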

How do deadlocks happen in PostgreSQL and how do I prevent them?

A deadlock occurs when two or more transactions each hold a lock that the other needs to proceed. For example, transaction A updates row 1 then tries to update row 2, while transaction B updates row 2 then tries to update row 1. Neither can proceed because each is waiting for the other to release its lock. PostgreSQL detects deadlocks by running a cycle-detection algorithm after a waiting session has been blocked for deadlock_timeout (default 1 second). It then terminates one of the transactions with the error ERROR: deadlock detected. The log entry includes DETAIL and CONTEXT lines showing the exact lock cycle. The most effective prevention strategy is consistent lock ordering: if all transactions that touch rows in a given table acquire locks in the same order (e.g., by primary key ascending), cycles become impossible. For bulk updates, sort your batch by the primary key before applying. For transactions that touch multiple tables, establish a global table ordering convention. You can also reduce deadlock likelihood by keeping transactions short, avoiding select for update when not necessary, and using SKIP LOCKED for queue-like patterns where strict ordering is not required.
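
Two sketches of these prevention patterns, with accounts and jobs as placeholder tables:

-- 1) Consistent lock ordering: lock rows in primary-key order before updating
begin;
select id from accounts where id in (1, 2) order by id for update;
update accounts set balance = balance - 10 where id = 1;
update accounts set balance = balance + 10 where id = 2;
commit;

-- 2) Queue worker: skip locked rows instead of waiting on them
select id
from jobs
where status = 'pending'
order by id
limit 10
for update skip locked;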

Ready to try it?

Use this tool for free — powered by PostgresAI.