
· 11 min read
Nikolay Samokhvalov

The setting max_locks_per_transaction is mysterious; it is a good illustration of Socrates' "I know that I know nothing", and that is the main fact to memorize about it. Don't try to remember the details: unless you touch this setting often, you'll forget them (I do). Instead, let's rely on the docs:

The shared lock table has space for max_locks_per_transaction objects (e.g., tables) per server process or prepared transaction; hence, no more than this many distinct objects can be locked at any one time. This parameter limits the average number of object locks used by each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have queries that touch many different tables in a single transaction, e.g., query of a parent table with many children. This parameter can only be set at server start.

When running a standby server, you must set this parameter to have the same or higher value as on the primary server. Otherwise, queries will not be allowed in the standby server.

Unless you read academic papers every day, you'll need a few minutes to understand what's written here.

Let's take even more time to understand every single detail, and also cover what this description leaves out.
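As a quick illustration, the total capacity of the shared lock table follows from the formula in the docs: max_locks_per_transaction * (max_connections + max_prepared_transactions). A sketch you can run on any Postgres instance:

```sql
show max_locks_per_transaction;  -- default: 64

-- Total slots in the shared lock table, shared by all transactions:
select current_setting('max_locks_per_transaction')::int
     * (current_setting('max_connections')::int
      + current_setting('max_prepared_transactions')::int) as lock_table_slots;
```

With default settings (64, 100, and 0 respectively), this gives 6400 slots – which is why individual transactions can lock more than 64 objects, as long as the total fits.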

· 3 min read
Nikolay Samokhvalov

In 2023-2024, after incidents in which multiple PostgresAI customers had production nodes go down because of LWLock:LockManager contention, we studied the problem in synthetic environments.

At that time, we managed to reproduce the issue only on large machines – ~100 or more vCPUs.

With the PG18 release, this question started to bother me again: can we experience LWLock:LockManager contention on smaller machines?

Denis Morozov just published results of benchmarks that successfully reproduce LWLock:LockManager contention in PG18 on 16-vCPU VMs.

As before, we took standard pgbench with -s100 (no partitioning) and ran the standard "--select-only" workload, which SELECTs random rows from "pgbench_accounts". Initially, the table has only 1 index, so at planning time Postgres locks 2 relations with AccessShareLock; then we add one more index at each step.
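The per-step setup can be sketched like this (the index names and columns are illustrative, not necessarily the ones used in the benchmark):

```sql
-- pgbench -i -s100 creates pgbench_accounts with a single PK index,
-- so a --select-only query locks 2 relations (table + PK) at planning time.
-- Each benchmark step adds one more index, i.e. one more AccessShareLock
-- per query at planning time:
create index pgbench_accounts_idx_1 on pgbench_accounts (abalance);
create index pgbench_accounts_idx_2 on pgbench_accounts (bid);
-- ...and so on, one index per step.
```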

· 7 min read
Nikolay Samokhvalov

After 2-003, @ninjouz asked on X:

If fast-path locks are stored separately, how do other backends actually check for locks?

The answer reveals why fast-path locking is so effective, and why PG 18's improvements matter so much in practice (see lmgr/README, the part called "Fast Path Locking").

Remember from 2-002: when you SELECT from a table, Postgres locks not just the table but ALL its indexes with AccessShareLock during planning. All of these locks go into shared memory, protected by LWLocks. On multi-core systems doing many simple queries (think PK lookups), backends constantly fight over the same LWLock partition. Classic bottleneck.

Instead of always going to shared memory, each backend gets its own private array to store a limited number of "weak" locks (AccessShareLock, RowShareLock, RowExclusiveLock).
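You can observe which of your own locks took the fast path via the fastpath column in pg_locks. A sketch (pgbench_accounts is an assumed table; any table with indexes works):

```sql
begin;
select 1 from pgbench_accounts limit 1;

-- Weak locks stored in the per-backend fast-path array show fastpath = true:
select relation::regclass, mode, fastpath
from pg_locks
where pid = pg_backend_pid() and locktype = 'relation';

commit;
```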

· 3 min read
Nikolay Samokhvalov

As we discussed, Lock Manager manages heavyweight locks – various kinds of them (various modes, various levels of granularity). These locks are released only at the end of the transaction.

In the most trivial case, when you run a SELECT on a table, this table is locked with AccessShareLock – and not only the table, but all its indexes too, which happens at planning time (always, unless you use prepared statements). This is to protect against a concurrent DROP. All of these locks are released only when the transaction ends.

Information about pending and successful heavyweight lock acquisition attempts is stored in shared memory and can be displayed via the "pg_locks" view. Operations on this shared lock table are themselves protected by locks – but this time, lightweight locks (LWLocks), which are short-lived and quickly released.
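A minimal way to see this in action (t is a hypothetical table):

```sql
begin;
select count(*) from t;  -- AccessShareLock on t and all its indexes

-- Inside the same transaction, the heavyweight locks are still held:
select relation::regclass, mode, granted
from pg_locks
where pid = pg_backend_pid() and locktype = 'relation';

commit;  -- only now are they released
```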

· 7 min read
Nikolay Samokhvalov

Let's talk about relation-level locks and the various confusions, surprises, and things worth remembering in practice.

The key page in Postgres docs describing relation-level locks is here: https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-TABLES

This page in the docs is called "13.3. Explicit Locking", and it might cause confusion because it also talks about implicit locking (e.g., if you run DML or DDL, locks are acquired implicitly; while if you execute LOCK or SELECT ... FOR UPDATE, you explicitly request locks to be acquired). However, this might be just my own terminology bias.

This page has a useful "Table 13.2. Conflicting Lock Modes" that can help understand how a lock acquisition can be blocked by another, already acquired or pending (!) lock:

Table 13.2: Conflicting lock modes in Postgres

A possible confusion here is the word "row" used in some lock modes – we shouldn't think that those modes are row-level. They are still relation-level. There is a special concept of row-level locks; we'll dive into that separately. This confusion is covered in the docs, though:

Remember that all of these lock modes are table-level locks, even if the name contains the word "row"; the names of the lock modes are historical.

Back to the table above: there is a very useful transformation of it in an old blog post by Marco Slot, "PostgreSQL rocks, except when it blocks: Understanding locks" (2018, https://citusdata.com/blog/2018/02/15/when-postgresql-blocks/). It's incomplete, but it covers common operations and can be used as a quick reference:

Lock conflicts for common Postgres operations
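For example, one of the classic conflicts from the table can be reproduced in two sessions (t is a hypothetical table):

```sql
-- Session 1:
begin;
select count(*) from t;  -- acquires AccessShareLock on t, held until COMMIT

-- Session 2 (blocks: ALTER TABLE needs AccessExclusiveLock,
-- which conflicts with every other mode, including AccessShareLock):
alter table t add column c int;

-- Session 1:
commit;  -- session 2's ALTER TABLE can now proceed
```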

Next, another terminology confusion might come from reading the section "13.3.1. Table-Level Locks". Let's not be confused: it talks not only about tables, but about relations in general.

In Postgres, the word "relation" can refer to many kinds of objects, as we can see in the docs for pg_class.relkind (https://postgresql.org/docs/current/catalog-pg-class.html):

r = ordinary table, i = index, S = sequence, t = TOAST table, v = view, m = materialized view, c = composite type, f = foreign table, p = partitioned table, I = partitioned index
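To see which relation kinds exist in your own database, a quick sketch:

```sql
select relkind, count(*)
from pg_class
group by relkind
order by count desc;
```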

· 2 min read
Nikolay Samokhvalov

To warm up, let's talk about lightweight and heavyweight locks (or "regular locks" or just "locks").

I'm using these materials:

· 5 min read
Nikolay Samokhvalov

I'm excited to announce that Postgres AI has started work on a new project – open-source Self-Driving Postgres (SDP).

In the AI era, Postgres is the natural choice for AI builders. With fast-growing database clusters, the highest level of automation is essential. AI-driven growth demands efficient, proactive, and intelligent database management. Our goal is to reduce manual interventions as much as possible to achieve the highest level of operational efficiency and reliability.

· 7 min read
Bogdan Tsechoev

Preview environments are temporary deployment environments created for each pull request, offering major advantages over traditional shared staging environments. While platforms like Vercel (paid) and Coolify (open-source) solve application deployment, the database remains the bottleneck. Teams typically compromise: sharing one database (causing conflicts), deploying small test databases (lacking realistic data), or cloning large production databases (which takes hours and is costly). DBLab 4.0's database branching solves this with O(1) economics, spinning up isolated production-scale Postgres clones in seconds for cost-effective full-stack previews.

· 5 min read
Bogdan Tsechoev
Nikolay Samokhvalov

This is Day 3 of Postgres AI launch week

It starts innocently enough.

You choose Postgres – solid, reliable, battle-tested. You pick a managed service like RDS or CloudSQL. They handle backups, high availability, disaster recovery. You can focus on building your product. Life is good.

Your startup grows. Users love what you've built. Data accumulates – gigabytes become terabytes. Traffic surges – hundreds of requests become thousands per second.

Then one day, everything changes.

Queries that ran in milliseconds now take seconds. Connection pools max out during peak hours. Replication lag appears out of nowhere. Your perfectly fine database suddenly isn't fine at all.

You reach out to your managed service support. The response? A generic checklist. "Have you tried increasing your instance size?" Days pass. Your users complain. Your team scrambles. The support ticket remains open, unhelpful.

This is when you realize: nobody cares about your database health as much as you do.

This is exactly when it's time to engage Postgres AI.

For 5+ years, the Postgres AI team has been rescuing companies from exactly this situation. Our clients include GitLab, Miro, Chewy, Midjourney, ClickUp, Photoroom, Gamma, Suno, and Supabase – all of whom faced a lot of database challenges during hypergrowth.

Our approach: comprehensive health checks using a methodology refined over hundreds of engagements. We call it Postgres AI Checkup – it identifies current issues and predicts future ones before they cripple your business.

Last year alone, we conducted 30+ deep checkups for 20+ companies. Today, we're launching this as a scalable service, enhanced with AI automation while keeping human experts at the core.

Start your first checkup: Console.Postgres.ai