plan-exporter: visualize PostgreSQL EXPLAIN data right from psql

If you love and use psql (like I do), you're equipped with a lot of power. However, when you want to visualize execution plans using services such as good old explain.depesz.com or the modern explain.dalibo.com, you have to deal with inconvenient copy-pasting.

To solve this problem, my colleague Artyom Kartasov has developed a small utility called plan-exporter. It allows sending EXPLAIN data with minimal effort.

To enable plan-exporter, use \o with a pipe:

\o | plan-exporter

After this, psql will start mirroring the output to plan-exporter. When plan-exporter detects EXPLAIN data, it offers to send it to a visualization service.

Both services mentioned above are supported and can be chosen with the --target option. The default is explain.depesz.com.
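
For example, a typical session could look like this (the query here is just an illustration; any EXPLAIN output piped through plan-exporter is handled the same way):

\o | plan-exporter --target=dalibo
explain (analyze, buffers) select * from accounts where id = 1;

After the plan is printed, plan-exporter offers to upload it to the chosen service.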

To reset, just use the \o command without parameters, and plan-exporter will stop receiving the data. If you want it to always be enabled when you start psql, consider adjusting your .psqlrc file:

echo '\o | plan-exporter --target=dalibo' >> ~/.psqlrc

Joe 0.7.0 released! New in this release: Web UI, Channel Mapping, and new commands

In addition to Slack integration, Joe Bot can now be integrated with the Postgres.ai Platform, providing a convenient Web UI for all developers who want to troubleshoot and optimize SQL efficiently. The secure and performant Web UI works in any modern browser (even mobile!) and brings more flexibility, 1:1 communication, and visualization options.

What's new in version 0.7.0#

  • [EE] Support Web UI integration with the Postgres.ai Platform (see our updated Joe Bot Tutorial for integration steps)
  • Extensible communication types: implement support for your favorite messenger
  • Channel Mapping: plug in as many databases as you want in one Database Lab instance
  • [EE] Support multiple Database Lab instances in parallel
  • New commands to monitor current activity and terminate long-lasting queries
  • Flexible Bot configuration: various convenient options are available in one place
  • Permalinks: when integrated with the Postgres.ai Platform, Joe responses contain links to a detailed analysis of SQL execution plans, with three visualization options (FlameGraphs, PEV2 by Dalibo, and good old "explain.depesz.com", all embedded into the Platform)

The full list of changes can be found in the Changelog. Can't wait to try it!

Web UI communication type#

Originally, only the Slack version of Joe Bot was publicly available. Today, we are excited to announce that there are two available types of communication with Joe:

  • Slack
  • Web UI on the Postgres.ai Platform

The good news is that you can use both of them in parallel.

Thanks to the recent refactoring of the Joe codebase, and the fact that this codebase is open source, you can develop and add support for any messenger. Feel free to open issues to discuss the implementation and merge requests to include the code in the main Joe Bot repository. See also: communication channels issues, and discussions in our Community Slack.

Check the Platform Overview to discover all the advantages of using the Web UI on the Postgres.ai Platform.

Postgres.ai Console

The Joe Bot Tutorial has been updated and now explains how to set up both the Slack and Web UI versions: https://postgres.ai/docs/tutorials/joe-setup.

🚀 Note that currently, the Postgres.ai Platform is working in "Closed Beta" mode. During this period, we activate accounts on Postgres.ai only after a 30-minute video call with a demo session and screen sharing. Feel free to join https://postgres.ai using your Google/GitLab/GitHub/LinkedIn account, but allow some time while we process your registration and reach out to organize a demo session.

Channel Mapping#

Often, an infrastructure is not limited to a single database. In addition, we may want to work with different communication types: some people are comfortable with Slack, while others prefer the Web UI.

Does this mean that we have to run multiple Joe instances? Starting with version 0.7.0, the answer is no.

Thanks to Channel Mapping, you can easily use multiple databases and communication types. Moreover, in the Enterprise Edition, you can configure multiple Database Lab instances.

Check all configuration options in the docs to see how channel mapping can be configured.
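
To give a rough idea of the concept, a channel mapping configuration might look something like the sketch below. The option names here are purely illustrative (assumptions made for this example, not the authoritative schema); always refer to the configuration reference in the docs:

# Illustrative sketch only; option names are assumptions, not the real schema.
channelMapping:
  communicationTypes:
    slack:                              # one communication type...
      - credentials:
          accessToken: "xoxb-..."
        channels:
          - channelID: "C0XXXXXXXXX"    # Slack channel
            dblabInstance: "dblab-app"  # which Database Lab instance to use
            dbname: "app_db"
    webui:                              # ...and another one, working in parallel
      - channels:
          - channelID: "web-analytics"
            dblabInstance: "dblab-analytics"
            dbname: "analytics_db"

The point is that each channel is mapped to its own database (and, in the Enterprise Edition, its own Database Lab instance), while a single Joe instance serves all of them.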

New commands: activity and terminate#

Imagine we have a PostgreSQL database with the following tables (\d+ output):

List of relations
Schema | Name | Type | Owner | Size | Description
--------+------------------+-------+----------+---------+-------------
public | pgbench_accounts | table | postgres | 171 GB |
public | pgbench_branches | table | postgres | 624 kB |
public | pgbench_history | table | postgres | 512 kB |
public | pgbench_tellers | table | postgres | 6616 kB |
(4 rows)

We run a query and realize that it will take a long time:

explain select from pgbench_accounts where bid = 100;
Plan without execution:
Gather (cost=1000.00..29605361.74 rows=118321 width=0)
Workers Planned: 2
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..29592529.64 rows=49300 width=0)
Filter: (bid = 100)
JIT:
Functions: 3
Options: Inlining true, Optimization true, Expressions true, Deforming true

What can we do if we don't want to waste our time? Version 0.7.0 adds new commands to control running queries.

The activity command shows currently running sessions in Postgres for the following states: active, idle in transaction, and disabled. We can easily discover the current clone activity using this command:

Activity response:
PID | QUERY | STATE | BACKEND TYPE | WAIT EVENT | WAIT EVENT TYPE | QUERY DURATION | STATE CHANGED AGO
------+--------------------------------+--------+-----------------+--------------+-----------------+-----------------+--------------------
20 | EXPLAIN (ANALYZE, COSTS, | active | client backend | DataFileRead | IO | 00:10:06.738739 | 00:10:06.738783
| VERBOSE, BUFFERS, FORMAT JSON) | | | | | |
| select from pgbench_accounts | | | | | |
| where bid = 100... | | | | | |
29 | EXPLAIN (ANALYZE, COSTS, | active | parallel worker | DataFileRead | IO | 00:10:06.738798 | 00:10:06.698513
| VERBOSE, BUFFERS, FORMAT JSON) | | | | | |
| select from pgbench_accounts | | | | | |
| where bid = 100... | | | | | |
28 | EXPLAIN (ANALYZE, COSTS, | active | parallel worker | DataFileRead | IO | 00:10:06.738807 | 00:10:06.705874
| VERBOSE, BUFFERS, FORMAT JSON) | | | | | |
| select from pgbench_accounts | | | | | |
| where bid = 100... | | | | | |

It shows that we have a long-running query. So, in case we don't want to wait for it to finish, a new terminate command will help us with this. The command terminates the Postgres backend that has the specified PID. As we can see above, the client backend process has PID 20. Therefore, let's run terminate 20:

Terminate response:
PG TERMINATE BACKEND
------------------------
true

Success. Now let's repeat the activity command:

Activity response:
No results.

We can also notice that the previously running query has been stopped:

explain select from pgbench_accounts where bid = 100;
ERROR: FATAL: terminating connection due to administrator command (SQLSTATE 57P01)

That's a very powerful tool!
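
Under the hood, these commands cover what you would otherwise do manually in psql. As a rough sketch (not necessarily the exact queries Joe runs), the same effect can be achieved with pg_stat_activity and pg_terminate_backend():

-- roughly what "activity" reports (sketch; Joe's actual query may differ)
select pid,
       left(query, 60) as query,
       state,
       backend_type,
       wait_event_type,
       wait_event,
       clock_timestamp() - query_start  as query_duration,
       clock_timestamp() - state_change as state_changed_ago
  from pg_stat_activity
 where state in ('active', 'idle in transaction', 'disabled');

-- roughly what "terminate 20" does: stop the backend with PID 20
select pg_terminate_backend(20);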

See the full list of Joe's commands in the docs.


Joe 0.6.0 supports hypothetical indexes

Joe's new hypo command to further boost development processes#

Building indexes on large tables may take a long time. The new release of Joe Bot includes the ability to get a sneak peek of the SQL query plan, using hypothetical indexes, before proceeding to actually build large indexes.

A hypothetical index is an index that doesn't exist on disk. Therefore, it doesn't cost I/O, CPU, or any other resource to create, which means that such indexes are created almost instantly.

With the brand-new hypo command, you can create hypothetical indexes with Joe and verify that PostgreSQL would use them. Once that's done, you can use exec to build the actual indexes (in some cases, you'll need to wait several hours for this) and see the actual plan in action.

Note that since the command works on top of the HypoPG extension, your Database Lab instance has to use a Docker image for Postgres that contains HypoPG, because this extension is not a part of the core PostgreSQL distribution. For convenience, we have prepared images with HypoPG (and some other extensions) included for Postgres versions 9.6, 10, 11, and 12. Of course, you can always use your own custom image.
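
If you are curious what this looks like at the SQL level, here is a minimal sketch of using HypoPG directly (the hypo command builds on this extension, but the exact statements Joe issues may differ; the table name is simply taken from the demo below):

-- minimal HypoPG sketch (run in a clone that has the extension available)
create extension if not exists hypopg;

-- create a hypothetical index; nothing is written to disk
select * from hypopg_create_index('create index on pgbench_accounts (bid)');

-- EXPLAIN without ANALYZE can now consider the hypothetical index
explain select * from pgbench_accounts where bid = 1;

-- drop all hypothetical indexes
select hypopg_reset();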

To be able to see the plan without actual execution, we have added one more new command: plan. It is aware of hypothetical indexes, so if one is detected in the plan, it presents two versions of the plan, with and without HypoPG involved.

What's new in version 0.6.0#

Version 0.6.0 adds new commands to work with hypothetical indexes and to get a query plan without execution, major improvements in message processing, and more. The full list of changes can be found in the Changelog. Stay tuned!

Demo#

Joe demo

First, we need a running Database Lab instance that uses a Docker image with the HypoPG extension. Choose a custom Docker image in the Database Lab Engine configuration by specifying dockerImage in the config.yml of your Database Lab instance:

...
dockerImage: "postgresai/extended-postgres:12"
...
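
If you want to double-check that the chosen image really ships HypoPG, a quick sanity check from psql connected to a clone could be (this is plain Postgres, not a Joe command):

select name, default_version, installed_version
  from pg_available_extensions
 where name = 'hypopg';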

Let's see how to use hypothetical indexes with Joe. Generate a synthetic database using the standard PostgreSQL tool pgbench:

$ pgbench -i -s 10000 test

Check the size of the tables with \d+:

List of relations
Schema | Name | Type | Owner | Size | Description
--------+------------------+-------+----------+---------+-------------
public | pgbench_accounts | table | postgres | 171 GB |
public | pgbench_branches | table | postgres | 520 kB |
public | pgbench_history | table | postgres | 0 bytes |
public | pgbench_tellers | table | postgres | 5960 kB |
(4 rows)

Then, get a plan for a query that would benefit from an index that doesn't exist yet:

explain select * from pgbench_accounts where bid = 1;

The result is:

Plan with execution:
Gather (cost=1000.00..29605106.00 rows=118320 width=97) (actual time=770.623..3673842.642 rows=100000 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=64 read=22457314
-> Parallel Seq Scan on public.pgbench_accounts (cost=0.00..29592274.00 rows=49300 width=97) (actual time=748.869..3673654.643 rows=33333 loops=3)
Filter: (pgbench_accounts.bid = 1)
Recommendations:
SeqScan is used – Consider adding an index Show details
Query processes too much data to return a relatively small number of rows. – Reduce data cardinality as early as possible during the execution, using one or several of the following techniques: new indexes, partitioning, query rewriting, denormalization. See the visualization of the plan to understand which plan nodes are the main bottlenecks. Show details
Add LIMIT – The number of rows in the result set is too big. Limit number of rows. Show details
Summary:
Time: 61.231 min
- planning: 0.079 ms
- execution: 61.231 min
- I/O read: 0.000 ms
- I/O write: 0.000 ms
Shared buffers:
- hits: 64 (~512.00 KiB) from the buffer pool
- reads: 22457314 (~171.30 GiB) from the OS file cache, including disk I/O
- dirtied: 0
- writes: 0

This query takes an enormously long time. The recommendations suggest adding an index. Before building a real index, let's verify our index idea by instantly creating the corresponding hypothetical index, simply using hypo create index on pgbench_accounts (bid):

HypoPG response:
INDEXRELID | INDEXNAME
-------------+------------------------------------
24588 | <24588>btree_pgbench_accounts_bid

Check that the index has been created with hypo desc:

HypoPG response:
INDEXRELID | INDEXNAME | NSPNAME | RELNAME | AMNAME
-------------+-----------------------------------+---------+------------------+---------
24588 | <24588>btree_pgbench_accounts_bid | public | pgbench_accounts | btree

Get more details about the index, such as its estimated size and definition, with hypo desc 24588:

HypoPG response:
INDEXRELID | INDEXNAME | HYPOPG GET INDEXDEF | PG SIZE PRETTY
-------------+-----------------------------------+--------------------------------+-----------------
24588 | <24588>btree_pgbench_accounts_bid | CREATE INDEX ON | 1366 MB
| | public.pgbench_accounts USING |
| | btree (bid) |

It may be annoying and not really useful to wait seconds (or even minutes) for actual execution when we only deal with hypothetical index checks, so let's save even more time by using the plan command: plan select * from pgbench_accounts where bid = 1;

Joe's response will be:

Plan (HypoPG involved 👻):
Index Scan using <24588>btree_pgbench_accounts_bid on pgbench_accounts (cost=0.08..5525.68 rows=118320 width=97)
Index Cond: (bid = 1)
Plan without HypoPG indexes:
Gather (cost=1000.00..29605106.00 rows=118320 width=97)
Workers Planned: 2
-> Parallel Seq Scan on pgbench_accounts (cost=0.00..29592274.00 rows=49300 width=97)
Filter: (bid = 1)
JIT:
Functions: 2
Options: Inlining true, Optimization true, Expressions true, Deforming true

Perfect! The index works! It means we can reset the hypothetical index with hypo reset and create the real one using exec:

exec create index pgbench_accounts_bid on pgbench_accounts (bid);
Session: joe-bps8quk2n8ejes08vnhg
The query has been executed. Duration: 126.975 min

It's obvious that hypo and plan save developers an enormous amount of time!

See the full list of Joe's commands in the docs: https://postgres.ai/docs/joe-bot/commands-reference.
