
· 3 min read

Database Lab Engine for AWS Marketplace. Fast, fixed-cost branching for your RDS Postgres is just a step away

I'm pleased to announce the very first preview version of Database Lab Engine (DLE) for AWS Marketplace. If you're using AWS, this is the fastest way to get powerful database branching for any database, including RDS and RDS Aurora. And not only RDS: any Postgres or Postgres-compatible database is supported as a source for DLE.

Now, for a fixed price (paying just for one EC2 instance and an EBS volume), you can have dozens of DB clones provisioned in seconds, delivering independent databases for your Git branches, CI/CD pipelines, and manual optimization and testing activities.

Achieving the lowest entry barrier for new DLE users remains one of our primary goals. In addition to the tutorials (we have several, including one for RDS users) and the Terraform module template for DLE, we now offer a way to install DLE on AWS using only a web browser.

What's included:

  • DLE (same as Community Edition; the latest minor versions of DLE are available)
  • Automated data provisioning at the logical level from a live Postgres database (can be any Postgres, version 9.6 or newer)
  • Automated refresh using two or more snapshots on a single (larger) EBS volume, with a configurable full refresh schedule (read about DLE's automated full refresh on schedule in the docs; note that in the AWS Marketplace version, we use a single ZFS pool but two or more datasets – more on request – to enable work with multiple full snapshots of the database, so during a full refresh, users can still work with clones created for the snapshot that is not being refreshed at the moment)
  • The full refresh schedule can be defined when the DLE instance is created
  • UI, CLI, and API to work with any number of thin clones – as usual
  • Optional generation of certificates and access to UI and API via HTTPS (via Envoy proxy), as well as access to the Postgres clones created by DLE users
  • Guaranteed support via one of the available channels

What's not yet included in this "preview" version:

  • Physical mode (for those who manage Postgres themselves)
  • Many advanced DLE configuration options are not available in the AWS Marketplace / CloudFormation interface; however, they can still be adjusted once the instance is created (most of them can be changed without a DLE restart – see the docs)

To start, please read the documentation: "How to install DLE from the AWS Marketplace". Below you can watch a 3-minute video demonstrating the setup process.

If you have any questions, use this page to contact us: https://postgres.ai/contact.


Nikolay Samokhvalov

CEO & Founder of Postgres.ai

Working on tools to balance Dev with Ops in DevOps

Database Lab by Postgres.ai

An open-source experimentation platform for PostgreSQL databases. Instantly create full-size clones of your production database and use them to test your database migrations, optimize SQL, or deploy full-size staging apps.

· 26 min read

Stepping on a 'rake db:migrate'

'rake db:migrate' – a command known to any Ruby developer. But how many times did we step on the same rake?

In his article "Lesser Known PostgreSQL Features", @be_haki describes 18 Postgres features many people don't know. I enjoyed that article, and it inspired me to write about "anti-features" – things that everyone should avoid when working in probably the riskiest field of application development – so-called "schema migrations".

This is one of my favorite topics in the field of relational databases. We all remember how MongoDB entered the stage with two clear messages: "web-scale" (let's have sharding out-of-the-box) and "schemaless" (let's avoid designing schemas and allow full flexibility). In my opinion, both buzzwords are an oversimplification, but if you have experience in reviewing and deploying schema changes in relational databases, you probably understand the level of difficulty, risks, and pain of scaling the process of making schema changes. My personal score: 1000+ migrations designed/reviewed/deployed during 17+ years of using Postgres in my own companies and when consulting others such as GitLab, Chewy, Miro. Here I'm going to share what I've learned, describing some mistakes I've made or observed – so probably next time you'll avoid them.

Moreover, a strong desire to help people avoid such mistakes led me to invent the Database Lab Engine – a technology for thin cloning of databases, essential for development and testing. With it, you can clone a 10 TiB database in 10 seconds, test schema changes, and understand the risks before deployment. Most cases discussed in this article can be easily detected by such testing, and it can be done automatically in CI/CD pipelines.

As usual, I'll be focusing on OLTP use cases (mobile and web apps), for which query execution that exceeds 1 second is normally considered too slow. Some cases discussed here are hard to notice in small databases with low activity. But I'm pretty confident that you'll encounter most of them when your database grows to ~10 TiB in size and its load reaches ~10⁵–10⁶ transactions per second (of course, some cases will be seen – unless deliberately prevented – much, much earlier).
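To give one concrete flavor of such mistakes right away (a minimal sketch with hypothetical table and index names, not an excerpt from the article): building an index without CONCURRENTLY blocks writes to the table for the whole duration of the build, which at this scale can easily mean minutes of partial downtime.

    -- blocks INSERT/UPDATE/DELETE on "orders" until the build finishes
    create index i_orders_customer_id on orders (customer_id);

    -- the OLTP-friendly variant; note it cannot run inside a transaction block
    create index concurrently i_orders_customer_id on orders (customer_id);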

I advise you to read GitLab's great documentation – their Migration Style Guide is full of wisdom written by those who have experience in deploying numerous Postgres schema changes in a fully automated fashion to a huge number of instances, including GitLab.com itself.

I also encourage everyone to watch PGCon-2022 – one of the key Postgres conferences; this time, it's happening online again. On Thursday, May 26, I'll give two talks, and one of them is called "Common DB schema change mistakes"; you can find the slide deck here. If you miss it, no worries – @DLangille, who has organized the conference since 2006 (thank you, Dan!), promises to publish talk videos in a few weeks.

· 3 min read

Database Lab Engine 3.1 by Postgres.ai: pgBackRest, timezones for CLI, DLE community, more

The Postgres.ai team is happy to announce the release of version 3.1 of Database Lab Engine (DLE), the most advanced open-source software ever released that empowers development, testing, and troubleshooting environments for fast-growing projects. The use of Database Lab Engine 3.1 provides a competitive advantage to companies by implementing the "Shift-left testing" approach in software development.

Database Lab Engine is an open-source technology that enables thin cloning for PostgreSQL. Thin clones are exceptionally useful when you need to scale the development process. DLE can manage dozens of independent clones of your database on a single machine, so each engineer or automation process works with their own database provisioned in seconds without extra costs.

In this release, the development team has also focused on the Database Lab Engine community, making it easier to get help or contribute. The team welcomes all new contributors: @Nikolay Devxx, @asotolongo, @Tanya301, @denis-boost, @pietervincken, @ane4ka

caution

Action required to migrate from a previous version. If you are running DLE 3.0 or older, to upgrade to DLE 3.1, please read the Migration notes.

In DLE 3.1:

  • Native support for pgBackRest as a tool to restore data from archives (physical mode, including continuously updated state), in addition to the existing support of WAL-G
  • Allow configuring timezone in DLE CLI configuration to improve the experience of using DLE in CI/CD pipelines
  • Improved README.md, translated to four languages, added CONTRIBUTING.md, SECURITY.md, and CODE_OF_CONDUCT.md
  • Many improvements in the engine and UI to streamline work in both logical and physical modes

Community news:

  • 🌠 DLE repository on GitHub now has 1,100+ stars; many thanks to everyone who supports the project in any way
  • 💥 Pieter Vincken has published a blog post describing their experience of using DLE: "Testing with production data made easy"
  • 📈 The Twitter account has reached 400 followers – please follow @Database_Lab
  • 🎉 DLE now has 15 contributors. More contributions are welcome! See "good first issues"
  • 🥇 Please consider various ways to contribute – read CONTRIBUTING.md

Further reading

tip

To get help, reach out to the Postgres.ai team and the growing community of Database Lab users and contributors: https://postgres.ai/contact.

Request for feedback and contributions

Feedback and contributions would be greatly appreciated:

Like Database Lab? Give us a GitHub star: https://github.com/postgres-ai/database-lab.


· 12 min read

Progress bar for Postgres queries – let's dive deeper

Recently, I read a nice post titled "Query Progress Bar", by Brian Davis. It describes an interesting approach to observing the progress of slow query execution.

At some point, the author mentions:

Don't use this in prod.

And I agree. The article discusses long-running queries such as SELECTs, UPDATEs, DELETEs, and quite "invasive" methods of progress monitoring. In an OLTP production scenario, in most cases, we should try to limit the duration of such queries, setting statement_timeout to a very low value – such as 30 or even 15 seconds.
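For illustration, here is a minimal sketch of how such a limit can be applied (the role name is hypothetical; the right value and scope depend on your workload):

    -- limit statement duration for the current session
    set statement_timeout = '15s';

    -- or make it the default for an application role
    alter role app_user set statement_timeout = '30s';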

Let's dive deeper into the topic of query progress monitoring and discuss various types of queries and how to monitor their progress, considering production and non-production environments separately.
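As a quick teaser of what is covered below: Postgres itself ships with several pg_stat_progress_* views for specific operations. Here is a minimal sketch for an index build (PostgreSQL 12+; only a few of the view's columns are shown):

    -- watch a CREATE INDEX running in another session
    select
        pid,
        phase,
        blocks_done,
        blocks_total,
        round(100.0 * blocks_done / nullif(blocks_total, 0), 1) as pct
    from pg_stat_progress_create_index;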

· 22 min read

Jupiter's moon Io. Credit: ALMA (ESO/NAOJ/NRAO), I. de Pater et al.; NRAO/AUI NSF, S. Dagnello; NASA/JPL/Space Science Institute

SQL query optimization is challenging for those who have just started working with PostgreSQL. There are many objective reasons for this, such as:

  • the difficulty of the field of system performance in general,
  • lack of good "playground" environments where people can experience how databases work at a larger scale,
  • lack of certain capabilities in Postgres observability tools that are still developing (though, at a good pace),
  • insufficiency of good educational materials.

All these barriers are reasonable. They limit the number of engineers possessing well-developed Postgres query optimization skills. However, there is a specific artificial barrier that is rather influential and which is relatively easy to eliminate.

Here it is: the EXPLAIN command has the BUFFERS option disabled by default. I am sure it has to be enabled and used by everyone who needs to do some SQL optimization work.
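For example (the table and filter here are purely illustrative):

    -- ANALYZE executes the query; BUFFERS adds buffer hits, reads,
    -- and writes to every node of the execution plan
    explain (analyze, buffers)
    select *
    from orders
    where customer_id = 123;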

· 7 min read

Database Lab Engine 3.0 by Postgres.ai: UI, persistent clones, PostgreSQL 14, more

note

Action required to migrate from a previous version. If you are running DLE 2.5 or older, please read carefully and follow the Migration notes.

What's new in DLE 3.0?

The Postgres.ai team is happy to announce the release of version 3.0 of Database Lab Engine (DLE), the most advanced open-source software ever released that empowers development, testing, and troubleshooting environments for fast-growing projects. The use of Database Lab Engine 3.0 provides a competitive advantage to companies by implementing the "Shift-left testing" approach in software development.

Database Lab Engine is an open-source technology that enables thin cloning for PostgreSQL. Thin clones are exceptionally useful when you need to scale the development process. DLE can manage dozens of independent clones of your database on a single machine, so each engineer or automation process works with their very own database provisioned in seconds without extra costs.

Among major changes in DLE 3.0:

  • UI included in the core; it allows working with a single DLE instance,
  • persistent clones: clones now survive DLE (or VM) restart,
  • for the "logical" data provisioning mode: the ability to switch reset clone's state using a snapshot from different pool/dataset,
  • better logging and configuration simplicity,
  • improvements for the cases when multiple DLEs are running on a single machine,
  • PostgreSQL 14 support.

Starting with version 3.0.0, DLE collects non-personally identifiable telemetry data. This feature is enabled by default but can be switched off. Read more in the DLE documentation. Keeping telemetry enabled can be considered your contribution to DLE development because it helps us make decisions as the open-source product evolves.

Below, we discuss the most requested changes implemented in DLE 3.0 – all of them are based on real-life user experience and invaluable feedback from the growing community of users and contributors.

· 8 min read

EXISTS OR NOT EXISTS (meme)

What is this about?

Many DDL statements in PostgreSQL support modifiers IF EXISTS / IF NOT EXISTS. For example:

    test=# create table if not exists mytable();
    CREATE TABLE
    test=# drop table if exists mytable;
    DROP TABLE

I recommend using IF EXISTS / IF NOT EXISTS in DDL only when necessary. Here are three examples that demonstrate how the overuse of these words may lead to negative consequences.
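As a quick preview of the kind of trouble discussed below (table and column names are made up): CREATE TABLE IF NOT EXISTS silently skips creation when a relation with that name already exists, even if its definition is completely different.

    -- somebody created this table manually long ago
    create table items (id int8 primary key);

    -- a later migration assumes it creates the "right" table
    create table if not exists items (
        id      int8 primary key,
        payload jsonb not null
    );
    -- NOTICE:  relation "items" already exists, skipping
    -- the migration "succeeds", but the payload column is missing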

· 17 min read

How partial, covering, and multicolumn indexes may slow down UPDATEs in PostgreSQL

Based on a true story.

This article was originally published in 2018. This is a reviewed and extended version of it. The discussed findings can be applied to any currently supported major version of PostgreSQL.

Primum non nocere

"Primum non nocere" – this is a fundamental principle that is well-known to anyone working in healthcare: "first, do no harm". It is a reminder: when considering any action that is supposed to improve something, we always need to look at the global picture to see if there might be something else that be damaged by the same action.

This is a great principle and it is used not only in healthcare, of course. I strongly believe that it has to be used in database optimization too, and we need better tools to make it happen.
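To give a flavor of the problem before diving in (a minimal sketch; table, column, and index names are hypothetical): a column referenced only in a partial index predicate still disqualifies UPDATEs that touch it from the HOT (heap-only tuple) optimization, so every such UPDATE has to maintain all indexes on the table.

    create table tasks (
        id          int8 primary key,
        in_progress boolean not null default false,
        payload     text
    );

    -- "in_progress" appears only in the predicate, not in the key columns
    create index i_tasks_in_progress on tasks (id) where in_progress;

    -- this UPDATE cannot be HOT because it changes a column
    -- referenced by the partial index predicate
    update tasks set in_progress = true where id = 1;

    -- compare regular vs HOT update counters
    select n_tup_upd, n_tup_hot_upd
    from pg_stat_user_tables
    where relname = 'tasks';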

· 2 min read

The Database Lab Platform now supports the ability to reset a clone's state using any available data snapshot on the Database Lab Engine (DLE) instance. It means that now you can get fresh data into your clone very quickly (a few seconds, as usual), keeping all the credentials the same – including the port. Previously, one would need to create a new clone to get new data, which implied obtaining new database credentials, most likely including a new port value – not always convenient.

It might be helpful in cases when long-lived clones need to be refreshed to get the most up-to-date data from the source (usually, production), or when troubleshooting a non-trivial issue requires the use of a specific database version from the past (a specific point in time).

Reset options demo

· 6 min read

Example output of the query for 'lock trees' analysis

For OLTP workloads (such as web and mobile applications), it is important to understand object-level and row-level locks in PostgreSQL. There are several good materials that I can recommend reading: