DBLab integration
DBLab Engine enables testing Postgres copilot's recommended fixes on full clones of your production database before applying them. This guide explains how to set up and use DBLab with Postgres copilot.
Why use DBLab with Postgres copilot?
When Postgres copilot recommends a fix like CREATE INDEX ..., you want to know:
- How long will it take on production data?
- Will it block other operations?
- Does it actually improve query performance?
DBLab answers these questions by letting you test on a thin clone — a copy-on-write copy of your production database that behaves like a full, independent copy but is created in seconds and consumes almost no extra disk.
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Production │────▶│ DBLab │────▶│ Clone for │
│ database │ │ Engine │ │ testing │
└─────────────┘ └─────────────┘ └─────────────┘
│
▼
┌─────────────┐
│ Postgres │
│ copilot │
└─────────────┘
Prerequisites
- DBLab Engine installed and synced with your production database
- Postgres copilot configured with monitoring
Connecting DBLab to Postgres copilot
Step 1: Get your DBLab API token
In the PostgresAI Console:
- Navigate to your organization
- Go to DBLab Engines
- Select your DBLab instance
- Copy the API token
Step 2: Configure Postgres copilot to use DBLab
# Configure DBLab endpoint
./postgres_ai config set dblab.endpoint https://dblab.your-domain.com:2345
# Set API token
./postgres_ai config set dblab.token your_dblab_token
Step 3: Verify connection
./postgres_ai dblab status
You should see:
DBLab Engine: Connected
Version: 4.0.0
Data state: 2024-12-01T10:30:00Z (2 hours ago)
Available clones: 3/10
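If you script around `dblab status`, checking the data-state age first keeps test results meaningful — a stale snapshot gives stale timings. A minimal Python sketch (the parsing helper and the 24-hour threshold are our assumptions, not part of the CLI):

```python
from datetime import datetime, timezone

# Illustrative freshness check for the "Data state:" timestamp shown by
# `dblab status`. MAX_AGE_HOURS is an assumed threshold; pick one that
# matches how quickly your data changes.
MAX_AGE_HOURS = 24

def is_fresh(data_state_iso: str, now: datetime,
             max_age_hours: int = MAX_AGE_HOURS) -> bool:
    """Return True if the snapshot is newer than max_age_hours."""
    state = datetime.fromisoformat(data_state_iso.replace("Z", "+00:00"))
    age_hours = (now - state).total_seconds() / 3600
    return age_hours <= max_age_hours

now = datetime(2024, 12, 1, 12, 30, tzinfo=timezone.utc)
print(is_fresh("2024-12-01T10:30:00Z", now))  # 2 hours old -> True
```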
Testing fixes on clones
Automatic testing
When DBLab is connected, Postgres copilot automatically tests certain recommendations:
- Index creation: Measures time and checks for blocking
- Configuration changes: Verifies setting validity
- Query rewrites: Compares execution plans
Test results appear in the Issue:
TESTED ON DBLAB:
Clone: clone-abc123
Data state: 2024-12-01T10:30:00Z
Index creation time: 4m 23s
Blocking operations: None detected
Query improvement: 2.3s → 5ms (99.8% faster)
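The improvement figure in the report is simply the relative reduction in execution time. Checking the arithmetic for the numbers above in Python:

```python
def improvement_pct(before_ms: float, after_ms: float) -> float:
    """Percentage reduction in query execution time."""
    return (before_ms - after_ms) / before_ms * 100

# The Issue above reports 2.3 s -> 5 ms:
pct = improvement_pct(2300, 5)
print(f"{pct:.1f}% faster")  # 99.8% faster
```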
Manual testing
You can also create clones for manual testing:
# Create a clone for testing
./postgres_ai dblab clone create --name "test-issue-42"
# Get connection details
./postgres_ai dblab clone info test-issue-42
Connect and test:
psql postgresql://user:pass@dblab-host:6000/mydb
# Run the recommended fix
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders(customer_id);
# Verify improvement
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 123;
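When comparing runs before and after the fix, the `Execution Time` line that Postgres prints at the end of `EXPLAIN ANALYZE` output is easy to extract programmatically. A small sketch (the helper name is ours):

```python
import re

def execution_time_ms(explain_output: str) -> float:
    """Pull 'Execution Time: X ms' from EXPLAIN ANALYZE output."""
    m = re.search(r"Execution Time: ([\d.]+) ms", explain_output)
    if m is None:
        raise ValueError("no Execution Time line found")
    return float(m.group(1))

sample = """Index Scan using idx_orders_customer_id on orders ...
Planning Time: 0.210 ms
Execution Time: 4.981 ms"""
print(execution_time_ms(sample))  # 4.981
```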
Clean up when done:
./postgres_ai dblab clone destroy test-issue-42
Clone lifecycle for Issues
When Postgres copilot tests an Issue:
- Clone created: Fresh clone from latest snapshot
- Fix applied: Recommended change executed
- Verification: Query performance tested
- Results recorded: Timing and metrics saved to Issue
- Clone destroyed: Resources freed
Configuration options
Testing thresholds
# Maximum time to wait for clone creation
./postgres_ai config set dblab.clone_timeout 300s
# Maximum clone lifetime for testing
./postgres_ai config set dblab.clone_max_age 1h
Automatic testing scope
Control which recommendations are automatically tested:
# Test all index recommendations
./postgres_ai config set testing.indexes true
# Skip testing config changes
./postgres_ai config set testing.config false
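Conceptually, these flags act as a per-category gate on automatic testing. A hypothetical sketch of that logic (the `rewrites` key is our invention for illustration; only `testing.indexes` and `testing.config` appear in the CLI above):

```python
# Illustrative mapping of testing.* settings to recommendation categories.
testing_scope = {
    "indexes": True,   # testing.indexes
    "config": False,   # testing.config
    "rewrites": True,  # hypothetical key for query rewrites
}

def should_auto_test(recommendation_type: str) -> bool:
    """Unknown categories default to not being auto-tested."""
    return testing_scope.get(recommendation_type, False)

print(should_auto_test("indexes"))  # True
print(should_auto_test("config"))   # False
```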
Viewing test results
In the Console
Test results appear on each Issue:
- Go to Issues
- Select an Issue
- Scroll to "Testing" section
Via API
./postgres_ai issues view <issue-id> --include-tests
Best practices
- Keep DBLab synced: More recent data means more accurate tests
- Use realistic snapshots: Schedule sync during representative load periods
- Monitor clone usage: Don't exhaust clone capacity during testing
- Review timing results: Timing measured on a clone may differ from production
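To monitor clone usage, the "Available clones: 3/10" field from `dblab status` can be parsed in a script. A small sketch (the helper and the 20% threshold are assumptions, not part of the CLI):

```python
# Illustrative capacity check for the "Available clones: free/total" field.
def free_fraction(status_field: str) -> float:
    """'3/10' -> 0.3 (fraction of clone slots still free)."""
    free, total = (int(x) for x in status_field.split("/"))
    return free / total

if free_fraction("3/10") < 0.2:  # assumed warning threshold
    print("warning: clone capacity nearly exhausted")
```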
Troubleshooting
Clone creation fails
# Check DBLab status
./postgres_ai dblab status
# View DBLab logs
./postgres_ai dblab logs
Common causes:
- Disk space exhausted
- Maximum clones reached
- Network connectivity issues
Test results differ from production
Clone testing provides estimates. Differences may occur due to:
- Different hardware resources
- Cache state variations
- Concurrent workload differences