MySQL to PostgreSQL migration assistant
Convert your MySQL schemas, queries, and stored procedures to PostgreSQL with AI-powered analysis
For people who are planning or actively executing a migration from MySQL (or MariaDB) to PostgreSQL and need help with data type mappings, syntax differences, stored procedure conversion, tool selection, and migration planning.
About this tool
Migrating from MySQL to PostgreSQL is one of the most common database migrations, but the two systems differ in far more ways than most teams expect. Beyond the obvious syntax differences — backticks vs. double quotes, LIMIT placement, AUTO_INCREMENT vs. SERIAL — there are fundamental differences in type systems, transaction semantics, stored procedure languages, character set handling, and default behaviors that can introduce subtle bugs if not handled carefully.
This tool provides expert-level guidance for every phase of a MySQL-to-PostgreSQL migration. It analyzes your MySQL schemas, queries, and stored procedures, then produces PostgreSQL-compatible equivalents with detailed explanations of each change. It covers the full spectrum of conversion challenges: data type mappings (TINYINT(1) to BOOLEAN, DATETIME to TIMESTAMPTZ, ENUM to PostgreSQL ENUM types or check constraints, MEDIUMTEXT/LONGTEXT to TEXT), identifier quoting rules, auto-increment conversion to GENERATED ALWAYS AS IDENTITY (preferred in modern PostgreSQL) or SERIAL, and MySQL-specific functions to their PostgreSQL equivalents.
Stored procedure and function conversion is where many migrations stall. MySQL uses a SQL/PSM-derived procedural language, while PostgreSQL uses PL/pgSQL — similar in concept but different in syntax, variable declaration, error handling, cursor behavior, and control flow. This tool translates MySQL stored procedures to idiomatic PL/pgSQL, handling DECLARE blocks, HANDLER to EXCEPTION conversion, DELIMITER removal, and MySQL-specific constructs like LEAVE and ITERATE labels.
The tool also advises on migration tooling. pgloader is often the fastest path for straightforward schemas — it handles schema conversion, data loading, and index creation in a single pass. For more complex scenarios, pg_chameleon provides real-time replication from MySQL to PostgreSQL, allowing cutover with minimal downtime. AWS Database Migration Service (DMS) is an option for cloud-hosted databases. The tool helps you choose the right approach based on your database size, downtime tolerance, and complexity.
Character set and collation handling deserves special attention. MySQL databases often mix latin1, utf8 (which is actually 3-byte UTF-8, not full Unicode), and utf8mb4 across tables and columns. PostgreSQL uses a single encoding per database (almost always UTF-8) with ICU or libc collations. The tool identifies potential encoding issues before they become data corruption problems during migration.
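To spot mixed character sets before they bite, you can inventory column encodings on the MySQL side with a query along these lines (a minimal sketch; 'your_database' is a placeholder for your schema name):
-- MySQL: find columns that are not utf8mb4 before migrating
SELECT table_name, column_name, character_set_name, collation_name
FROM information_schema.columns
WHERE table_schema = 'your_database'
  AND character_set_name IS NOT NULL
  AND character_set_name <> 'utf8mb4';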
Trigger conversion is another area where the differences run deep. MySQL triggers are defined inline with SQL statements, while PostgreSQL requires a separate trigger function written in PL/pgSQL (or another procedural language) that the trigger then invokes. MySQL's ON UPDATE CURRENT_TIMESTAMP clause — commonly used for updated_at columns — has no PostgreSQL equivalent and must be implemented as a trigger function. The tool generates complete, production-ready trigger functions for these patterns.
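For reference, the updated_at pattern typically converts along these lines (a sketch using the users table from the example below; EXECUTE FUNCTION requires PostgreSQL 11+, older versions use EXECUTE PROCEDURE):
-- PostgreSQL replacement for ON UPDATE CURRENT_TIMESTAMP
CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_set_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION set_updated_at();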
Testing is critical to a successful migration. The tool recommends a validation strategy that includes row count comparisons, checksum verification for critical columns, application-level integration tests against the new PostgreSQL database, and performance benchmarking of your most important queries. It also highlights behavioral differences that automated tests might miss — MySQL's default case-insensitive string comparisons vs. PostgreSQL's case-sensitive behavior, MySQL's historically permissive GROUP BY (which allows selecting non-aggregated columns not in the GROUP BY clause when ONLY_FULL_GROUP_BY is disabled) vs. PostgreSQL's strict SQL standard enforcement, and differences in NULL handling in unique indexes.
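One simple, portable validation step is to run the same aggregate query on both databases and compare the output (a sketch using columns from the example schema below; adapt it per table):
-- Run on both MySQL and PostgreSQL, then diff the results
SELECT COUNT(*)              AS row_count,
       COUNT(DISTINCT email) AS distinct_emails,
       MIN(created_at)       AS min_created,
       MAX(created_at)       AS max_created,
       SUM(login_count)      AS total_logins
FROM users;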
For best results, provide your MySQL CREATE TABLE statements, any stored procedures or triggers you need to convert, and the queries your application runs. The more context you provide, the more precise the conversion will be.
- Migration involves DDL operations — always test on a staging environment before applying to production.
- Never drop or modify MySQL source tables until the PostgreSQL migration is fully validated with row counts and data checksums.
- AUTO_INCREMENT to IDENTITY/SERIAL conversion requires resetting each sequence to the current max value after data import to avoid primary key conflicts (see the sketch after this list).
- Data type narrowing (e.g., UNSIGNED BIGINT to BIGINT) can cause data loss — validate value ranges before migration.
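A hedged sketch of the sequence reset for the users example (pg_get_serial_sequence resolves the sequence behind a SERIAL or identity column; run once per table after the data load):
-- PostgreSQL: align the id sequence with the imported data
SELECT setval(
    pg_get_serial_sequence('users', 'id'),
    COALESCE((SELECT MAX(id) FROM users), 1)
);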
Examples
-- MySQL source schema
CREATE TABLE users (
    id INT NOT NULL AUTO_INCREMENT,
    username VARCHAR(255) NOT NULL,
    email VARCHAR(255) NOT NULL,
    status ENUM('active', 'inactive', 'suspended') DEFAULT 'active',
    is_verified TINYINT(1) DEFAULT 0,
    profile_text MEDIUMTEXT,
    login_count INT UNSIGNED DEFAULT 0,
    last_login DATETIME,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (id),
    UNIQUE KEY idx_email (email),
    KEY idx_status_verified (status, is_verified)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
A typical MySQL table using AUTO_INCREMENT, ENUM, TINYINT(1) as a boolean, unsigned integers, MEDIUMTEXT, DATETIME without time zone, and ON UPDATE CURRENT_TIMESTAMP. The AI will convert each to the correct PostgreSQL equivalent: GENERATED ALWAYS AS IDENTITY, a PostgreSQL ENUM type, BOOLEAN, BIGINT (or INTEGER with a CHECK (login_count >= 0) constraint if the values are known to fit), TEXT, TIMESTAMP for DATETIME and TIMESTAMPTZ for the TIMESTAMP columns, and a trigger function for updated_at.
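For orientation, one possible converted schema looks roughly like this (a sketch, not the tool's literal output; the type name user_status and the BIGINT choice for the unsigned counter are illustrative decisions):
-- PostgreSQL target schema (one possible conversion)
CREATE TYPE user_status AS ENUM ('active', 'inactive', 'suspended');

CREATE TABLE users (
    id           INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    username     VARCHAR(255) NOT NULL,
    email        VARCHAR(255) NOT NULL,
    status       user_status DEFAULT 'active',
    is_verified  BOOLEAN DEFAULT FALSE,
    profile_text TEXT,
    login_count  BIGINT DEFAULT 0 CHECK (login_count >= 0),  -- INT UNSIGNED has no direct equivalent
    last_login   TIMESTAMP,
    created_at   TIMESTAMPTZ DEFAULT now(),
    updated_at   TIMESTAMPTZ DEFAULT now()                   -- kept current by a trigger, not ON UPDATE
);

CREATE UNIQUE INDEX idx_email ON users (email);
CREATE INDEX idx_status_verified ON users (status, is_verified);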
-- MySQL stored procedure
DELIMITER //
CREATE PROCEDURE transfer_funds(
    IN p_from_account INT,
    IN p_to_account INT,
    IN p_amount DECIMAL(15,2),
    OUT p_status VARCHAR(50)
)
BEGIN
    DECLARE v_balance DECIMAL(15,2);
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        SET p_status = 'ERROR';
    END;

    START TRANSACTION;

    SELECT balance INTO v_balance
    FROM accounts
    WHERE account_id = p_from_account
    FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
        WHERE account_id = p_from_account;
        UPDATE accounts SET balance = balance + p_amount
        WHERE account_id = p_to_account;
        INSERT INTO transfers (from_account, to_account, amount, transfer_date)
        VALUES (p_from_account, p_to_account, p_amount, NOW());
        SET p_status = 'SUCCESS';
        COMMIT;
    ELSE
        SET p_status = 'INSUFFICIENT_FUNDS';
        ROLLBACK;
    END IF;
END //
DELIMITER ;
A MySQL stored procedure with transaction management, error handling via DECLARE HANDLER, SELECT INTO, and OUT parameters. The AI will convert this to a PL/pgSQL procedure or function using an EXCEPTION block, proper variable declaration syntax, and PostgreSQL transaction semantics (PL/pgSQL functions always run inside the caller's transaction; only procedures on PostgreSQL 11+ can issue COMMIT or ROLLBACK themselves).
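A plausible PL/pgSQL conversion might look like the following (a sketch, not the tool's literal output; it assumes the caller opens and commits the transaction, so the error handler does not issue ROLLBACK itself, because an exception caught in PL/pgSQL automatically rolls back the work done inside that block):
-- PostgreSQL: one possible PL/pgSQL conversion (requires PostgreSQL 11+ for CREATE PROCEDURE)
CREATE OR REPLACE PROCEDURE transfer_funds(
    p_from_account INTEGER,
    p_to_account   INTEGER,
    p_amount       NUMERIC(15,2),
    INOUT p_status VARCHAR(50)
)
LANGUAGE plpgsql
AS $$
DECLARE
    v_balance NUMERIC(15,2);
BEGIN
    SELECT balance INTO v_balance
    FROM accounts
    WHERE account_id = p_from_account
    FOR UPDATE;

    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount
        WHERE account_id = p_from_account;
        UPDATE accounts SET balance = balance + p_amount
        WHERE account_id = p_to_account;
        INSERT INTO transfers (from_account, to_account, amount, transfer_date)
        VALUES (p_from_account, p_to_account, p_amount, now());
        p_status := 'SUCCESS';
    ELSE
        p_status := 'INSUFFICIENT_FUNDS';
    END IF;
EXCEPTION
    WHEN OTHERS THEN
        p_status := 'ERROR';  -- work done in this block is rolled back automatically
END;
$$;

-- Call it with a placeholder for the INOUT parameter:
CALL transfer_funds(1, 2, 100.00, NULL);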
Inputs and outputs
What you provide
- MySQL CREATE TABLE statements or mysqldump output
- MySQL stored procedures, functions, or triggers to convert
- Application SQL queries using MySQL-specific syntax
- Migration constraints (database size, downtime tolerance, cloud provider)
What you get
- PostgreSQL-compatible DDL with data type mappings explained
- PL/pgSQL function/procedure equivalents of MySQL stored procedures
- Rewritten queries with PostgreSQL syntax and function replacements
- Migration plan with tool recommendations and validation steps
Use cases
- Converting MySQL CREATE TABLE statements to PostgreSQL with correct data type mappings and constraint syntax
- Translating MySQL stored procedures and functions to PL/pgSQL with proper error handling and control flow
- Identifying MySQL-specific SQL syntax in application queries and providing PostgreSQL equivalents
- Planning a phased migration strategy with tool selection (pgloader, pg_chameleon, AWS DMS) based on requirements
- Resolving character set and collation mismatches between MySQL utf8/utf8mb4 and PostgreSQL UTF-8
- Converting MySQL triggers to PostgreSQL trigger functions with proper BEFORE/AFTER/INSTEAD OF semantics
Features
- Maps MySQL data types to optimal PostgreSQL equivalents with explanations of behavioral differences
- Converts AUTO_INCREMENT columns to GENERATED ALWAYS AS IDENTITY or SERIAL with sequence configuration
- Translates MySQL ENUM columns to PostgreSQL ENUM types or CHECK constraints, with trade-off analysis
- Rewrites MySQL-specific functions (IFNULL, GROUP_CONCAT, DATE_FORMAT, UNIX_TIMESTAMP) to PostgreSQL equivalents
- Converts stored procedures from MySQL SQL/PSM to PL/pgSQL with DECLARE/EXCEPTION block restructuring
- Identifies implicit behavior differences: case sensitivity, zero dates, silent truncation, GROUP BY handling
Frequently asked questions
What are the most common data type mapping issues when migrating from MySQL to PostgreSQL?
The most impactful type differences are:
- **AUTO_INCREMENT**: becomes GENERATED ALWAYS AS IDENTITY (PostgreSQL 10+) or SERIAL. Identity columns are preferred because they reject manually supplied values by default, avoiding sequence desync.
- **TINYINT(1)**: should map to BOOLEAN, not SMALLINT, since MySQL commonly uses it as a boolean flag. Verify first that your application never stores values other than 0 and 1.
- **DATETIME**: maps to TIMESTAMP (without time zone), and **TIMESTAMP** maps to TIMESTAMPTZ (with time zone). Be aware that MySQL's TIMESTAMP auto-converts to UTC on storage, while PostgreSQL's TIMESTAMPTZ stores the UTC value and converts on display based on the session timezone setting.
- **ENUM**: exists in both systems but behaves differently. MySQL enums are stored as integers internally and allow empty-string values, while PostgreSQL enums are true types that must be created separately with CREATE TYPE.
- **UNSIGNED** integers: do not exist in PostgreSQL. Use the next larger type or add a CHECK (column >= 0) constraint.
- **MEDIUMTEXT**, **LONGTEXT**, **TINYTEXT**: all become plain TEXT in PostgreSQL, which has no practical size limit.
- **DOUBLE**: becomes DOUBLE PRECISION, and **FLOAT** becomes REAL.
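To illustrate the ENUM trade-off concretely (a sketch; the type, table, and column names are hypothetical):
-- Option 1: a native PostgreSQL ENUM type (compact and ordered; adding values later needs ALTER TYPE)
CREATE TYPE order_state AS ENUM ('pending', 'shipped', 'cancelled');

-- Option 2: TEXT with a CHECK constraint (easier to evolve; the constraint can be dropped and re-added)
CREATE TABLE orders (
    id    BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    state TEXT NOT NULL CHECK (state IN ('pending', 'shipped', 'cancelled'))
);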
How do I convert MySQL stored procedures to PostgreSQL PL/pgSQL?
MySQL stored procedures use a SQL/PSM-derived language, while PostgreSQL uses PL/pgSQL. The key structural differences:
- MySQL uses DELIMITER to change the statement terminator; PostgreSQL does not need this because function bodies are wrapped in $$ dollar-quoted strings.
- MySQL DECLARE statements for variables go inside the BEGIN...END block; PL/pgSQL has a separate DECLARE section before BEGIN.
- MySQL error handlers (DECLARE EXIT HANDLER FOR SQLEXCEPTION) become PL/pgSQL EXCEPTION blocks inside BEGIN...EXCEPTION...END.
- MySQL OUT and INOUT parameters work similarly in PostgreSQL, but idiomatic PostgreSQL functions often return TABLE or a composite type instead.
- MySQL LEAVE and ITERATE (for loop control) map to EXIT and CONTINUE in PL/pgSQL.
- MySQL's SELECT ... INTO syntax is the same in PL/pgSQL, but a SELECT without INTO inside a PL/pgSQL function is an error; use PERFORM for queries whose results you want to discard.
- Transaction control (COMMIT, ROLLBACK) inside PL/pgSQL requires a PROCEDURE (not a FUNCTION) and PostgreSQL 11 or later. Procedures do not exist in earlier versions, and functions can never manage their own transactions, so the caller must handle commit and rollback.
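To make the structural mapping concrete, here is a bare-bones skeleton (a sketch with hypothetical table and column names, not generated output):
-- PL/pgSQL skeleton showing where the MySQL constructs land
CREATE OR REPLACE FUNCTION count_recent_logins(p_user_id INTEGER)
RETURNS INTEGER
LANGUAGE plpgsql
AS $$                                  -- dollar quoting replaces DELIMITER juggling
DECLARE                                -- MySQL's in-body DECLAREs move into this section
    v_count INTEGER := 0;
BEGIN
    SELECT COUNT(*) INTO v_count       -- SELECT ... INTO works the same way
    FROM logins
    WHERE user_id = p_user_id
      AND login_at > now() - interval '30 days';

    RETURN v_count;
EXCEPTION                              -- DECLARE EXIT HANDLER FOR SQLEXCEPTION becomes this block
    WHEN OTHERS THEN
        RETURN -1;
END;
$$;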
What tools should I use for migrating data from MySQL to PostgreSQL?
- **pgloader** is the most popular open-source tool for MySQL-to-PostgreSQL migrations. It reads directly from a running MySQL instance, converts schemas automatically (including data types, indexes, and foreign keys), loads data using PostgreSQL COPY for speed, and creates indexes in parallel after the data load. It handles most type conversions out of the box and allows custom transformation rules via its command language. For databases under 100 GB with a maintenance window, pgloader is usually the best choice.
- **pg_chameleon** provides logical replication from MySQL to PostgreSQL, allowing you to keep both databases in sync during a transition period and cut over with minimal downtime. It is a good fit for large databases or zero-downtime requirements.
- **AWS Database Migration Service (DMS)** works well if both databases are in AWS and you want a managed solution, though it has limitations with complex stored procedures and certain data types.
- For very large databases (multi-TB), consider using **mydumper** to export data in parallel, then **pgloader** or custom scripts to load it.
Whichever tool you choose, run the migration against a test environment first, then validate row counts, checksums, and application behavior before cutting over production.
How do I handle MySQL-specific SQL syntax in my application queries?
Several MySQL syntax patterns need rewriting for PostgreSQL:
- **Identifier quoting**: MySQL uses backticks (`), PostgreSQL uses double quotes ("), though the best practice is unquoted lowercase identifiers that are valid in both systems.
- **LIMIT with offset**: LIMIT 10, 20 in MySQL becomes LIMIT 20 OFFSET 10 in PostgreSQL; note the reversed order.
- **String concatenation**: MySQL's CONCAT() works in PostgreSQL, but the || operator is more idiomatic.
- **IFNULL**: becomes COALESCE, which is standard SQL and works in both.
- **GROUP_CONCAT**: becomes STRING_AGG(column, ',') in PostgreSQL, with ORDER BY supported inside the aggregate.
- **INSERT ... ON DUPLICATE KEY UPDATE**: becomes INSERT ... ON CONFLICT (...) DO UPDATE SET ... in PostgreSQL 9.5+.
- **DATE_FORMAT**: becomes TO_CHAR(timestamp, format) with different format specifiers (%Y-%m-%d becomes YYYY-MM-DD).
- **REPLACE INTO**: has no direct equivalent; use INSERT ... ON CONFLICT DO UPDATE instead.
- **SHOW TABLES**, **DESCRIBE**: use \dt in psql, or query information_schema.tables and information_schema.columns.
- **AUTO_INCREMENT value after INSERT**: MySQL's LAST_INSERT_ID() becomes currval('sequence_name'), or use INSERT ... RETURNING id.
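Two hedged before/after examples (table and column names are illustrative):
-- MySQL
SELECT user_id, GROUP_CONCAT(tag ORDER BY tag SEPARATOR ', ') AS tags
FROM user_tags
GROUP BY user_id
ORDER BY user_id
LIMIT 10, 20;

-- PostgreSQL equivalent
SELECT user_id, STRING_AGG(tag, ', ' ORDER BY tag) AS tags
FROM user_tags
GROUP BY user_id
ORDER BY user_id
LIMIT 20 OFFSET 10;

-- MySQL upsert
INSERT INTO settings (user_id, theme) VALUES (1, 'dark')
ON DUPLICATE KEY UPDATE theme = VALUES(theme);

-- PostgreSQL equivalent (requires a unique constraint or index on user_id)
INSERT INTO settings (user_id, theme) VALUES (1, 'dark')
ON CONFLICT (user_id) DO UPDATE SET theme = EXCLUDED.theme;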
Ready to try it?
Use this tool for free — powered by PostgresAI.