Schema-to-Schema

The Schema-to-Schema workflow handles major schema refactoring on production databases with zero downtime. It runs the old and new schemas side by side using a PostgreSQL Foreign Data Wrapper (FDW), migrates data in the background, and cuts over when ready.

Incremental migrations work for simple changes (add column, create index). But some changes are too disruptive:

  • Renaming columns on large tables
  • Changing column types that require data transformation
  • Splitting or merging tables
  • Changing primary key strategies

These operations lock tables, block writes, and can cause minutes of downtime on large tables. Schema-to-Schema avoids this entirely.

The process runs in five steps:

  1. Setup — Create a new database with the new schema alongside the existing one:

    confiture migrate schema-to-schema setup --source prod --target prod_new

    This creates prod_new, applies your updated DDL files to it, and sets up a PostgreSQL FDW connection from prod to prod_new. Your application continues running against prod throughout.

  2. Migrate Data — Copy data from the old schema to the new one in the background:

    confiture migrate schema-to-schema migrate --target prod_new

    Confiture automatically selects the best strategy per table:

    Strategy   Throughput        Best For
    FDW        ~500K rows/sec    Tables under 10M rows, complex SQL transformations
    COPY       ~6M rows/sec      Tables over 10M rows, bulk data

    Your application stays on the old schema while this runs.

  3. Verify — Check data integrity before cutting over:

    confiture migrate schema-to-schema verify --target prod_new
    ✓ Row counts match
    → tb_user: 12,450 (source) = 12,450 (target)
    → tb_post: 45,230 (source) = 45,230 (target)
    ✓ Checksum verification passed
    ✓ Foreign key integrity verified
  4. Cutover — Update your application to point to the new database:

    # Update environment variable or config
    DATABASE_URL=postgresql://prod_new_host:5432/prod_new
    # Restart application (0-5s downtime)

    The cutover is a config change — the only downtime is the application restart.

  5. Cleanup — After monitoring the new database for a period:

    confiture migrate schema-to-schema cleanup --target prod_new

    This removes the FDW connection and (optionally) the old database.
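Confiture's actual strategy-selection code isn't shown in these docs. As a rough sketch only, the thresholds in the strategy table in step 2 imply a per-table heuristic along these lines (function and field names are hypothetical, not Confiture's API):

```python
# Hypothetical sketch of per-table strategy selection, based only on the
# thresholds in the strategy table above. Names are illustrative.

ROW_THRESHOLD = 10_000_000  # tables above this favor COPY's raw throughput


def choose_strategy(row_count: int, has_transform: bool) -> str:
    """Pick a migration strategy for one table.

    FDW supports arbitrary SQL transformations but tops out around
    ~500K rows/sec; COPY is roughly 12x faster but moves rows as-is.
    """
    if has_transform:
        return "fdw"  # transformations need SQL, so FDW regardless of size
    if row_count > ROW_THRESHOLD:
        return "copy"  # bulk data: COPY's ~6M rows/sec wins
    return "fdw"


print(choose_strategy(12_450, has_transform=False))      # small table
print(choose_strategy(50_000_000, has_transform=False))  # bulk table
print(choose_strategy(50_000_000, has_transform=True))   # needs a SQL transform
```

Under this reading of the table, a table that needs a SQL transformation goes through FDW regardless of size, since COPY moves rows verbatim.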

When renaming or transforming columns, provide a mapping file:

column_mapping.yaml
tb_user:
  # Simple rename
  username: login_name
  # Type transformation
  metadata:
    source_column: settings_json
    transform: "settings_json::jsonb"
  # Computed column
  display_name:
    transform: "COALESCE(display_name, name)"

confiture migrate schema-to-schema migrate \
  --target prod_new \
  --mapping column_mapping.yaml
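However Confiture applies the mapping internally, entries like the ones above must eventually expand into SELECT expressions against the source table. A purely illustrative sketch of that expansion (not Confiture's implementation):

```python
# Illustrative only: expand a parsed column mapping into SELECT expressions
# against the source table. Mirrors the three mapping forms shown above.

def select_expressions(table_mapping: dict) -> list[str]:
    """Build 'expr AS target' SELECT items from one table's mapping."""
    exprs = []
    for key, rule in table_mapping.items():
        if isinstance(rule, str):
            # Simple rename: old_name: new_name
            exprs.append(f"{key} AS {rule}")
        else:
            # Type transformation or computed column, keyed by target column
            exprs.append(f"{rule['transform']} AS {key}")
    return exprs


tb_user = {
    "username": "login_name",
    "metadata": {"source_column": "settings_json",
                 "transform": "settings_json::jsonb"},
    "display_name": {"transform": "COALESCE(display_name, name)"},
}

print(", ".join(select_expressions(tb_user)))
# username AS login_name, settings_json::jsonb AS metadata,
# COALESCE(display_name, name) AS display_name
```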

Renaming tb_user.username to tb_user.login_name on a table with 50M rows:

# 1. Update DDL files with new column name
# db/schema/01_write/tb_user.sql now uses login_name
# 2. Setup new database
confiture migrate schema-to-schema setup --source prod --target prod_new
# 3. Migrate with column mapping
confiture migrate schema-to-schema migrate \
  --target prod_new \
  --mapping column_mapping.yaml
# 4. Verify
confiture migrate schema-to-schema verify --target prod_new
# 5. Update app config, restart
# 6. Monitor, then cleanup
confiture migrate schema-to-schema cleanup --target prod_new
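The row-count portion of the verify step reduces to a straightforward per-table comparison. A hypothetical sketch (the real command also verifies checksums and foreign key integrity):

```python
# Hypothetical sketch of the row-count check performed by 'verify'.
# The counts would come from the source and target databases.

def verify_row_counts(source_counts: dict, target_counts: dict) -> bool:
    """Compare per-table row counts; report each table, return overall result."""
    ok = True
    for table, src in sorted(source_counts.items()):
        tgt = target_counts.get(table, 0)
        mark = "=" if src == tgt else "!="
        print(f"-> {table}: {src:,} (source) {mark} {tgt:,} (target)")
        ok = ok and src == tgt
    return ok


source = {"tb_user": 12_450, "tb_post": 45_230}
target = {"tb_user": 12_450, "tb_post": 45_230}
print("Row counts match" if verify_row_counts(source, target) else "MISMATCH")
```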