DBMate
Database Migrations That Actually Behave
8 min read
#programming, #dbmate, #git

Database migrations should be boring. Write a change, version it, apply it consistently across every environment. Somehow, most tools make this harder than it needs to be. dbmate doesn't.


Origin

dbmate was created by Adrian Macneil and first released around 2016. Macneil had previously built and maintained other migration tools and wanted something simpler: framework-agnostic, language-agnostic, and driven by plain SQL rather than a DSL.

The project lives at github.com/amacneil/dbmate and is actively maintained. It stores migration state in a schema_migrations table it manages itself, and it keeps a schema.sql dump after every run -- which is the detail that makes it genuinely useful in a team workflow.


Installation

macOS

brew install dbmate

Verify:

dbmate --version

Windows

Download the latest release binary from the GitHub releases page. Choose the dbmate-windows-amd64.exe asset.

Option 1 -- Manual:

  1. Rename it to dbmate.exe
  2. Move it to a dedicated tools directory (e.g., C:\tools\)
  3. Add that directory to your system PATH via System Properties → Environment Variables

Option 2 -- Scoop:

scoop install dbmate

Option 3 -- Winget:

winget install dbmate

Verify:

dbmate --version

Configuration

dbmate reads the database URL from a .env file in the project root, or from the DATABASE_URL environment variable directly.

.env (never commit this file):

DATABASE_URL="postgres://user:password@localhost:5432/myapp_dev?sslmode=disable"

For MySQL:

DATABASE_URL="mysql://user:password@localhost:3306/myapp_dev"

Add .env to .gitignore immediately:

echo ".env" >> .gitignore

Instead, commit a .env.example with placeholder values so teammates know what's needed.
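
A minimal .env.example might look like this (placeholder credentials only; the exact values are just conventions):

```shell
# .env.example -- committed to git; copy to .env and fill in real values
DATABASE_URL="postgres://USER:PASSWORD@localhost:5432/myapp_dev?sslmode=disable"
```

New teammates then run cp .env.example .env and substitute their real credentials.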


Project Structure

dbmate expects (and creates) a db/ directory:

project/
├── db/
│   ├── migrations/
│   │   ├── 20240101120000_create_users.sql
│   │   └── 20240215083000_add_email_index.sql
│   └── schema.sql          ← auto-generated after every migrate
├── .env                    ← local only, gitignored
├── .env.example            ← committed, placeholder values
└── .gitignore

Starting From Scratch

Create your first migration (dbmate creates the db/migrations/ directory if it doesn't exist):

dbmate new create_users

This generates a file like db/migrations/20240101120000_create_users.sql (the prefix is the current UTC timestamp) with this scaffold:

-- migrate:up

-- migrate:down

Fill it in:

-- migrate:up
CREATE TABLE users (
    id          BIGSERIAL PRIMARY KEY,
    username    VARCHAR(100) NOT NULL UNIQUE,
    email       VARCHAR(255) NOT NULL UNIQUE,
    created_at  TIMESTAMPTZ  NOT NULL DEFAULT NOW()
);

-- migrate:down
DROP TABLE users;

Apply it:

dbmate up

dbmate creates the schema_migrations table, records the migration, and rewrites db/schema.sql.
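
The tracking table is plain SQL you can query yourself if you want to see what was recorded (output depends on which migrations you've applied):

```sql
-- List every applied migration version
SELECT version FROM schema_migrations ORDER BY version;
```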

Roll it back:

dbmate down

Check status:

dbmate status

Output:

[ ] 20240101120000_create_users.sql

After applying:

[x] 20240101120000_create_users.sql

Adopting dbmate on an Existing Project

If your database already has a schema and data, the approach is to baseline it: tell dbmate that the current state is already applied, without re-running anything.

Step 1 -- Dump the current schema into a migration file:

# PostgreSQL
pg_dump --schema-only myapp_dev > db/migrations/20240101000000_baseline.sql

Wrap the dump contents in the migration markers:

-- migrate:up
-- (paste pg_dump output here)

-- migrate:down
-- Intentionally empty -- baseline migration is not reversible

Step 2 -- Mark the baseline as applied without running its SQL:

dbmate normally creates the schema_migrations table on the first dbmate up, but running up here would replay the baseline against a database that already has the schema. Instead, create the table and insert the record by hand:

CREATE TABLE IF NOT EXISTS schema_migrations (version VARCHAR(128) PRIMARY KEY);
INSERT INTO schema_migrations (version) VALUES ('20240101000000');

(The exact column definition dbmate uses can vary by driver; any VARCHAR wide enough for the timestamp works.)
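
The version string is nothing magic: dbmate uses UTC timestamps in YYYYMMDDHHMMSS form. If you want the baseline version to reflect the actual cut-over time, generate one yourself:

```shell
# Produce a version string in dbmate's timestamp format (UTC, YYYYMMDDHHMMSS)
version=$(date -u +%Y%m%d%H%M%S)
echo "$version"
```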

Step 3 -- Verify:

dbmate status
# [x] 20240101000000_baseline.sql

All future migrations layer on top of this baseline cleanly.


Every migration file has a timestamp prefix, an up block, and a down block — dbmate tracks which versions have been applied

Migration Examples

Add a column to an existing table

dbmate new add_phone_to_users
-- migrate:up
ALTER TABLE users ADD COLUMN phone VARCHAR(20);

-- migrate:down
ALTER TABLE users DROP COLUMN phone;

Modify an existing record (data migration)

dbmate handles data migrations just as well as schema migrations -- they're just SQL.

dbmate new normalize_admin_email
-- migrate:up
UPDATE users
SET email = LOWER(TRIM(email))
WHERE email <> LOWER(TRIM(email));

-- migrate:down
-- Data migrations are typically irreversible; document that explicitly.
-- No-op down intentional.

Rule of thumb: Always write a down block, even if it's a comment explaining why it's empty. Silent empty downs get confusing six months later.
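
That rule is easy to enforce mechanically. Here's a sketch of a pre-commit check (check_migrations is my own helper name, not part of dbmate; it assumes the standard db/migrations/ layout):

```shell
# check_migrations: report any .sql file in a directory missing an up or down marker
check_migrations() {
  dir="${1:-db/migrations}"
  status=0
  for f in "$dir"/*.sql; do
    [ -e "$f" ] || continue                       # glob matched nothing; skip
    grep -q '^-- migrate:up'   "$f" || { echo "missing migrate:up:   $f"; status=1; }
    grep -q '^-- migrate:down' "$f" || { echo "missing migrate:down: $f"; status=1; }
  done
  return $status
}
```

Wire it into a pre-commit hook or CI step so a migration never lands without both markers.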


Add a new record to a table (seed/reference data)

dbmate new seed_roles
-- migrate:up
INSERT INTO roles (name, description) VALUES
    ('admin',  'Full system access'),
    ('editor', 'Can create and modify content'),
    ('viewer', 'Read-only access');

-- migrate:down
DELETE FROM roles WHERE name IN ('admin', 'editor', 'viewer');

Create an index

dbmate new idx_users_email
-- migrate:up transaction:false
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- migrate:down transaction:false
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;

CONCURRENTLY avoids locking the table on PostgreSQL, but it cannot run inside a transaction, so both blocks need the transaction:false annotation (dbmate wraps each migration in a transaction by default). On MySQL, use ALGORITHM=INPLACE, LOCK=NONE for large tables.
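
For reference, a MySQL version of the same migration might look like this (a sketch, assuming MySQL 5.6+ online DDL support):

```sql
-- migrate:up
ALTER TABLE users ADD INDEX idx_users_email (email), ALGORITHM=INPLACE, LOCK=NONE;

-- migrate:down
ALTER TABLE users DROP INDEX idx_users_email;
```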


Add a function

dbmate new fn_get_user_display_name
-- migrate:up
CREATE OR REPLACE FUNCTION fn_get_user_display_name(p_user_id BIGINT)
RETURNS TEXT
LANGUAGE SQL
STABLE
AS $$
    SELECT COALESCE(display_name, username)
    FROM   users
    WHERE  id = p_user_id;
$$;

-- migrate:down
DROP FUNCTION IF EXISTS fn_get_user_display_name(BIGINT);

Add a table

dbmate new create_audit_log
-- migrate:up
CREATE TABLE audit_log (
    id          BIGSERIAL    PRIMARY KEY,
    user_id     BIGINT       REFERENCES users(id) ON DELETE SET NULL,
    action      VARCHAR(100) NOT NULL,
    table_name  VARCHAR(100),
    record_id   BIGINT,
    payload     JSONB,
    created_at  TIMESTAMPTZ  NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_audit_log_user_id    ON audit_log (user_id);
CREATE INDEX idx_audit_log_created_at ON audit_log (created_at DESC);

-- migrate:down
DROP TABLE IF EXISTS audit_log;

Other project files (scripts, config, docs)

dbmate only manages SQL migrations. Any other files in your project -- Python scripts, shell scripts, config files, documentation -- just check them into git normally alongside your migration files. There's no special dbmate handling needed; version control does that job.


Git Workflow: Dev → Stage → Prod

This is where dbmate's plain-SQL design pays off.

Branch strategy

main          ← production
└── staging   ← staging
    └── feature/add-audit-log  ← development

Development

git checkout -b feature/add-audit-log
dbmate new create_audit_log
# ... write the migration ...
dbmate up                          # test locally
git add db/migrations/ db/schema.sql
git commit -m "migration: add audit_log table"
git push origin feature/add-audit-log

Always commit db/schema.sql. It gives reviewers a plain diff of what the schema looks like after your migration -- no need to mentally stack migrations.

Promote to staging

git checkout staging
git merge feature/add-audit-log
git push origin staging

On the staging server (in CI/CD or manually):

export DATABASE_URL="postgres://user:pass@stage-db:5432/myapp_stage"
dbmate up

Verify:

dbmate status
# All migrations show [x]

Promote to production

git checkout main
git merge staging
git push origin main
git tag v1.4.0

On the production server:

export DATABASE_URL="postgres://user:pass@prod-db:5432/myapp_prod"
dbmate up

Environment .env files per server

Each server carries its own .env (never committed). Only DATABASE_URL changes between environments. Your migration files are identical everywhere -- that's the point.

CI/CD snippet (GitHub Actions example)

- name: Run migrations
  env:
    DATABASE_URL: ${{ secrets.STAGE_DATABASE_URL }}
  run: |
    curl -fsSL https://github.com/amacneil/dbmate/releases/latest/download/dbmate-linux-amd64 \
      -o /usr/local/bin/dbmate && chmod +x /usr/local/bin/dbmate
    dbmate up

Useful Commands Cheat Sheet

Command              What it does
dbmate new <name>    Create a timestamped migration file
dbmate up            Create the database (if necessary) and apply all pending migrations
dbmate down          Roll back the most recent migration (alias: rollback)
dbmate status        Show applied / pending state of all migrations
dbmate dump          Regenerate db/schema.sql without running migrations
dbmate load          Load db/schema.sql into the database
dbmate create        Create the database if it doesn't exist
dbmate drop          Drop the database
dbmate wait          Block until the database becomes available

What to Watch Out For

Concurrent index creation needs a transaction workaround. By default, dbmate wraps each migration in a transaction. CREATE INDEX CONCURRENTLY cannot run inside a transaction on PostgreSQL. Disable the wrapper with the annotation:

-- migrate:up transaction:false
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- migrate:down transaction:false
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;

Don't rewrite existing migration files. Once a migration is committed and applied anywhere, it's permanent history. Write a new migration to fix it.

schema.sql is the source of truth for code review. Make it a policy that any PR touching db/migrations/ must also include an updated db/schema.sql.
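
That policy can be enforced in CI as well. A sketch of a GitHub Actions step (the step name and origin/main base ref are assumptions; adjust to your branch layout):

```yaml
- name: Require schema.sql alongside migration changes
  run: |
    base=origin/main
    if git diff --name-only "$base"...HEAD | grep -q '^db/migrations/'; then
      git diff --name-only "$base"...HEAD | grep -q '^db/schema.sql$' \
        || { echo "db/migrations changed but db/schema.sql was not updated"; exit 1; }
    fi
```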


The Bottom Line

dbmate earns its place in any stack by staying out of the way. One binary, plain SQL files, a schema_migrations table, and a schema dump. That's it. It doesn't care whether you're on PostgreSQL, MySQL, or SQLite. It doesn't care whether your team writes PHP, Python, or Go. And because every migration is just a SQL file with a timestamp in the name, git log db/migrations/ tells the complete story of your database -- no DSL required.