Database migrations should be boring. Write a change, version it, apply it consistently across every environment. Somehow, most tools make this harder than it needs to be. dbmate doesn't.
dbmate was created by Adrian Macneil and first released around 2016. Macneil had previously built and maintained other migration tools and wanted something simpler: a single standalone binary, plain SQL migration files, and no ties to any particular language or framework.
The project lives at github.com/amacneil/dbmate and is actively maintained. It stores migration state in a schema_migrations table it manages itself, and it keeps a schema.sql dump after every run -- which is the detail that makes it genuinely useful in a team workflow.
brew install dbmate
Verify:
dbmate --version
Download the latest release binary from the GitHub releases page. Choose the dbmate-windows-amd64.exe asset.
Option 1 -- Manual:
1. Place dbmate.exe in a directory of your choice (e.g., C:\tools\)
2. Add that directory to your PATH via System Properties → Environment Variables
Option 2 -- Scoop:
scoop install dbmate
Option 3 -- Winget:
winget install dbmate
Verify:
dbmate --version
dbmate reads the database URL from a .env file in the project root, or from the DATABASE_URL environment variable directly.
.env (never commit this file):
DATABASE_URL="postgres://user:password@localhost:5432/myapp_dev?sslmode=disable"
For MySQL:
DATABASE_URL="mysql://user:password@localhost:3306/myapp_dev"
Add .env to .gitignore immediately:
echo ".env" >> .gitignore
Instead, commit a .env.example with placeholder values so teammates know what's needed.
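A .env.example for this setup might look like the following (placeholder values only, never real credentials):

```
# Copy to .env and fill in real credentials for your local database
DATABASE_URL="postgres://USER:PASSWORD@localhost:5432/myapp_dev?sslmode=disable"
```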
dbmate expects (and creates) a db/ directory:
project/
├── db/
│   ├── migrations/
│   │   ├── 20240101120000_create_users.sql
│   │   └── 20240215083000_add_email_index.sql
│   └── schema.sql       ← auto-generated after every migrate
├── .env                 ← local only, gitignored
├── .env.example         ← committed, placeholder values
└── .gitignore
Initialize the migrations directory:
dbmate new create_users
This generates a file like db/migrations/20240101120000_create_users.sql (the prefix is the current UTC timestamp) with this scaffold:
-- migrate:up
-- migrate:down
Fill it in:
-- migrate:up
CREATE TABLE users (
id BIGSERIAL PRIMARY KEY,
username VARCHAR(100) NOT NULL UNIQUE,
email VARCHAR(255) NOT NULL UNIQUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- migrate:down
DROP TABLE users;
Apply it:
dbmate up
dbmate creates the schema_migrations table, records the migration, and rewrites db/schema.sql.
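Under the hood this is a simple version-tracking pattern. Here is a minimal sketch in Python using sqlite3 (dbmate itself is a Go binary; the schema_migrations table name is real, but the exact column definition here is an assumption):

```python
import sqlite3

def applied_versions(conn):
    # Ensure the tracking table exists, then read recorded versions.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
    )
    return {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

def apply_migration(conn, version, up_sql):
    # Run the migration and record its version in one transaction,
    # mirroring dbmate's default transaction-per-migration behavior.
    with conn:
        conn.execute(up_sql)
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
applied_versions(conn)  # creates the tracking table
apply_migration(conn, "20240101120000", "CREATE TABLE users (id INTEGER PRIMARY KEY)")
print(applied_versions(conn))  # {'20240101120000'}
```

Re-running the same version would fail the primary-key constraint, which is exactly why a migration is never applied twice.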
Roll it back:
dbmate down
Check status:
dbmate status
Output:
[ ] 20240101120000_create_users.sql
After applying:
[x] 20240101120000_create_users.sql
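The filename prefix is just a UTC timestamp, which is what keeps migrations sorted in creation order. A sketch of how such a name can be derived (dbmate's exact name-slug handling is an assumption):

```python
from datetime import datetime, timezone

def migration_filename(name, now=None):
    # Prefix with YYYYMMDDHHMMSS in UTC so files sort chronologically.
    now = now or datetime.now(timezone.utc)
    return f"{now.strftime('%Y%m%d%H%M%S')}_{name}.sql"

stamp = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
print(migration_filename("create_users", stamp))
# 20240101120000_create_users.sql
```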
If your database already has a schema and data, baseline it -- tell dbmate that the current state is already applied, without re-running anything.
Step 1 -- Dump the current schema into a migration file:
# PostgreSQL
pg_dump --schema-only myapp_dev > db/migrations/20240101000000_baseline.sql
Wrap the dump contents in the migration markers:
-- migrate:up
-- (paste pg_dump output here)
-- migrate:down
-- Intentionally empty -- baseline migration is not reversible
Step 2 -- Mark the baseline as applied without running its SQL. Note that dbmate create makes the database itself, not the tracking table, so create schema_migrations manually if it doesn't exist yet:
CREATE TABLE IF NOT EXISTS schema_migrations (version VARCHAR(255) PRIMARY KEY);
Then manually insert the baseline record:
INSERT INTO schema_migrations (version) VALUES ('20240101000000');
Step 3 -- Verify:
dbmate status
# [x] 20240101000000_baseline.sql
All future migrations layer on top of this baseline cleanly.
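Baselining works because a migration runner only applies versions missing from schema_migrations. A toy illustration of that skip logic (a sketch of the pattern, not dbmate's actual code):

```python
# Files on disk vs. versions already recorded in schema_migrations.
on_disk = ["20240101000000_baseline.sql", "20240215083000_add_email_index.sql"]
applied = {"20240101000000"}  # the manually inserted baseline record

def pending(files, applied_versions):
    # A migration is pending only if its version prefix is unrecorded.
    return [f for f in files if f.split("_", 1)[0] not in applied_versions]

print(pending(on_disk, applied))  # ['20240215083000_add_email_index.sql']
```

The baseline's SQL is never executed against the already-populated database; only later migrations run.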

dbmate new add_phone_to_users
-- migrate:up
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- migrate:down
ALTER TABLE users DROP COLUMN phone;
dbmate handles data migrations just as well as schema migrations -- they're just SQL.
dbmate new normalize_admin_email
-- migrate:up
UPDATE users
SET email = LOWER(TRIM(email))
WHERE email <> LOWER(TRIM(email));
-- migrate:down
-- Data migrations are typically irreversible; document that explicitly.
-- No-op down intentional.
Rule of thumb: Always write a down block, even if it's a comment explaining why it's empty. Silent empty downs get confusing six months later.
dbmate new seed_roles
-- migrate:up
INSERT INTO roles (name, description) VALUES
('admin', 'Full system access'),
('editor', 'Can create and modify content'),
('viewer', 'Read-only access');
-- migrate:down
DELETE FROM roles WHERE name IN ('admin', 'editor', 'viewer');
dbmate new idx_users_email
-- migrate:up transaction:false
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
-- migrate:down transaction:false
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;
CONCURRENTLY avoids locking the table on PostgreSQL, but it cannot run inside a transaction, so the transaction:false annotation (explained below) is required. On MySQL, use ALGORITHM=INPLACE, LOCK=NONE for large tables.
dbmate new fn_get_user_display_name
-- migrate:up
CREATE OR REPLACE FUNCTION fn_get_user_display_name(p_user_id BIGINT)
RETURNS TEXT
LANGUAGE SQL
STABLE
AS $$
SELECT COALESCE(display_name, username)
FROM users
WHERE id = p_user_id;
$$;
-- migrate:down
DROP FUNCTION IF EXISTS fn_get_user_display_name(BIGINT);
dbmate new create_audit_log
-- migrate:up
CREATE TABLE audit_log (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT REFERENCES users(id) ON DELETE SET NULL,
action VARCHAR(100) NOT NULL,
table_name VARCHAR(100),
record_id BIGINT,
payload JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_audit_log_user_id ON audit_log (user_id);
CREATE INDEX idx_audit_log_created_at ON audit_log (created_at DESC);
-- migrate:down
DROP TABLE IF EXISTS audit_log;
dbmate only manages SQL migrations. Any other files in your project -- Python scripts, shell scripts, config files, documentation -- just check them into git normally alongside your migration files. There's no special dbmate handling needed; version control does that job.
This is where dbmate's plain-SQL design pays off.
main ← production
└── staging ← staging
└── feature/add-audit-log ← development
git checkout -b feature/add-audit-log
dbmate new create_audit_log
# ... write the migration ...
dbmate up # test locally
git add db/migrations/ db/schema.sql
git commit -m "migration: add audit_log table"
git push origin feature/add-audit-log
Always commit db/schema.sql. It gives reviewers a plain diff of what the schema looks like after your migration -- no need to mentally stack migrations.
git checkout staging
git merge feature/add-audit-log
git push origin staging
On the staging server (in CI/CD or manually):
export DATABASE_URL="postgres://user:pass@stage-db:5432/myapp_stage"
dbmate up
Verify:
dbmate status
# All migrations show [x]
git checkout main
git merge staging
git push origin main
git tag v1.4.0
On the production server:
export DATABASE_URL="postgres://user:pass@prod-db:5432/myapp_prod"
dbmate up
.env files per server: each server carries its own .env (never committed). Only DATABASE_URL changes between environments. Your migration files are identical everywhere -- that's the point.
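dbmate reads DATABASE_URL from the environment first, falling back to the .env file. A minimal sketch of that lookup (illustrative parser; real .env handling covers comments, exports, and escaping):

```python
def resolve_database_url(env, dotenv_text):
    # An explicit environment variable wins over the .env file.
    if "DATABASE_URL" in env:
        return env["DATABASE_URL"]
    for line in dotenv_text.splitlines():
        line = line.strip()
        if line.startswith("DATABASE_URL="):
            return line.split("=", 1)[1].strip('"')
    return None

dotenv = 'DATABASE_URL="postgres://user:pass@localhost:5432/myapp_dev"'
print(resolve_database_url({}, dotenv))
# postgres://user:pass@localhost:5432/myapp_dev
print(resolve_database_url({"DATABASE_URL": "postgres://prod"}, dotenv))
# postgres://prod
```

This precedence is what lets CI export DATABASE_URL directly while developers keep theirs in a local .env.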
- name: Run migrations
  env:
    DATABASE_URL: ${{ secrets.STAGE_DATABASE_URL }}
  run: |
    curl -fsSL https://github.com/amacneil/dbmate/releases/latest/download/dbmate-linux-amd64 \
      -o /usr/local/bin/dbmate && chmod +x /usr/local/bin/dbmate
    dbmate up
| Command | What it does |
|---|---|
| dbmate new <name> | Create a timestamped migration file |
| dbmate up | Apply all pending migrations |
| dbmate down | Roll back the most recent migration |
| dbmate redo | Roll back then re-apply the latest migration |
| dbmate status | Show applied / pending state of all migrations |
| dbmate dump | Regenerate db/schema.sql without running migrations |
| dbmate create | Create the database if it doesn't exist |
Concurrent index creation needs a transaction workaround. By default, dbmate wraps each migration in a transaction. CREATE INDEX CONCURRENTLY cannot run inside a transaction on PostgreSQL. Disable the wrapper with the annotation:
-- migrate:up transaction:false
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
-- migrate:down transaction:false
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;
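The migration markers are plain SQL comments, which makes the file format easy to reason about. Here is an illustrative parser for the up/down blocks and the transaction option (a sketch, not dbmate's implementation):

```python
import re

# Matches "-- migrate:up" or "-- migrate:down", plus trailing options.
MARKER = re.compile(r"^--\s*migrate:(up|down)(.*)$")

def parse_migration(text):
    blocks, current = {}, None
    for line in text.splitlines():
        m = MARKER.match(line.strip())
        if m:
            current = m.group(1)
            blocks[current] = {
                "sql": [],
                "transaction": "transaction:false" not in m.group(2),
            }
        elif current:
            blocks[current]["sql"].append(line)
    return blocks

sample = """-- migrate:up transaction:false
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- migrate:down transaction:false
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;
"""
parsed = parse_migration(sample)
print(parsed["up"]["transaction"])  # False
```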
Don't rewrite existing migration files. Once a migration is committed and applied anywhere, it's permanent history. Write a new migration to fix it.
schema.sql is the source of truth for code review. Make it a policy that any PR touching db/migrations/ must also include an updated db/schema.sql.
dbmate earns its place in any stack by staying out of the way. One binary, plain SQL files, a schema_migrations table, and a schema dump. That's it. It doesn't care whether you're on PostgreSQL, MySQL, or SQLite. It doesn't care whether your team writes PHP, Python, or Go. And because every migration is just a SQL file with a timestamp in the name, git log db/migrations/ tells the complete story of your database -- no DSL required.