Dbmate is a database migration tool that will keep your database schema in sync across multiple developers and your production servers.
It is a standalone command line tool that can be used with Go, Node.js, Python, Ruby, PHP, Rust, C++, or any other language or framework you are using to write database-backed applications. This is especially helpful if you are writing multiple services in different languages, and want to maintain some sanity with consistent development tools.
For a comparison between dbmate and other popular database schema migration tools, please see Alternatives.
Features:

- Supports saving a schema.sql file to easily diff schema changes in git
- Database connection URL is defined using an environment variable (DATABASE_URL by default), or specified on the command line
- Built-in support for reading environment variables from your .env file

NPM
Install using NPM:
$ npm install --save-dev dbmate
$ npx dbmate --help
macOS
Install using Homebrew:
$ brew install dbmate
Linux
Install the binary directly:
$ sudo curl -fsSL -o /usr/local/bin/dbmate https://github.com/amacneil/dbmate/releases/latest/download/dbmate-linux-amd64
$ sudo chmod +x /usr/local/bin/dbmate
Windows
Install using Scoop:
$ scoop install dbmate
Docker
Docker images are published to GitHub Container Registry (ghcr.io/amacneil/dbmate).
Remember to set --network=host (or see this comment for more tips on using dbmate with Docker networking):
$ docker run --rm -it --network=host ghcr.io/amacneil/dbmate --help
If you wish to create or apply migrations, you will need to use Docker's bind mount feature to make your local working directory (pwd) available inside the dbmate container:
$ docker run --rm -it --network=host -v "$(pwd)/db:/db" ghcr.io/amacneil/dbmate new create_users_table
dbmate --help    # print usage help
dbmate new       # generate a new migration file
dbmate up        # create the database (if it does not already exist) and run any pending migrations
dbmate create    # create the database
dbmate drop      # drop the database
dbmate migrate   # run any pending migrations
dbmate rollback  # roll back the most recent migration
dbmate down      # alias for rollback
dbmate status    # show the status of all migrations (supports --exit-code and --quiet)
dbmate dump      # write the database schema.sql file
dbmate load      # load schema.sql file to the database
dbmate wait      # wait for the database server to become available
The following options are available with all commands. You must use command line arguments in the order dbmate [global options] command [command options]. Most options can also be configured via environment variables (and loaded from your .env file, which is helpful to share configuration between team members).
--url, -u "protocol://host:port/dbname" - specify the database url directly. (env: DATABASE_URL)
--env, -e "DATABASE_URL" - specify an environment variable to read the database connection URL from.
--env-file ".env" - specify alternate environment variable file(s) to load.
--migrations-dir, -d "./db/migrations" - where to keep the migration files. (env: DBMATE_MIGRATIONS_DIR)
--migrations-table "schema_migrations" - database table to record migrations in. (env: DBMATE_MIGRATIONS_TABLE)
--schema-file, -s "./db/schema.sql" - a path to keep the schema.sql file. (env: DBMATE_SCHEMA_FILE)
--no-dump-schema - don't auto-update the schema.sql file on migrate/rollback (env: DBMATE_NO_DUMP_SCHEMA)
--strict - fail if migrations would be applied out of order (env: DBMATE_STRICT)
--wait - wait for the db to become available before executing the subsequent command (env: DBMATE_WAIT)
--wait-timeout 60s - timeout for --wait flag (env: DBMATE_WAIT_TIMEOUT)

Dbmate locates your database using the DATABASE_URL environment variable by default. If you are writing a twelve-factor app, you should be storing all connection strings in environment variables.
To make this easy in development, dbmate looks for a .env file in the current directory, and treats any variables listed there as if they were specified in the current environment (existing environment variables take preference, however).
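The precedence rule can be sketched in plain shell. This mirrors the described behavior only and is not dbmate's actual code; a real .env parser also strips quotes, which is omitted here for brevity:

```shell
# Only set a variable from .env if it is not already in the environment.
load_dotenv() {
  while IFS='=' read -r key val; do
    [ -z "$key" ] && continue
    # eval-based indirection: check whether $key is already set and non-empty
    if [ -z "$(eval "printf '%s' \"\${$key:-}\"")" ]; then
      export "$key=$val"
    fi
  done < .env
}

printf 'DATABASE_URL=postgres://from-dotenv/db\nOTHER=from-dotenv\n' > .env
export DATABASE_URL="postgres://from-shell/db"
load_dotenv
echo "$DATABASE_URL"   # the already-exported value wins
echo "$OTHER"          # set from .env, since it was not in the environment
```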
If you do not already have a .env file, create one and add your database connection URL:
$ cat .env
DATABASE_URL="postgres://postgres@127.0.0.1:5432/myapp_development?sslmode=disable"
DATABASE_URL should be specified in the following format:
protocol://username:password@host:port/database_name?options
- protocol must be one of mysql, postgres, postgresql, sqlite, sqlite3, clickhouse
- username and password must be URL encoded (you will get an error if you use special characters)
- host can be either a hostname or IP address
- options are driver-specific (refer to the underlying Go SQL drivers if you wish to use these)

Dbmate can also load the connection URL from a different environment variable. For example, before running your test suite, you may wish to drop and recreate the test database. One easy way to do this is to store your test database connection URL in the TEST_DATABASE_URL environment variable:
$ cat .env
DATABASE_URL="postgres://postgres@127.0.0.1:5432/myapp_dev?sslmode=disable"
TEST_DATABASE_URL="postgres://postgres@127.0.0.1:5432/myapp_test?sslmode=disable"
You can then specify this environment variable in your test script (Makefile or similar):
$ dbmate -e TEST_DATABASE_URL drop
Dropping: myapp_test
$ dbmate -e TEST_DATABASE_URL --no-dump-schema up
Creating: myapp_test
Applying: 20151127184807_create_users_table.sql
Applied: 20151127184807_create_users_table.sql in 123µs
Alternatively, you can specify the url directly on the command line:
$ dbmate -u "postgres://postgres@127.0.0.1:5432/myapp_test?sslmode=disable" up
The only advantage of using dbmate -e TEST_DATABASE_URL over dbmate -u $TEST_DATABASE_URL is that the former takes advantage of dbmate's automatic .env file loading.
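As noted above, the username and password components of the URL must be URL-encoded. One way to encode a value from the shell, assuming Python 3 is available (the credentials below are made up for illustration):

```shell
# URL-encode a password before embedding it in DATABASE_URL
password='p@ss w0rd!'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$password")
echo "postgres://myuser:${encoded}@127.0.0.1:5432/myapp_development?sslmode=disable"
# -> postgres://myuser:p%40ss%20w0rd%21@127.0.0.1:5432/myapp_development?sslmode=disable
```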
When connecting to Postgres, you may need to add the sslmode=disable option to your connection string, as dbmate by default requires a TLS connection (some other frameworks/languages allow unencrypted connections by default).
DATABASE_URL="postgres://username:password@127.0.0.1:5432/database_name?sslmode=disable"
A socket or host parameter can be specified to connect through a unix socket (note: specify the directory only):
DATABASE_URL="postgres://username:password@/database_name?socket=/var/run/postgresql"
A search_path parameter can be used to specify the current schema while applying migrations, as well as for dbmate's schema_migrations table.
If the schema does not exist, it will be created automatically. If multiple comma-separated schemas are passed, the first will be used for the schema_migrations table.
DATABASE_URL="postgres://username:password@127.0.0.1:5432/database_name?search_path=myschema"
DATABASE_URL="postgres://username:password@127.0.0.1:5432/database_name?search_path=myschema,public"
DATABASE_URL="mysql://username:password@127.0.0.1:3306/database_name"
A socket parameter can be specified to connect through a unix socket:
DATABASE_URL="mysql://username:password@/database_name?socket=/var/run/mysqld/mysqld.sock"
SQLite databases are stored on the filesystem, so you do not need to specify a host. By default, files are relative to the current directory. For example, the following will create a database at ./db/database.sqlite3:
DATABASE_URL="sqlite:db/database.sqlite3"
To specify an absolute path, add a forward slash to the path. The following will create a database at /tmp/database.sqlite3:
DATABASE_URL="sqlite:/tmp/database.sqlite3"
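As a sketch of the convention (illustrative only; dbmate's own path handling may differ in detail), the path after the sqlite: prefix resolves like any filesystem path:

```shell
# Relative paths resolve against the current directory; a leading slash is absolute.
url="sqlite:db/database.sqlite3"
path="${url#sqlite:}"
case "$path" in
  /*) echo "absolute path: $path" ;;
  *)  echo "relative path: $(pwd)/$path" ;;
esac
```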
DATABASE_URL="clickhouse://username:password@127.0.0.1:9000/database_name"
To work with a ClickHouse cluster, there are four connection query parameters that can be supplied:
on_cluster - Indication to use cluster statements and a replicated migration table. (default: false) If this parameter is not supplied, other cluster-related query parameters are ignored.

DATABASE_URL="clickhouse://username:password@127.0.0.1:9000/database_name?on_cluster"
DATABASE_URL="clickhouse://username:password@127.0.0.1:9000/database_name?on_cluster=true"
cluster_macro (Optional) - Macro value to be used for ON CLUSTER statements and for the replicated migration table engine zookeeper path. (default: {cluster})

DATABASE_URL="clickhouse://username:password@127.0.0.1:9000/database_name?on_cluster&cluster_macro={my_cluster}"
replica_macro (Optional) - Macro value to be used for the replica name in the replicated migration table engine. (default: {replica})

DATABASE_URL="clickhouse://username:password@127.0.0.1:9000/database_name?on_cluster&replica_macro={my_replica}"
zoo_path (Optional) - The path to the migration table in ClickHouse/ZooKeeper. (default: /clickhouse/tables/<cluster_macro>/{table})

DATABASE_URL="clickhouse://username:password@127.0.0.1:9000/database_name?on_cluster&zoo_path=/zk/path/tables"
See other supported connection options.
Use the following format for DATABASE_URL when connecting to an actual BigQuery instance in GCP:
bigquery://projectid/location/dataset
projectid (mandatory) - Project ID
dataset (mandatory) - Dataset name within the Project
location (optional) - Where Dataset is created
NOTE: Follow this doc to set the GOOGLE_APPLICATION_CREDENTIALS environment variable for proper authentication.
Use the following format when connecting to a custom endpoint, e.g. the BigQuery Emulator:
bigquery://host:port/projectid/location/dataset?disable_auth=true
disable_auth (optional) - Pass true to skip authentication; use only for testing and connecting to an emulator.
Spanner support is currently limited to databases using the PostgreSQL Dialect, which must be chosen during database creation. For future Spanner with GoogleSQL support, see this discussion.
Spanner with the Postgres interface requires that the PGAdapter is running. Use the following format for DATABASE_URL, with the host and port set to where the PGAdapter is running:
DATABASE_URL="spanner-postgres://127.0.0.1:5432/database_name?sslmode=disable"
Note that specifying a username and password is not necessary, as authentication is handled by the PGAdapter (they will be ignored by the PGAdapter if specified).
Other options of the postgres driver are supported.
Spanner also doesn't allow DDL to be executed inside explicit transactions. You must therefore specify transaction:false on migrations that include DDL:
-- migrate:up transaction:false
CREATE TABLE ...

-- migrate:down transaction:false
DROP TABLE ...
Schema dumps are not currently supported, as pg_dump uses functions that are not provided by Spanner.
To create a new migration, run dbmate new create_users_table. You can name the migration anything you like. This will create a file db/migrations/20151127184807_create_users_table.sql in the current directory:
-- migrate:up

-- migrate:down
To write a migration, simply add your SQL to the migrate:up section:
-- migrate:up
create table users (
  id integer,
  name varchar(255),
  email varchar(255) not null
);

-- migrate:down
Note: Migration files are named in the format [version]_[description].sql. Only the version (defined as all leading numeric characters in the file name) is recorded in the database, so you can safely rename a migration file without having any effect on its current application state.
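The "leading numeric characters" rule can be illustrated with a small shell sketch (illustrative only, not dbmate's implementation):

```shell
# Extract the version from a migration file name: strip everything
# from the first non-digit character onward.
version_of() {
  basename "$1" | sed 's/[^0-9].*$//'
}

version_of db/migrations/20151127184807_create_users_table.sql
# -> 20151127184807
```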
Run dbmate up to run any pending migrations.
$ dbmate up
Creating: myapp_development
Applying: 20151127184807_create_users_table.sql
Applied: 20151127184807_create_users_table.sql in 123µs
Writing: ./db/schema.sql
Note: dbmate up will create the database if it does not already exist (assuming the current user has permission to create databases). If you want to run migrations without creating the database, run dbmate migrate.
Pending migrations are always applied in numerical order. However, dbmate does not prevent migrations from being applied out of order if they are committed independently (for example: if a developer has been working on a branch for a long time, and commits a migration which has a lower version number than other already-applied migrations, dbmate will simply apply the pending migration). See #159 for a more detailed explanation.
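A minimal illustration of that scenario, with made-up version numbers: since versions compare numerically, a pending migration can sort before one that is already applied, and --strict makes dbmate fail in this case rather than apply it:

```shell
applied_max=20240105000000  # highest already-applied version (made-up example)
pending=20240101000000      # pending migration committed from an older branch

if [ "$pending" -lt "$applied_max" ]; then
  echo "out of order: $pending sorts before already-applied $applied_max"
fi
```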
By default, dbmate doesn't know how to roll back a migration. In development, it's often useful to be able to revert your database to a previous state. To accomplish this, implement the migrate:down section:
-- migrate:up
create table users (
  id integer,
  name varchar(255),
  email varchar(255) not null
);

-- migrate:down
drop table users;
Run dbmate rollback to roll back the most recent migration:
$ dbmate rollback
Rolling back: 20151127184807_create_users_table.sql
Rolled back: 20151127184807_create_users_table.sql