20

I'm not sure how to define a `create schema foo` migration. My model looks like this (I'm using Flask-Migrate):

class MyTable(db.Model):
    __tablename__ = "my_table"
    __table_args__ = {"schema": "foo"}

    id = ColumnPrimKey()
    name = Column(Text, unique=True, nullable=False)

When I execute `manage db upgrade` I get a failure because the schema "foo" does not exist. How can I add a migration that creates the schema with SQLAlchemy and Alembic?

s5s

4 Answers

32

I accomplished this by editing the generated migration so that the `upgrade` function first runs:

op.execute("create schema foo")

And in the `downgrade` function:

op.execute("drop schema foo")

So the whole migration file looks something like:

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '6c82c972c61e'
down_revision = '553260b3e828'
branch_labels = None
depends_on = None


def upgrade():
    op.execute("create schema foo")
    ...  # autogenerated upgrade operations


def downgrade():
    ...  # autogenerated downgrade operations
    op.execute("drop schema foo")
buck
  • Is there some way to make this work with `sqlalchemy.event`? For example, `event.listen(Base.metadata, "before_create", CreateSchema("foo"))`. – maxcountryman Jul 18 '19 at 23:18
  • This solves the migration, but if you have another migration, it won't detect your schema again and will generate all the tables... That can be troublesome. Is there a way to detect the changes in a different schema? – Pstr Mar 19 '20 at 21:26
  • 1
    @Pstr You can specify the schema attribute on your metadata class either `metadata = Metadata(schema=)` or `Base = declarative_base(schema=)` – Bin H. Dec 03 '20 at 20:17
  • As an answer to my own question here, alembic will acknowledge the schemas if you set the argument `include_schemas=True,` in the `context.configure()` function of the alembic/env.py file. Not sure why this setting is not on as default, though – Pstr Aug 04 '21 at 13:19
  • I don't understand why `revision --autogenerate` doesn't detect and create these schemas. Is there some technical difficulty I'm not aware of? (The issue of where to put the `alembic_version` which dominates some of these conversations is a red herring to this discussion. This is about autogenerating schemas for tables to go into.) – LondonRob Apr 19 '23 at 10:46
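As a sketch of the `include_schemas=True` setting mentioned in the comments above (assuming the usual Alembic `env.py` boilerplate names `connectable` and `target_metadata`):

```python
# env.py (fragment) -- sketch only: let autogenerate scan non-default schemas
with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True,  # detect tables in schemas other than the default
    )
    with context.begin_transaction():
        context.run_migrations()
```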
3

A potential issue with the accepted answer is that, for an initial migration, Alembic might have trouble finding a place to create its alembic_version table. This is because op.execute("create schema foo") only runs after Alembic has already attempted to find the alembic_version table. The error pops up as:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidSchemaName) schema "foo" does not exist

The easy way out would be to have the alembic_version table live in another schema, by passing version_table_schema to context.configure() (docs).
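A minimal sketch of that option in `env.py` (the `connection` and `target_metadata` names are the usual Alembic boilerplate; `"public"` is just an example of a schema that already exists):

```python
# env.py (fragment) -- sketch only: keep alembic_version outside the "foo" schema
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    version_table_schema="public",  # any schema that exists before the migration runs
)
```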

However, in many cases one might want to simultaneously (i) create a schema from scratch as part of an initial migration (e.g. to set up a testing environment) and (ii) have the alembic_version table in the target schema. In these cases, an alternative approach is to delegate the schema creation to env.py. Example:

# env.py
from sqlalchemy import create_engine

from bar import create_database_with_schema_if_not_exists

SQLALCHEMY_DATABASE_URI = ...
schema = "foo"
engine = create_engine(SQLALCHEMY_DATABASE_URI)
create_database_with_schema_if_not_exists(engine, schema)
...
# bar.py
import sqlalchemy
from sqlalchemy_utils import create_database, database_exists

def create_database_with_schema_if_not_exists(engine, schema):
    # Create the database itself if it does not exist yet.
    if not database_exists(engine.url):
        create_database(engine.url)
    # Create the schema if it does not exist (SQLAlchemy 1.x-style API).
    if not engine.dialect.has_schema(engine, schema):
        engine.execute(sqlalchemy.schema.CreateSchema(schema))
swimmer
1

Another option is to add the following function to modify the MigrationScript directives in env.py:

from alembic import operations

def process_revision_directives(context, revision, directives):
    """Modify the MigrationScript directives to create schemata as required."""
    script = directives[0]
    # Collect the distinct schemas used by the tables (skip schema-less tables).
    for schema in frozenset(
        i.schema for i in target_metadata.tables.values() if i.schema
    ):
        script.upgrade_ops.ops.insert(
            0, operations.ops.ExecuteSQLOp(f"CREATE SCHEMA IF NOT EXISTS {schema}"))
        script.downgrade_ops.ops.append(
            operations.ops.ExecuteSQLOp(f"DROP SCHEMA IF EXISTS {schema} RESTRICT"))

then pass process_revision_directives=process_revision_directives to context.configure().
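For example, a sketch of the wiring in `env.py` (assuming the standard Alembic boilerplate names `connection` and `target_metadata`):

```python
# env.py (fragment) -- sketch only: hook the directive processor into autogenerate
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    process_revision_directives=process_revision_directives,
)
```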

auxsvr
1

Since the alembic_version table is created inside the new schema, none of the other approaches worked for me with the mssql+pyodbc dialect.

This one does:

with context.begin_transaction():
    context.execute(
        f"""
        IF NOT EXISTS (SELECT * FROM sys.schemas WHERE name = '{schema}')
        BEGIN 
          EXEC('CREATE SCHEMA {schema}') 
        END
        """
    )
    context.run_migrations()
HeyMan