Here's a solution that I use. It's a check that I have implemented as a test.
from alembic.autogenerate import compare_metadata
from alembic.command import upgrade
from alembic.config import Config
from alembic.runtime.migration import MigrationContext

from models.base import Base


def test_migrations_sane():
    """
    This test ensures that the models defined by SQLAlchemy match what the
    Alembic migrations think the database should look like. If these are
    different, then once we have constructed the database via Alembic (by
    running all migrations), Alembic will generate a set of changes to modify
    the database to match the schema defined by the SQLAlchemy models. If
    these are the same, the set of changes is going to be empty, which is
    exactly what we want to check.
    """
    engine = "SQLAlchemy DB Engine instance"
    try:
        with engine.connect() as connection:
            alembic_conf_file = "location of alembic.ini"
            alembic_config = Config(alembic_conf_file)
            upgrade(alembic_config, "head")
            mc = MigrationContext.configure(connection)
            diff = compare_metadata(mc, Base.metadata)
            assert diff == []
    finally:
        with engine.connect() as connection:
            # Resetting the DB
            connection.execute(
                """
                DROP SCHEMA public CASCADE;
                CREATE SCHEMA public;
                GRANT ALL ON SCHEMA public TO postgres;
                GRANT ALL ON SCHEMA public TO public;
                """
            )
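The core invariant the test above asserts is that building the schema from the migration history and building it from the models produce identical results, so `compare_metadata` returns an empty diff. The following is a minimal stdlib-only sketch of that same idea using `sqlite3` instead of Alembic and Postgres; the table and DDL statements are hypothetical, chosen only to illustrate the comparison.

```python
import sqlite3

# What the migration history produces, step by step (hypothetical DDL).
MIGRATIONS = [
    "CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT NOT NULL)",
    "ALTER TABLE account ADD COLUMN description TEXT",
]

# What the current models declare, in one shot (hypothetical DDL).
MODEL_DDL = [
    "CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT NOT NULL,"
    " description TEXT)",
]


def build_schema(ddl_statements):
    """Apply DDL to a fresh in-memory database and return a normalized
    description of every table: {table: [(name, type, notnull, pk), ...]}."""
    conn = sqlite3.connect(":memory:")
    for stmt in ddl_statements:
        conn.execute(stmt)
    tables = [
        row[0]
        for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        )
    ]
    schema = {
        table: [
            (col[1], col[2], col[3], col[5])  # name, type, notnull, pk
            for col in conn.execute(f"PRAGMA table_info({table})")
        ]
        for table in tables
    }
    conn.close()
    return schema


# An empty diff means migrations and models agree -- the same invariant the
# real test asserts via compare_metadata().
diff = [] if build_schema(MIGRATIONS) == build_schema(MODEL_DDL) else ["drift"]
assert diff == []
```

In the real test, Alembic's `compare_metadata` does the normalization and comparison for you against a live database; the sketch only shows why an empty diff is the success condition.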
For context, the original question: I'm trying to improve our CI pipeline to prevent commits from hitting the production branch when SQLAlchemy models are added or changed but no Alembic migration is written or generated by the commit author. What would be the best practice for implementing this check in CI?

There is also the alembic-autogen-check tool available at https://pypi.org/project/alembic-autogen-check/, although it requires a database to be created to check against.
Alembic can view the status of the database and compare it against the table metadata in the application, generating the "obvious" migrations based on that comparison. This is achieved using the --autogenerate option to the alembic revision command, which places so-called candidate migrations into our new migration file. We review and modify these by hand as needed, then proceed normally.

Suppose our MetaData contained a definition for the account table, and the database did not. Using alembic revision with --autogenerate would produce output like the example shown below.

A custom type referenced via __module__ directly gets a long and cumbersome name, which also implies that lots of imports would be needed to accommodate lots of types. For this reason, it is recommended that user-defined types used in migration scripts be made available from a single module; suppose we call it myapp.migration_types. Then, when we inevitably refactor our application to move MyCustomType somewhere else, we only need to modify the myapp.migration_types module instead of searching and replacing all instances within our migration scripts.
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None
from myapp.mymodel import Base

target_metadata = Base.metadata
def run_migrations_online():
    engine = engine_from_config(
        config.get_section(config.config_ini_section), prefix='sqlalchemy.')

    with engine.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()
$ alembic revision --autogenerate -m "Added account table"
INFO [alembic.context] Detected added table 'account'
Generating /path/to/foo/alembic/versions/27c6a30d7c24.py...done
"" "empty message Revision ID: 27 c6a30d7c24 Revises: None Create Date: 2011 - 11 - 08 11: 40: 27.089406 "" " # revision identifiers, used by Alembic. revision = '27c6a30d7c24' down_revision = None from alembic import op import sqlalchemy as sa def upgrade(): # # # commands auto generated by Alembic - please adjust!# # # op.create_table( 'account', sa.Column('id', sa.Integer()), sa.Column('name', sa.String(length = 50), nullable = False), sa.Column('description', sa.VARCHAR(200)), sa.Column('last_transaction_date', sa.DateTime()), sa.PrimaryKeyConstraint('id') ) # # # end Alembic commands # # # def downgrade(): # # # commands auto generated by Alembic - please adjust!# # # op.drop_table("account") # # # end Alembic commands # # #
from myapp.mymodel1 import Model1Base
from myapp.mymodel2 import Model2Base

target_metadata = [Model1Base.metadata, Model2Base.metadata]
In the securedrop/ directory, the file alembic.ini contains the configuration needed to run alembic commands, and the directory alembic/ contains the Python code that executes migrations.

Once you are satisfied with your new model, alembic can auto-generate migrations by comparing SQLAlchemy metadata to the schema of an up-to-date SQLite database. To generate a new migration, use the following steps. This will output a new migration into alembic/versions/. You will need to verify that this migration produced the desired output. While still in the dev-shell, you can run the command below to see the SQL that will be generated.

Database migrations are automatically applied to production instances via the command alembic upgrade head in the postinst script of the securedrop-app-code Debian package, so you do not need to worry about when or how these migrations are applied.
.
├── alembic
│   ├── env.py
│   ├── script.py.mako
│   └── versions
│       ├── 15ac9509fc68_init.py
│       └── faac8092c123_enable_security_pragmas.py
└── alembic.ini
cd securedrop/
./bin/dev-shell
source bin/dev-deps
maybe_create_config_py
./bin/new-migration 'my migration message'
alembic upgrade head --sql
tests/migrations/
├── __init__.py
├── migration_15ac9509fc68.py
└── migration_faac8092c123.py
class UpgradeTester:
    def __init__(self, config):
        '''This function MUST accept an argument named `config`.
        You will likely want to save a reference to the config in your
        class so you can access the database later.
        '''
        self.config = config

    def load_data(self):
        '''This function loads data into the database and filesystem.
        It is executed before the upgrade.
        '''
        pass

    def check_upgrade(self):
        '''This function is run after the upgrade and verifies the state
        of the database or filesystem. It MUST raise an exception if the
        check fails.
        '''
        pass


class DowngradeTester:
    def __init__(self, config):
        '''This function MUST accept an argument named `config`.
        You will likely want to save a reference to the config in your
        class so you can access the database later.
        '''
        self.config = config

    def load_data(self):
        '''This function loads data into the database and filesystem.
        It is executed before the downgrade.
        '''
        pass

    def check_downgrade(self):
        '''This function is run after the downgrade and verifies the state
        of the database or filesystem. It MUST raise an exception if the
        check fails.
        '''
        pass
Notice that Alembic has generated a number of things for us, including the alembic subdirectory and the alembic.ini file. Our migration scripts will appear inside the alembic/versions folder, but first we must tell Alembic how to talk to our database.

Finally, we are ready to execute the migration. Run alembic upgrade head in your terminal, which ensures that our database is up to date with the most recent revision.

Then add a model class for Song in the models.py file. Make sure to include a constructor that initializes each Song instance with a name and length, just as we specified in the migration.
Below the Artist model, include the following lines of code to configure our new database:
engine = create_engine('sqlite:///artists.db')
Base.metadata.create_all(engine)
- In your terminal run the following command to initialize the Alembic environment:
alembic init alembic
- Set "sqlalchemy.url" (line 38) in alembic.ini to point to our database. Our database name is the string we provided to the create_engine function when we first configured the database.

sqlalchemy.url = sqlite:///artists.db
- Run the command below to generate a migration script:
alembic revision -m "baseline"
The -m tag names our migration "baseline". Notice that a migration script has been created in the alembic/versions/ subdirectory for us! This is where we will write our migrations. Alembic created a unique id and empty upgrade and downgrade functions.
# alembic/versions/<auto_generated_revision_id>_baseline.py
from alembic import op
import sqlalchemy as sa


def upgrade():
    pass


def downgrade():
    pass
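The empty upgrade()/downgrade() pair is where the DDL for each revision goes, and the contract is that downgrade() must exactly undo upgrade(). In a real migration you would use Alembic's op.create_table / op.drop_table; the following is a stdlib-only sketch of that contract against sqlite3, with an artists table loosely following the tutorial's Artist model (the column list is an assumption).

```python
import sqlite3


def upgrade(conn):
    # Baseline: create the artists table (hypothetical columns).
    conn.execute(
        "CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
    )


def downgrade(conn):
    # Undo exactly what upgrade() did.
    conn.execute("DROP TABLE artists")


def table_names(conn):
    return [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]


conn = sqlite3.connect(":memory:")
upgrade(conn)
assert "artists" in table_names(conn)
downgrade(conn)
assert "artists" not in table_names(conn)
```

Keeping the two functions symmetric is what lets `alembic upgrade` and `alembic downgrade` walk the revision chain in either direction safely.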
How to modify the database schema

A CodeChecker server may still be running with the previous database schema version. Running the CodeChecker server --db-status all command with the new CodeChecker release will show you whether a database upgrade is needed for the new release; it checks the database status of every product database.
NOTE: Use the same arguments that were used to start the server when checking the status; they are required to find the configuration database in use.
$ CodeChecker server --db-status all
[15:01] - Checking configuration database ...
[15:01] - Database is OK.
-------------------------------------------------------------------------------------------------------------------------------------------------
Product endpoint | Database status                                | Database location              | Schema version in the database | Schema version in the package
-------------------------------------------------------------------------------------------------------------------------------------------------
Default          | Database is up to date.                        | ~/.codechecker/Default.sqlite  | 82ca43f05c10 (up to date)      | 82ca43f05c10
Default2         | Database schema mismatch! Possible to upgrade. | ~/.codechecker/Default2.sqlite | 82ca43f05c10                   | f1f7600168dc
-------------------------------------------------------------------------------------------------------------------------------------------------
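The status check above boils down to comparing a schema version stored inside each product database with the version the installed package expects, which is also how Alembic's own alembic_version table works. Here is a stdlib-only sketch of that comparison; the schema_version table, its column, and the version strings are illustrative, not CodeChecker's actual internal schema.

```python
import sqlite3

# Schema version this (hypothetical) package release expects.
PACKAGE_SCHEMA_VERSION = "f1f7600168dc"


def db_status(conn):
    """Compare the version recorded in the database with the version
    expected by the package, mimicking the --db-status output."""
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    if row is None:
        return "Database is empty."
    if row[0] == PACKAGE_SCHEMA_VERSION:
        return "Database is up to date."
    return "Database schema mismatch! Possible to upgrade."


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (version TEXT)")
conn.execute("INSERT INTO schema_version VALUES ('82ca43f05c10')")
print(db_status(conn))  # prints "Database schema mismatch! Possible to upgrade."
```

Storing the version inside the database itself is what makes the check robust: the server can refuse to start, or offer a migration, based solely on data it already controls.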
Schema upgrade can be done for each product independently, or for all of the products in sequence, with the CodeChecker server --db-upgrade-schema PRODUCT_NAME command.
$ CodeChecker server --db-upgrade-schema Default
[15:01] - Checking configuration database ...
[15:01] - Database is OK.
[15:01] - Preparing schema upgrade for Default
[WARNING] [15:01] - Please note after migration only newer CodeChecker versions can be used to start the server
[WARNING] [15:01] - It is advised to make a full backup of your run databases.
[15:01] - ========================
[15:01] - Upgrading: Default
[15:01] - Database schema mismatch: migration is available.
Do you want to upgrade to new schema? Y(es)/n(o) y
Upgrading schema ...
Done.
Database is OK.
[15:01] - ========================