A tutorial on Python's Flask framework: working with databases

Running a Python script from the command line

In this section we will write some simple database management scripts. First, let's review how to execute a Python script from the command line.

If you are using Linux or OS X, the script file needs to be given execute permission. For example:

chmod a+x script.py

The first line of the script (the shebang) tells the system which interpreter should run it. Once the script has execute permission, it can be run directly from the command line, like this:

The code is as follows:

./script.py

On Windows this is not possible; instead, you must invoke the Python interpreter explicitly and pass the script as an argument:

The code is as follows:

flask\Scripts\python script.py

To avoid typing the Python interpreter path each time, you can add your microblog/flask/Scripts folder to the system path, so that the interpreter is found automatically.

From now on, the commands shown will use the concise Linux/OS X form. If you are on Windows, remember to convert them accordingly.

Working with databases in Flask

We will use the Flask-SQLAlchemy extension to manage the database. This extension is a wrapper around the SQLAlchemy project, which provides an object-relational mapper (ORM).

An ORM allows a database application to work with objects instead of raw SQL statements. The operations performed on the objects are translated into database commands by the ORM. This means we do not need to write SQL ourselves; we let Flask-SQLAlchemy generate and execute the SQL for us.
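
To make the translation idea concrete, here is a rough sketch of what an ORM does behind the scenes, written with only the standard-library sqlite3 module (this is an illustration of the concept, not Flask-SQLAlchemy's actual implementation; the User class and users table here are made up for the example):

```python
import sqlite3

# A plain object standing in for an ORM model instance.
class User(object):
    def __init__(self, nickname, email):
        self.nickname = nickname
        self.email = email

def insert_object(conn, table, obj):
    # Translate the object's attributes into an INSERT statement,
    # roughly what an ORM does when you add an object to a session.
    fields = sorted(vars(obj))
    columns = ', '.join(fields)
    placeholders = ', '.join('?' for _ in fields)
    sql = 'INSERT INTO %s (%s) VALUES (%s)' % (table, columns, placeholders)
    conn.execute(sql, [getattr(obj, f) for f in fields])
    return sql

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (nickname TEXT, email TEXT)')
sql = insert_object(conn, 'users', User('john', 'john@email.com'))
row = conn.execute('SELECT nickname, email FROM users').fetchone()
```

The object-oriented call on top produces the SQL string at the bottom; a real ORM adds type mapping, identity tracking and much more on top of this basic idea.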


Most database tutorials cover creating and using a database, but do not adequately address the problem of updating the database as the application grows. Typically you end up deleting the old database and creating a new one to apply the changes, losing all the data. And if that data was laborious to create, you are forced to write export and import scripts.

Fortunately, we have a better plan.

We can use SQLAlchemy-migrate to handle database migration updates. It adds a little overhead to database startup, but that is a small price to pay, because we will never again have to worry about migrating data by hand.

Enough theory, let's get started!


Our application will use a SQLite database. SQLite is the best choice for small applications: each database is stored in a single file.
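
You can see the single-file property for yourself with the standard-library sqlite3 module (the app.db filename below just mirrors the one we will use later; the table is illustrative):

```python
import os
import sqlite3
import tempfile

# Create a database file in a temporary directory.
dbfile = os.path.join(tempfile.mkdtemp(), 'app.db')
conn = sqlite3.connect(dbfile)  # creates the file if it does not exist
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)')
conn.commit()
conn.close()

# The entire database is this one ordinary file on disk.
exists = os.path.isfile(dbfile)
```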

Add the new configuration entries to our configuration file (file config.py):

import os
basedir = os.path.abspath(os.path.dirname(__file__))

SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'app.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')

SQLALCHEMY_DATABASE_URI is required by the Flask-SQLAlchemy extension. This is the path of our database file.

SQLALCHEMY_MIGRATE_REPO is the folder where SQLAlchemy-migrate stores its data files.

Finally, when we initialize the application we also need to initialize the database. Here is our upgraded init script (file app/__init__.py):

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config.from_object('config')
db = SQLAlchemy(app)

from app import views, models

Note that the init script has changed in two places: we now create a db object that represents our database, and we import a new module called models. We will write that module next.

Database model

The data we store in the database is represented by a collection of classes referred to as the database models. The ORM layer maps objects of these classes to rows in the corresponding database tables.

Let's create a model that represents our users. Using the WWW SQL Designer tool, we created a diagram representing the users table:

The id field is usually present in all models and is used as the primary key. Each user in the database is assigned a unique id value. Fortunately this is done automatically; we just need to provide the id field.
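
The automatic assignment of ids is something the database itself does for an integer primary key. A quick sketch with plain sqlite3 (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)')

# No id is given in the INSERTs; SQLite assigns 1, 2, ... automatically.
cur = conn.execute("INSERT INTO users (nickname) VALUES ('john')")
first_id = cur.lastrowid
cur = conn.execute("INSERT INTO users (nickname) VALUES ('susan')")
second_id = cur.lastrowid
```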

The nickname and email fields are defined as strings, with a specified maximum length to save database storage space.

The role field is defined as an integer, which we will use to identify which users are administrators and which are not.

Now that we have defined the structure of the users table, translating it into code is fairly straightforward (file app/models.py):

from app import db

ROLE_USER = 0
ROLE_ADMIN = 1

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    nickname = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    role = db.Column(db.SmallInteger, default=ROLE_USER)

    def __repr__(self):
        return '<User %r>' % (self.nickname)

The User class defines the fields we just designed as class variables. Each field is created as an instance of the db.Column class, which takes the field type as an argument, plus other optional arguments such as those indicating whether a field is unique or indexed.

The __repr__ method tells Python how to print a class object so that we can use it for debugging purposes.
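
The effect of __repr__ can be seen with any plain class. This toy pair of classes (not the model above) shows what the interpreter prints with and without it:

```python
class PlainUser(object):
    def __init__(self, nickname):
        self.nickname = nickname

class NiceUser(object):
    def __init__(self, nickname):
        self.nickname = nickname

    def __repr__(self):
        # Used whenever the object is displayed, e.g. while debugging.
        return '<User %r>' % (self.nickname)

plain = repr(PlainUser('john'))  # default: a class name and memory address
nice = repr(NiceUser('john'))    # our readable form
```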

Create a database

With the configuration and models in place, we can now create the database file. The SQLAlchemy-migrate package comes with command-line tools and APIs to create databases in a way that allows them to be easily upgraded later. I find the command-line tool a bit awkward to use, so instead I wrote my own Python scripts that invoke the migration APIs.

Here is a script that creates the database (file db_create.py):

#!flask/bin/python
from migrate.versioning import api
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO
from app import db
import os.path

db.create_all()
if not os.path.exists(SQLALCHEMY_MIGRATE_REPO):
    api.create(SQLALCHEMY_MIGRATE_REPO, 'database repository')
    api.version_control(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
else:
    api.version_control(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO, api.version(SQLALCHEMY_MIGRATE_REPO))

Note that this script is completely generic: all the application-specific pathnames are imported from the configuration file. When you start your own project, you can copy the script to your app's directory and it will work unchanged.

To create the database, you just need to run this script (note that the Windows form of the command is slightly different):

The code is as follows:

./db_create.py

After you run the command, you will have a new app.db file: an empty SQLite database created with migration support. You will also have a db_repository directory with a few files inside; this is where SQLAlchemy-migrate stores its data. Note that the repository is not regenerated if it already exists, which lets us recreate the database automatically while keeping the existing migration history if we ever lose it.

First time migration

Now that we have defined our model and connected it to the database, let's make a first attempt at changing the application's database structure: we will migrate from an empty database to a database that can store users.

To make a migration I use another little Python helper script (file db_migrate.py):

#!flask/bin/python
import imp
from migrate.versioning import api
from app import db
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO

migration = SQLALCHEMY_MIGRATE_REPO + ('/versions/%03d_migration.py' % (api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO) + 1))
tmp_module = imp.new_module('old_model')
old_model = api.create_model(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
exec old_model in tmp_module.__dict__
script = api.make_update_script_for_model(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO, tmp_module.meta, db.metadata)
open(migration, "wt").write(script)
api.upgrade(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
print 'New migration saved as ' + migration
print 'Current database version: ' + str(api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO))

The script looks complicated, but it actually doesn't do much. SQLAlchemy-migrate creates a migration by comparing the structure of the database (read from the app.db file) against the structure of our models (read from app/models.py). The differences between the two are recorded as a migration script inside the migration repository. The migration script knows how to apply or undo a migration, so it is always possible to upgrade or downgrade the database format.
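
The heart of this comparison can be sketched in a few lines: read the column list of the live table, read the column list the model wants, and diff them. This toy version (plain sqlite3, not the real SQLAlchemy-migrate API) only detects added columns, but it shows the idea:

```python
import sqlite3

def table_columns(conn, table):
    # PRAGMA table_info returns one row per column; the name is field 1.
    return [row[1] for row in conn.execute('PRAGMA table_info(%s)' % table)]

def diff_to_migration(conn, table, model_columns):
    # Compare the live table against the desired model and emit the
    # ALTER statements that would bring the table up to date.
    existing = table_columns(conn, table)
    return ['ALTER TABLE %s ADD COLUMN %s' % (table, col)
            for col in model_columns if col not in existing]

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)')

# Suppose the model now also has email and role fields.
script = diff_to_migration(conn, 'users', ['id', 'nickname', 'email', 'role'])
```

The real tool additionally handles removed columns, type changes, and generates both the upgrade and the downgrade direction of the script.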

While I have not had problems generating migrations automatically with the script above, it can sometimes be hard for SQLAlchemy-migrate to determine what changed between the old and new formats. To make the changes easier to detect, I never rename existing fields; I limit myself to adding or removing models and fields, or changing the type of existing fields. And I always review the generated migration script to verify that it is correct.

It goes without saying that you should back up your database before attempting a migration, in case something goes wrong. Also, never run a freshly generated migration script on a production database for the first time; run it on a development database first.

Let's go ahead and record our migration:

The code is as follows:

./db_migrate.py
The script will print out the following information:

New migration saved as db_repository/versions/001_migration.py
Current database version: 1

The output shows where the migration script was saved, along with the current database version. An empty database is at version 0; after migrating to add the users table, the database is at version 1.

Database Upgrades and rollbacks

At this point you may be wondering why it is worth the extra trouble of recording database migrations.

Imagine that you have an application on the development machine and a copy of the application is running on the server.

Say that in the next version of your product the models have been modified, for example a new table was added. Without migrations, you would need to work out how to change the database format yourself, on both the development machine and the server, and that could be a lot of work.

But if you have database migration support, then when you publish a new version of the app to the production server, you just record a new migration, copy the migration scripts to the server, and run a simple script that applies the changes. The database upgrade can be done with this Python script (file db_upgrade.py):

#!flask/bin/python
from migrate.versioning import api
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO

api.upgrade(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
print 'Current database version: ' + str(api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO))

When you run this script, the database is upgraded to the latest revision by applying the migration scripts stored in the repository.

Rolling the database back to an older format is less common, but just in case, SQLAlchemy-migrate supports it as well (file db_downgrade.py):

#!flask/bin/python
from migrate.versioning import api
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO

v = api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
api.downgrade(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO, v - 1)
print 'Current database version: ' + str(api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO))

This script rolls the database back one revision. You can roll back several revisions by running the script as many times as needed.
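
The upgrade/downgrade mechanics boil down to a version number stored alongside the database and, for each migration, a way to apply it and a way to undo it. A minimal sketch of the idea (not SQLAlchemy-migrate's actual implementation; the tables and statements are illustrative):

```python
import sqlite3

# Each migration knows how to apply itself and how to undo itself.
migrations = {
    1: {'upgrade': 'CREATE TABLE users (id INTEGER PRIMARY KEY)',
        'downgrade': 'DROP TABLE users'},
    2: {'upgrade': 'CREATE TABLE posts (id INTEGER PRIMARY KEY)',
        'downgrade': 'DROP TABLE posts'},
}

def upgrade(conn, version):
    # Apply every migration above the current version, in order.
    while version < max(migrations):
        version += 1
        conn.execute(migrations[version]['upgrade'])
    return version

def downgrade(conn, version):
    # Undo one migration, like db_downgrade.py does.
    if version > 0:
        conn.execute(migrations[version]['downgrade'])
        version -= 1
    return version

conn = sqlite3.connect(':memory:')
v = upgrade(conn, 0)      # apply every pending migration
v = downgrade(conn, v)    # roll back one revision
```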

Database relationships

Relational databases are good at storing relationships between data items. If a user writes a post, the user's record lives in the users table and the post lives in the posts table. The most efficient way to record who wrote each post is to link the two related records together.

Once a link between a user and a post is established, there are two kinds of queries we may want to use. The trivial one: given a post, find which user wrote it. The more complex one is the reverse: given a user, find all the posts that he or she wrote. Flask-SQLAlchemy will help us with both kinds of queries.

Let's expand our database to store posts, so that we can see relationships in action. Going back to the database design tool, we create a posts table:

The posts table has the required id, the body of the post, and a timestamp. Nothing new there, but the user_id field deserves an explanation.

We want to link users to the posts that they write. We do this by adding a field to the post that contains the id of the user who wrote it; this field is called a foreign key. Our database design tool shows a foreign key as a link between the foreign key and the id field of the table it points to. This kind of link is called a one-to-many relationship: one user writes many posts.
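
In raw SQL terms, the one-to-many link is just the user_id column plus a join. A sketch with sqlite3 (the data is illustrative) showing both query directions described above:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)')
conn.execute('CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, '
             'user_id INTEGER REFERENCES users (id))')
conn.execute("INSERT INTO users (nickname) VALUES ('john')")
conn.execute("INSERT INTO posts (body, user_id) VALUES ('my first post!', 1)")

# Given a post, find its author (the trivial direction)...
author = conn.execute('SELECT u.nickname FROM users u, posts p '
                      'WHERE p.user_id = u.id AND p.id = 1').fetchone()[0]

# ...and given a user, find all of their posts (the reverse direction).
bodies = [r[0] for r in conn.execute(
    'SELECT body FROM posts WHERE user_id = 1')]
```

With the ORM we will not write these joins by hand; this is what Flask-SQLAlchemy generates for us from the relationship definition.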

Let's modify the models to reflect these changes (file app/models.py):

from app import db

ROLE_USER = 0
ROLE_ADMIN = 1

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    nickname = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    role = db.Column(db.SmallInteger, default=ROLE_USER)
    posts = db.relationship('Post', backref='author', lazy='dynamic')

    def __repr__(self):
        return '<User %r>' % (self.nickname)

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.String(140))
    timestamp = db.Column(db.DateTime)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))

    def __repr__(self):
        return '<Post %r>' % (self.body)

We have added a Post class, which represents the posts written by users. The user_id field in Post is initialized as a foreign key, so Flask-SQLAlchemy knows that this field links to a user.

Note that we also added a new field to the User class called posts, defined as a db.relationship. This is not an actual database field, which is why it does not appear in our database diagram. For a one-to-many relationship, the db.relationship field is normally defined only on the "one" side. With this relationship we get a list of the posts written by each user. The first argument to db.relationship is the class of the "many" side. The backref argument defines a field that is added to the objects of the "many" class and points back to the "one" object; in our case we can use post.author to get the User instance that created a post. Don't worry if these details don't make much sense yet; we will see examples later in this article.
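
As a mental model, the backref behaves roughly as if it were implemented like this toy pair of classes (a deliberate simplification of what Flask-SQLAlchemy really does with lazy relationships): assigning an author both sets post.author and makes the post reachable from author.posts.

```python
class User(object):
    def __init__(self, nickname):
        self.nickname = nickname
        self.posts = []          # the "one" side of the relationship

class Post(object):
    def __init__(self, body, author):
        self.body = body
        self.author = author     # the backref on the "many" side
        author.posts.append(self)

u = User('john')
p = Post('my first post!', author=u)
```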

Let's record these changes in another migration. Simply run the migration script again:

The code is as follows:

./db_migrate.py
After running the script, you will get the following output:

New migration saved as db_repository/versions/002_migration.py
Current database version: 2

It isn't necessary to record every small change to the database models as a separate migration; a migration is usually only recorded when a new version is released. The important thing is to understand how the migration system works.

Application Practice

We have spent a lot of time defining our database, but we still haven't seen how it works, because the application doesn't have any database code yet. So let's try out our brand-new database in the Python interpreter.

Go ahead and start Python. On Linux or OS X:

The code is as follows:

flask/bin/python

Under Windows:

The code is as follows:

flask\Scripts\python

Then, at the Python prompt, enter the following:

>>> from app import db, models
>>>

This brings our database module and models into memory.

Let's create a new user:

>>> u = models.User(nickname='john', email='john@email.com', role=models.ROLE_USER)
>>> db.session.add(u)
>>> db.session.commit()
>>>

Changes to the database are done in the context of a session. Multiple changes can be accumulated in a session, and once all of them have been registered, a single db.session.commit() applies them atomically. If an error occurs at any point in the session, a call to db.session.rollback() returns the database to its previous state. If neither commit nor rollback is invoked, the system rolls the session back by default. Sessions guarantee that the database is never left in an inconsistent state.
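
The same commit/rollback pattern exists in plain DB-API transactions, which can make the idea easier to see; this sketch uses sqlite3 directly rather than the SQLAlchemy session (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)')
conn.commit()

# Accumulate several changes, then make them permanent atomically.
conn.execute("INSERT INTO users (nickname) VALUES ('john')")
conn.execute("INSERT INTO users (nickname) VALUES ('susan')")
conn.commit()

# A change that is rolled back leaves the database untouched.
conn.execute("INSERT INTO users (nickname) VALUES ('oops')")
conn.rollback()

count = conn.execute('SELECT COUNT(*) FROM users').fetchone()[0]
```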

Let's add another user:

>>> u = models.User(nickname='susan', email='susan@email.com', role=models.ROLE_USER)
>>> db.session.add(u)
>>> db.session.commit()
>>>

Now we can query the user information:

>>> users = models.User.query.all()
>>> print users
[<User u'john'>, <User u'susan'>]
>>> for u in users:
...     print u.id, u.nickname
...
1 john
2 susan
>>>

Here we used the query member, which is available on all model classes. Notice how the id fields were assigned automatically.

Here is another way to query: if we know a user's id, we can retrieve that user as follows:

>>> u = models.User.query.get(1)
>>> print u
<User u'john'>

Now let's add a post:

>>> import datetime
>>> u = models.User.query.get(1)
>>> p = models.Post(body='my first post!', timestamp=datetime.datetime.utcnow(), author=u)
>>> db.session.add(p)
>>> db.session.commit()

Here we set the timestamp in the UTC time zone. All timestamps stored in the database will be in UTC: our users may write posts from anywhere in the world, so we need a uniform time unit. In a later tutorial we will learn how to display these times in each user's local time zone.
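
The store-UTC, display-local pattern can be sketched with only the standard library; the UTC+9 offset below is just an illustrative example of a user's time zone:

```python
from datetime import datetime, timezone, timedelta

# Store timestamps in UTC: one uniform unit for users all over the world.
stored = datetime(2014, 1, 1, 12, 0, tzinfo=timezone.utc)

# Convert to a user's local time zone only when displaying.
tokyo = timezone(timedelta(hours=9))  # UTC+9, illustrative
local = stored.astimezone(tokyo)      # 12:00 UTC becomes 21:00 local
```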

You may have noticed that we did not set the user_id field of the Post; instead we stored a User object in the author field. The author field is a virtual field added by Flask-SQLAlchemy to establish the relationship; we defined its name earlier, in the backref argument to db.relationship in the model. With this information, the ORM layer knows how to fill in user_id for us.

To complete this session, let's look at more database queries that can be made:

# get all posts from a user
>>> u = models.User.query.get(1)
>>> print u
<User u'john'>
>>> posts = u.posts.all()
>>> print posts
[<Post u'my first post!'>]

# obtain author of each post
>>> for p in posts:
...     print p.id, p.author.nickname, p.body
...
1 john my first post!

# a user that has no posts
>>> u = models.User.query.get(2)
>>> print u
<User u'susan'>
>>> print u.posts.all()
[]

# get all users in reverse alphabetical order
>>> print models.User.query.order_by('nickname desc').all()
[<User u'susan'>, <User u'john'>]

The best place to learn about the many other database query options is the Flask-SQLAlchemy documentation.

Before we close the session, let's delete the test user and post we created, so that we can start with a clean database in the next section:

>>> users = models.User.query.all()
>>> for u in users:
...     db.session.delete(u)
...
>>> posts = models.Post.query.all()
>>> for p in posts:
...     db.session.delete(p)
...
>>> db.session.commit()
>>>


This has been a long chapter. We have learned the basics of working with a database, but we haven't yet incorporated the database into our application. In the next section we will put what we've learned into practice by implementing the user login system.

In the meantime, if you haven't been writing the application along with the tutorial, you may want to download the current version, microblog-0.4.zip. Note that the zip file does not include a database, but it does include the helper scripts. Use db_create.py to create a new database and db_upgrade.py to upgrade your database to the latest revision.
