A Detailed Guide to Using the SQLAlchemy Module in Python


SQLAlchemy Introduction

SQLAlchemy is open-source software for the Python programming language. It provides a SQL toolkit and object-relational mapping (ORM) tool and is released under the MIT license.

SQLAlchemy "Implements a complete enterprise-class persistence model for efficient and High-performance database access design using a simple Python language." The idea of sqlalchemy is that the scale and performance of SQL databases are important to the collection of objects, while the abstraction of object collections is important for tables and rows. Therefore, Sqlalchmey uses a data mapping model similar to the Java Hibernate, rather than an active record model used by other ORM frameworks. However, optional plug-ins such as elixir and declarative allow users to use declarative syntax.

There are 3 different ways to use SQLAlchemy:

Mode 1, using raw SQL;
Mode 2, using the SQLAlchemy SQL expression;
Mode 3, using ORM.

The first two ways can be collectively referred to as the core approach. This article explains the core approach to accessing the database without ORM.
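To make the core approach concrete before going further, here is a minimal sketch (the in-memory SQLite URL and the users table are placeholders, not part of the original examples): raw SQL is executed directly through an Engine, with no ORM classes involved.

from sqlalchemy import create_engine

engine = create_engine("sqlite:///:memory:")   # placeholder database
engine.execute("create table users (id integer, name varchar(32))")
engine.execute("insert into users (id, name) values (1, 'ed')")
print(engine.execute("select * from users").fetchall())   # [(1, 'ed')]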

For most applications, using SQLAlchemy is recommended. Even with raw SQL, SQLAlchemy offers the following benefits:
1. A built-in database connection pool. [note] With sqlalchemy + cx_Oracle you need to disable the connection pool, otherwise exceptions can occur; do this by setting poolclass to sqlalchemy.pool.NullPool.
2. Powerful logging.
3. Database neutrality, covering SQL parameter notation, LIMIT syntax, and so on.
4. In particular, for a where() condition such as column == your_value, if your_value is None the generated SQL is rendered as IS NULL (see the sketch after this list).
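To illustrate point 4, here is a small sketch using the SQL expression language (the users Table below is a stand-in; only the rendered WHERE clause matters):

from sqlalchemy import MetaData, Table, Column, Integer, String, select

metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(32)))

your_value = None
stmt = select([users]).where(users.c.name == your_value)
print(stmt)   # the WHERE clause renders as "users.name IS NULL", not "= NULL"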

Comparing SQLAlchemy SQL expressions with raw SQL:
1. SQL expressions are pure Python code and are easier to read, especially with the insert() method, where each field name appears paired with its value.
2. Raw SQL is more flexible than SQL expressions; if the SQL/DDL is complex, raw SQL has the advantage.
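Point 1 is easiest to see with insert(). A short sketch, reusing the engine from the first sketch and the users Table object from the previous one:

# Raw SQL: column names and values are written apart and can drift out of sync
engine.execute("insert into users (id, name) values (2, 'wendy')")

# SQL expression: each column name appears right next to its value
engine.execute(users.insert().values(id=3, name='jack'))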


SQLAlchemy Basic Operations

I. Installation of SQLAlchemy

This article uses MySQL as the example, so you need a machine with a MySQL database installed.
Install with Python's pip3: pip3 install sqlalchemy
After installation, check the version information:

import sqlalchemy
sqlalchemy.__version__

II. Connecting to the Database

In SQLAlchemy, a session is used to establish a conversation between the program and the database; loading and saving of all objects goes through a session object.

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Connect to the database through the pymysql driver; max_overflow=5 is the number of extra connections allowed
engine = create_engine("mysql+pymysql://root@127.0.0.1:3306/digchouti?charset=utf8", max_overflow=5)
Session = sessionmaker(bind=engine)

session = Session()

III. Creating Mappings (Creating Tables)

A mapping corresponds to a Python class that represents the structure of a table. The following creates a person table with two fields, id and name. In other words, tables are created through Python classes.

import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql+pymysql://root@127.0.0.1:3306/digchouti?charset=utf8", max_overflow=5)

# Generate an ORM base class; every table class must inherit from it
Base = declarative_base()

class Person(Base):
    __tablename__ = 'userinfo'

    id = Column(Integer, primary_key=True)
    name = Column(String(32))

    def __repr__(self):
        return "<Person(name='%s')>" % self.name
This code defines a table named userinfo with two columns, id and name.
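Note that defining the class only describes the mapping; to actually issue the CREATE TABLE against MySQL you also call create_all() on the Base metadata, exactly as the combined example further below does:

# Create all tables for models inheriting from Base (existing tables are left alone)
Base.metadata.create_all(engine)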

IV. Adding Data

Now that the table is defined, we certainly want to add data to it. The code is as follows:

# Create a Person object
person = Person(name='Zhang Forest')
# Add the Person object; it is not committed to the database yet
session.add(person)
# Commit to the database
session.commit()
Of course, you can also add multiple records at once:

session.add_all([
    Person(name='Zhang Forest'),
    Person(name='Aylin')
])
session.commit()

V. Querying Data

For querying data, SQLAlchemy provides the query() method. Here is a list of the usages I find most useful:

# Get all rows
session.query(Person).all()

# Get the row where name = 'Zhang Forest'
session.query(Person).filter(Person.name == 'Zhang Forest').one()

# Get the first row of the returned data
session.query(Person).first()

# Find all rows with id greater than 1
session.query(Person.name).filter(Person.id > 1).all()

# limit-style: take rows by slicing the result list
session.query(Person).all()[1:3]

# order by: sort by id from largest to smallest
session.query(Person).order_by(Person.id.desc())

# equal / like / in
query = session.query(Person)
query.filter(Person.id == 1).all()
query.filter(Person.id != 1).all()
query.filter(Person.name.like('%ay%')).all()
query.filter(Person.id.in_([1, 2, 3])).all()
query.filter(~Person.id.in_([1, 2, 3])).all()
query.filter(Person.name == None).all()

# and / or
from sqlalchemy import and_
from sqlalchemy import or_
query.filter(and_(Person.id == 1, Person.name == 'Zhang Forest')).all()
query.filter(Person.id == 1, Person.name == 'Zhang Forest').all()
query.filter(Person.id == 1).filter(Person.name == 'Zhang Forest').all()
query.filter(or_(Person.id == 1, Person.id == 2)).all()

# count
session.query(Person).count()

# update
session.query(Person).filter(Person.id > 2).update({'name': 'Zhang Forest'})
Querying covers a lot of ground, so this list is not exhaustive; the rest you can explore on your own.

That covers the pieces one by one; in case it is hard to put them together, here is everything combined into a single script:

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy import Integer, String, TIMESTAMP
from sqlalchemy import ForeignKey, UniqueConstraint, Index
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://root@127.0.0.1:3306/digchouti?charset=utf8", max_overflow=5)

Base = declarative_base()

class Person(Base):
    __tablename__ = 'userinfo'

    id = Column(Integer, primary_key=True)
    name = Column(String(32))

    def __repr__(self):
        return "<Person(name='%s')>" % self.name

# Create the table in the database so it can be used for commits; after this you can see it in the database
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
# Insert some data
session = Session()
session.add_all([
    Person(name='Zhang Forest'),
    Person(name='very handsome')
])
session.commit()
Advanced Usage of SQLAlchemy: Table Relationships

The sections above deal with a single table. Next comes the relationship between tables: one-to-many and many-to-many. Anyone familiar with databases knows foreign keys; they are how table relationships are established.

1. One-to-many foreign key (method 1)

The first method uses only ordinary operations and is easy to understand: after the first table is created and its data inserted, remember to commit; then, when creating data for the second table, you can reference the associated data from the first directly. The code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, ForeignKey, UniqueConstraint, Index, String
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy import create_engine


engine = create_engine('mysql+pymysql://root@127.0.0.1:3306/db1')

Base = declarative_base()

class Son(Base):
    __tablename__ = 'son'
    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    age = Column(String(32))
    # Create a foreign key that refers to the id of the father table
    father_id = Column(Integer, ForeignKey('father.id'))

class Father(Base):
    __tablename__ = 'father'
    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    age = Column(String(32))

Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

f1 = Father(name='Zhangyanlin', age='18')
session.add(f1)
session.commit()

w1 = Son(name='Xiaozhang1', age='3', father_id=1)
w2 = Son(name='Xiaozhang2', age='3', father_id=1)

session.add_all([w1, w2])
session.commit()
2. One-to-many foreign key (method 2): relationship

The second method is the same as the first, except that relationship() is used to express the foreign key relationship. The code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, ForeignKey, UniqueConstraint, Index, String
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy import create_engine


engine = create_engine('mysql+pymysql://root@127.0.0.1:3306/db1')

Base = declarative_base()

class Son(Base):
    __tablename__ = 'son'
    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    age = Column(String(32))

    father_id = Column(Integer, ForeignKey('father.id'))

class Father(Base):
    __tablename__ = 'father'
    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    age = Column(String(32))
    son = relationship('Son')

Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

f1 = Father(name='Zhangyanlin', age='18')

w1 = Son(name='Xiaozhang1', age='3')
w2 = Son(name='Xiaozhang2', age='4')
# The key point: bind the relationship here
f1.son = [w1, w2]
# Just add the father; the sons go in with him
session.add(f1)
session.commit()


Additional Notes

Common database connection Strings
==============================
# sqlite
sqlite_db = create_engine('sqlite:////absolute/path/database.db3')
sqlite_db = create_engine('sqlite://')            # in-memory database
sqlite_db = create_engine('sqlite:///:memory:')   # in-memory database
# PostgreSQL
pg_db = create_engine('postgresql://scott:tiger@localhost/mydatabase')
# MySQL
mysql_db = create_engine('mysql://scott:tiger@localhost/mydatabase')
# Oracle
oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')
# Oracle via TNS name
oracle_db = create_engine('oracle://scott:tiger@tnsname')
# MSSQL using ODBC data source names. pyodbc is the default driver.
mssql_db = create_engine('mssql://mydsn')
mssql_db = create_engine('mssql://scott:tiger@mydsn')
# Firebird
firebird_db = create_engine('firebird://scott:tiger@localhost/sometest.gdm')

==============================
On the lack of DB-API drivers for some less-common databases
==============================
Take Teradata, for example: there is no dedicated DB-API implementation, but an ODBC driver is certainly provided, otherwise it could not survive in the market. pypyodbc plus the ODBC driver should therefore be an option. pypyodbc keeps the same interface as pyodbc, and since it is a pure-Python implementation it should in theory work under CPython/IronPython/Jython. I also suspect SQLAlchemy could reach all such databases through this combination, though that remains to be verified.
pypyodbc home page: http://code.google.com/p/pypyodbc/
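A minimal sketch of that combination on the pure ODBC side (the DSN name, user and password are hypothetical placeholders; pypyodbc follows the standard DB-API interface, like pyodbc):

import pypyodbc

conn = pypyodbc.connect('DSN=teradata_dsn;UID=user;PWD=secret')   # hypothetical DSN
cursor = conn.cursor()
cursor.execute("select current_date")
print(cursor.fetchone())
cursor.close()
conn.close()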


==============================
# Connectionless execution vs. connection execution
==============================
1. Executing SQL directly on the engine is called connectionless execution.
2. Obtaining a conn via engine.connect() and then executing SQL through conn is called connection execution.
If you want to run in transaction mode, the connection style is recommended; if no transaction is involved, the two are equivalent.
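A short sketch of the two styles, assuming an Engine object named db like the one created in the examples further below (the SQL is illustrative):

# connectionless execution: the engine checks a connection out of the pool and returns it for you
db.execute("select * from users")

# connection execution: you hold the connection yourself, which also lets you control transactions
conn = db.connect()
try:
    conn.execute("select * from users")
finally:
    conn.close()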

==============================


# SQLAlchemy recommends wrapping raw SQL strings with the text() function


==============================


The benefits are huge:

1. Across different databases you can use a unified notation for SQL parameters: parameters are introduced with a colon. When execute() is invoked, the arguments are passed in as a dict.

from sqlalchemy import text

result = db.execute(text('select * from table where id < :id and typename = :type'), {'id': 2, 'type': 'user_table'})


2. If you do not specify the type of a parameter, it defaults to a string type. To pass a date parameter, declare it with the bindparams argument of text():

from sqlalchemy import DateTime, bindparam
from datetime import datetime, timedelta

date_param = datetime.today() + timedelta(days=-1 * 10)

sql = "delete from caw_job_alarm_log where alarm_time < :alarm_time_param"

t = text(sql, bindparams=[bindparam('alarm_time_param', type_=DateTime, required=True)])

db.execute(t, {"alarm_time_param": date_param})





With bindparam you can specify the parameter type either explicitly via type_ or implicitly via an initial value:

bindparam('alarm_time_param', type_=DateTime)   # specify the parameter type directly
bindparam('alarm_time_param', DateTime())       # specify the parameter type via an initial value


3. If you want to convert data types in the query result, specify them with the typemap parameter of text(). This is more flexible than MyBatis:

t = text("select id, name from users",
         typemap={
             'id': Integer,
             'name': Unicode
         })


4. There are further details; see the docstrings in sqlalchemy/sql/expression.py.


==============================
# Examples of accessing a database with SQLAlchemy
==============================
#-----------------------------------
# Get the database engine
#-----------------------------------
from sqlalchemy import create_engine
db = create_engine("sqlite:///:memory:", echo=True)


#-----------------------------------
#DDL
#-----------------------------------
Db.execute ("CREATE TABLE Users" (userid char (), username char (50))


#-----------------------------------
#DML
#-----------------------------------
resultproxy = db.execute("insert into users (userid, username) values ('user1', 'tony')")
resultproxy.rowcount   # rows affected by an UPDATE or DELETE statement


#-----------------------------------
#Query
#-----------------------------------
resultproxy = db.execute("select * from users")
resultproxy.close()    # after using a ResultProxy you need to close it
resultproxy.scalar()   # returns the value of a scalar query
The ResultProxy class is a wrapper around the DB-API cursor class (see sqlalchemy/engine/base.py);
it has a cursor attribute that refers to the original cursor.
Many ResultProxy methods correspond to cursor methods, while others extend them with additional properties/methods.
resultproxy.fetchall()
resultproxy.fetchmany()
resultproxy.fetchone()
resultproxy.first()
resultproxy.scalar()
resultproxy.returns_rows   # True if this ResultProxy returns rows
resultproxy.rowcount       # rows affected by an UPDATE or DELETE statement; it is not intended to give the number of rows returned by a SELECT

When you iterate over a ResultProxy, each row you get is a RowProxy object, and fields can be accessed very flexibly: by subscript, by field name, or even as attributes. rowproxy[0] == rowproxy['id'] == rowproxy.id, so a RowProxy already has the basic characteristics of a POJO-style class.
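A small sketch of those three access styles on one row, reusing the users table created in the DDL step above:

resultproxy = db.execute("select userid, username from users")
for rowproxy in resultproxy:
    # subscript, field name and attribute access all return the same value
    print(rowproxy[0], rowproxy['username'], rowproxy.username)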


#-----------------------------------
# Using transactions
#-----------------------------------
# SQLAlchemy supports transactions, and transactions can even be nested. The default is autocommit: each single SQL statement is committed automatically.

# - For more precise control of transactions, the simplest way is to use a connection and obtain a transaction object from it
connection = db.connect()
trans = connection.begin()
try:
    dosomething(connection)
    trans.commit()
except:
    trans.rollback()

# - Another way is to pass strategy='threadlocal' when creating the engine. This automatically creates a thread-local connection that is reused by subsequent connectionless execution, so that when handling a transaction you can drive it entirely through the engine object. For example:
# See http://hi.baidu.com/limodou/blog/item/83f4b2194e94604043a9ad9c.html
db = create_engine(connection, strategy='threadlocal')
db.begin()
try:
    dosomething()
except:
    db.rollback()
else:
    db.commit()

# - The default is autocommit: each single SQL statement commits automatically. You can switch connections and statements to manual-commit mode via the execution_options() method
conn.execution_options(autocommit=False)
In manual-commit mode, call conn.commit() when you want to commit.

#-----------------------------------
# How to use pyDbRowFactory
#-----------------------------------
# pyDbRowFactory is a general-purpose row factory I developed. It binds a cursor to your model (POJO) class, and the newer version also works with SQLAlchemy ResultProxy objects. The examples below show its most basic usage.
# Method 1: using the cursor object
cursor = resultproxy.cursor
from pyDbRowFactory import DbRowFactory
rowfactory = DbRowFactory(cursor, "your_module.your_row_class")
lst = rowfactory.fetchAllRowObjects()

# Method 2: using the ResultProxy directly
from pyDbRowFactory import DbRowFactory
factory = DbRowFactory.fromSqlAlchemyResultProxy(resultproxy, "your_module.your_row_class")
lst = factory.fetchAllRowObjects()

As mentioned earlier, SQLAlchemy wraps the cursor in a ResultProxy, and each row of a ResultProxy is a RowProxy object. RowProxy is easy to use: for the query select UserName from users,
each row of the result can be accessed through the RowProxy quite flexibly, rowproxy.username == rowproxy["UserName"] == rowproxy[0]. With RowProxy, in many cases there is no need to create a model (POJO) class for each table.

#-----------------------------------
# Connection pools
#-----------------------------------
SQLAlchemy chooses the default connection-pool algorithm by these rules:
1. For an in-memory SQLite connection, the default pool class is SingletonThreadPool, i.e. one connection per thread.
2. For a file-based SQLite connection, the default pool class is NullPool, i.e. no connection pool.
3. In all other cases, the default pool class is QueuePool.
Of course, you can also supply your own pool class:
db = create_engine('sqlite:///file.db', poolclass=YourPoolClass)
The create_engine() parameters related to the connection pool are:
- pool_recycle: defaults to -1; 7200 is recommended, meaning a connection idle for 7,200 seconds is automatically recycled, preventing it from being closed first by the DB server.
- pool_size=5: number of pooled connections, default 5. For production this value is usually too small and should be tuned to the actual workload.
- max_overflow=10: maximum number of extra connections allowed beyond pool_size, default 10. These extra connections are not returned to the pool but are actually closed after use.
- pool_timeout=30: timeout in seconds when waiting to obtain a connection, default 30.
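Putting those parameters together in one call (the MySQL URL is a placeholder):

db = create_engine('mysql+pymysql://user:password@127.0.0.1:3306/mydb',
                   pool_size=20,        # base number of pooled connections
                   max_overflow=10,     # extra connections allowed beyond pool_size
                   pool_timeout=30,     # seconds to wait for a free connection
                   pool_recycle=7200)   # recycle connections idle longer than 7200 seconds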

#-----------------------------------
# Log output
#-----------------------------------
# - If output to sys.stdout is enough, you do not need the logging module:
db = create_engine('sqlite:///file.db', echo=True)

# - If you want output to a file, use logging; note the log file has no rotation, so this is not recommended for production:
import logging
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
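One way to send that logger to a file is to attach a standard FileHandler (a sketch; the file name is arbitrary, and as noted there is no rotation):

import logging

logger = logging.getLogger('sqlalchemy.engine')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler('sqlalchemy.log'))   # plain file, no rotation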


#-----------------------------------
# Best practices for using SQLAlchemy Core
#-----------------------------------
I do not particularly like using the ORM, mainly because the learning cost is high and building complex queries with it is harder. Most of the time I use raw SQL and the SQL expression approach.
1. declarative is a newer SQLAlchemy extension; it can only be used with the ORM, not with SQL expressions.
2. To use the ORM, every table must have a primary key; with raw SQL and SQL expressions there is no such constraint.

Usage experience:
1. For queries, no matter how complex, use raw SQL directly; inserts, updates and deletes are single-table operations, for which SQL expressions are sufficient.
2. Concretely, for inserts/updates/deletes, a class such as User can hold a fixed _table member, _table = Table('users', metadata, autoload=True), and the operations are performed directly through that _table object.
3. For queries, if the result set maps to one entity object, use pyDbRowFactory to instantiate the objects. If the result set involves multiple entities, use the ResultProxy directly; each row of a ResultProxy already has basic object characteristics, so in most cases there is no need to map it to a particular class.
4. For relationships between tables, e.g. a 1:n relationship between the users and addresses tables, the User class can also have an AddressList member; after instantiating a User object, immediately query the addresses table to obtain the user's address list, completing the 1:n mapping in two steps (see the sketch below).
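A rough sketch of points 2 and 4 together, assuming the db engine and bound metadata from mydatabase.py below; the users and addresses tables and the get() helper are hypothetical:

from sqlalchemy.schema import Table

class User(object):
    _table = Table('users', metadata, autoload=True)        # fixed _table member, per point 2

    @classmethod
    def get(cls, user_id):
        return db.execute(cls._table.select().where(cls._table.c.id == user_id)).first()

# point 4: step 1 fetch the user, step 2 fetch the 1:n address list
addresses = Table('addresses', metadata, autoload=True)
user = User.get(1)
address_list = db.execute(
    addresses.select().where(addresses.c.user_id == user.id)).fetchall()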




SQLAlchemy is extremely flexible to use; what follows is simply a style of writing that I like, which is at least tidy in terms of layout.


Building an INSERT statement: _table.insert().values(f1=value1, f2=value2)

Building an UPDATE statement: _table.update().values(f1=newvalue1, f2=newvalue2).where(_table.c.f1 == value1).where(_table.c.f2 == value2)

Building a DELETE statement: _table.delete().where(_table.c.f1 == value1).where(_table.c.f2 == value2)

For batch insert/update/delete, put each row of data into a dict, collect the dicts into a list, and pass that list together with _table.insert()/update()/delete() as arguments to conn.execute():


conn.execute(_table.insert(), [
    {'user_id': 1, 'email_address': 'jack@yahoo.com'},
    {'user_id': 1, 'email_address': 'jack@msn.com'},
    {'user_id': 2, 'email_address': 'www@www.org'},
    {'user_id': 2, 'email_address': 'wendy@aol.com'},
])


Like the text() function for raw SQL, SQL expressions can also use bindparam: declare the parameters when calling insert()/update()/delete(), then pass the arguments in when conn.execute() runs.

d = _table.delete().where(_table.c.hiredate <= bindparam("hire_day", DateTime(), required=True))

conn.execute(d, {"hire_day": datetime.today()})


The where() method and the ORM's filter() accept the same kinds of parameters; the various SQL conditions below are all supported.
# equals:
where(_table.c.name == 'ed')
# not equals:
where(_table.c.name != 'ed')
# LIKE:
where(_table.c.name.like('%ed%'))
# IN:
where(_table.c.name.in_(['ed', 'wendy', 'jack']))
# NOT IN:
where(~_table.c.name.in_(['ed', 'wendy', 'jack']))
# IS NULL:
where(_table.c.name == None)
# IS NOT NULL:
where(_table.c.name != None)
# AND:
from sqlalchemy import and_
where(and_(_table.c.name == 'ed', _table.c.fullname == 'ed Jones'))
# AND can also be expressed by calling where() multiple times
where(_table.c.name == 'ed').where(_table.c.fullname == 'ed Jones')
# OR:
from sqlalchemy import or_
where(or_(_table.c.name == 'ed', _table.c.name == 'wendy'))
# match: the contents of the match parameter are database backend specific
where(_table.c.name.match('wendy'))


# ==========================
# python file: mydatabase.py
# ==========================
from sqlalchemy import create_engine
from sqlalchemy.schema import MetaData

# db = create_engine('sqlite:///:memory:', echo=True)
db = create_engine('sqlite:///c:/caw.sqlite.db', echo=True)
metadata = MetaData(bind=db)




# ==========================
# python file: dal.py
# ==========================
from sqlalchemy.sql.expression import text, bindparam
from sqlalchemy.sql import select, insert, delete, update
from sqlalchemy.schema import Table

from mydatabase import db, metadata
from pyDbRowFactory import DbRowFactory


class Caw_job(object):
    full_name = "dal.Caw_job"
    tablename = "caw_job"
    _table = Table(tablename, metadata, autoload=True)

    def __init__(self):
        self.app_domain = None
        self.job_code = None
        self.job_group = None
        self.cron_year = None
        self.cron_month = None
        self.cron_day = None
        self.cron_week = None
        self.cron_day_of_week = None
        self.cron_hour = None
        self.cron_minute = None
        self.description = None

    @classmethod
    def getEntity(cls, app_domain, jobcode):
        sql = "select * from caw_job where app_domain=:app_domain and job_code=:job_code"
        resultproxy = db.execute(text(sql), {'app_domain': app_domain,
                                             'job_code': jobcode})
        rowfactory = DbRowFactory.fromSqlAlchemyResultProxy(resultproxy, cls.full_name)
        return rowfactory.fetchOneRowObject()

    def insert(self):
        i = self._table.insert().values(
            app_domain=self.app_domain,
            job_code=self.job_code,
            job_group=self.job_group,
            cron_year=self.cron_year,
            cron_month=self.cron_month,
            cron_day=self.cron_day,
            cron_week=self.cron_week,
            cron_day_of_week=self.cron_day_of_week,
            cron_hour=self.cron_hour,
            cron_minute=self.cron_minute,
            description=self.description,
        )
        db.execute(i)

    def update(self):
        u = self._table.update().values(
            app_domain=self.app_domain,
            job_code=self.job_code,
            job_group=self.job_group,
            cron_year=self.cron_year,
            cron_month=self.cron_month,
            cron_day=self.cron_day,
            cron_week=self.cron_week,
            cron_day_of_week=self.cron_day_of_week,
            cron_hour=self.cron_hour,
            cron_minute=self.cron_minute,
            description=self.description,
        ).where(self._table.c.app_domain == self.app_domain) \
         .where(self._table.c.job_code == self.job_code)
        db.execute(u)

    def delete(self):
        d = self._table.delete().where(self._table.c.app_domain == self.app_domain) \
            .where(self._table.c.job_code == self.job_code)
        db.execute(d)

# -----------------------------------
# Using sqlalchemy.ext.declarative to generate tables; all such tables must have primary keys.
# In the early stages of a system the data model often needs frequent adjustment, and it is easier to modify the table structure this way.
# -----------------------------------
# python file: models.py
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Boolean, DateTime, Float

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base()

# Ddl_caw_job is used only to create database objects; it has no other purpose
class Ddl_caw_job(Base):
    __tablename__ = "caw_job"
    job_name = Column(String, primary_key=True)
    job_group = Column(String)

def init_db():
    Base.metadata.create_all(bind=engine)
