1. Indexes:
An index is like the lookup table at the front of a dictionary: consult the directory first, then jump straight to the data.
Principle: the values of the indexed column are stored in an auxiliary structure that is fast to search, such as a B-tree or a hash table.
Classification of indexes:
Normal index: only speeds up lookups
Unique index: speeds up lookups and constrains the column so values cannot repeat; NULL is allowed
A composite (multi-column) unique index can also be built
Primary key index: similar to a unique index, but NULL is not allowed; a table can have only one primary key
Composite primary keys are supported
Composite index: several columns combined into one index
A plain multi-column index (name, email): of limited use
A composite unique index (name, email): useful
Full-text index: tokenizes long text fields; MySQL's built-in support for Chinese is poor.
In practice Solr, Lucene, or Sphinx is commonly used for full-text search instead.
2. Creating an index:
* Normal index: makes lookups faster
Verify with EXPLAIN: type = ALL means a full table scan; type = ref means the lookup went through an index
* Unique index: created with the UNIQUE keyword
type = const: very efficient lookup
* Composite index: follows the leftmost-prefix principle;
querying on the right-hand column alone does not use the index
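A sketch of the leftmost-prefix rule, using SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN (the output columns differ, but the index-search vs. full-scan distinction is the same; the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tb1 (nid INTEGER PRIMARY KEY, name TEXT, email TEXT, extra TEXT)")
cur.execute("CREATE INDEX ix_name_email ON tb1 (name, email)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

# Leftmost column of the composite index: the index is usable
left = plan("SELECT extra FROM tb1 WHERE name = 'x'")
# Rightmost column alone: the index is skipped, full table scan
right = plan("SELECT extra FROM tb1 WHERE email = 'x'")
print(left)   # mentions ix_name_email
print(right)  # a plain table scan
conn.close()
```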
3. Using indexes correctly:
@* LIKE '%xx' does not use the index; LIKE 'xx%' does
@* Functions on the indexed column: SELECT COUNT(1) FROM tb2 WHERE reverse(name) = 'xxx' does not use the index; SELECT COUNT(1) FROM tb2 WHERE name = reverse('xxx') does
@* OR: SELECT * FROM tb1 WHERE nid = 1 OR email = 'xxxx' uses indexes only if both nid and email are indexed
@* Type mismatch: SELECT * FROM tb1 WHERE name = 'xxxx' uses the index; SELECT * FROM tb1 WHERE name = 111 does not (implicit type conversion)
@* != and >: do not use the index, except when the column is the primary key; other comparisons do use the index
@* ORDER BY: SELECT email FROM tb1 ORDER BY name DESC does not use the index; SELECT name FROM tb1 ORDER BY name DESC does
@* Composite index, leftmost prefix: e.g. with an index on (name, email), filtering on name and email uses the index, name alone uses it, email alone does not
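The function-on-column rule above can be sketched in sqlite3 (lower() stands in for the reverse() example, and the names are made up; the principle is the same: wrapping the column in a function forces a scan, wrapping the constant still allows an index search):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tb2 (nid INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE INDEX ix_name ON tb2 (name)")

def plan(sql):
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

# Function applied to the column: every row must be evaluated -> scan
bad = plan("SELECT COUNT(1) FROM tb2 WHERE lower(name) = 'xxx'")
# Function applied to the constant: the bare column is compared -> index search
good = plan("SELECT COUNT(1) FROM tb2 WHERE name = lower('XXX')")
print(bad)   # a SCAN, no index search
print(good)  # a SEARCH via the index
conn.close()
```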
Covering index: the query can be answered entirely from the index, without reading the table rows
Index merge: two single-column indexes used together in one query
Composite index: one index built over two (or more) columns
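A covering-index sketch in sqlite3 (made-up names; SQLite conveniently labels the case in its plan output, MySQL shows "Using index" in EXPLAIN's Extra column instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tb1 (nid INTEGER PRIMARY KEY, name TEXT, email TEXT)")
cur.execute("CREATE INDEX ix_name ON tb1 (name)")

def plan(sql):
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

# Only the indexed column is selected: the index alone answers the query
covered = plan("SELECT name FROM tb1 WHERE name = 'x'")
# A non-indexed column is selected: the table must be visited as well
not_covered = plan("SELECT email FROM tb1 WHERE name = 'x'")
print(covered)      # reports a COVERING INDEX
print(not_covered)  # an ordinary index search plus table access
conn.close()
```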
4. Tip: adding LIMIT can let a query finish quickly even without an index, because the scan stops as soon as enough rows are found.
5. Notes on using MySQL:
* Avoid SELECT *
* Use COUNT(1) or COUNT(column) instead of COUNT(*)
* Prefer CHAR over VARCHAR when creating tables
* Put fixed-length fields first in the column order
* Prefer a composite index over multiple single-column indexes (for multi-condition queries)
* Keep indexes as short as possible (otherwise the index file grows very large; a prefix length can be given when the index is created)
* Use joins (JOIN) instead of subqueries
* Keep the condition types consistent when joining tables
* Do not index columns with few distinct values; e.g. gender is unsuitable
* Create indexes in moderation; do not over-index
* When importing or exporting large amounts of data, drop the indexes first and recreate them afterwards to improve efficiency
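One caveat on the COUNT guideline above: COUNT(column) skips NULLs, while COUNT(1) and COUNT(*) count every row, so they are not interchangeable. A minimal sqlite3 sketch (made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, email TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [("alice", "[email protected]"), ("bob", None), ("carol", None)])

# COUNT(1) counts every row; COUNT(email) ignores rows where email IS NULL
count_all = cur.execute("SELECT COUNT(1) FROM users").fetchone()[0]
count_email = cur.execute("SELECT COUNT(email) FROM users").fetchone()[0]
print(count_all, count_email)  # 3 1
conn.close()
```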
6. Pagination:
SELECT * FROM tb LIMIT 200000, 5;
First optimization:
SELECT * FROM tb1 WHERE nid > (SELECT nid FROM tb1 LIMIT 200000, 1) LIMIT 5;
This is still not very efficient, since the subquery has to step past 200000 rows
Second optimization:
e.g. 1000 records, showing 10 per page
A. Previous/next page only (using the nid boundary value passed by the app)
SELECT * FROM tb1 WHERE nid < 9989 ORDER BY nid DESC LIMIT 10;
B. Prev, 100 99 98 97 96 95 94, Next (jumping several pages at once)
SELECT * FROM tb1 WHERE nid < 9989 ORDER BY nid DESC LIMIT 10;
# 9989 ... 9959
SELECT nid FROM (SELECT nid FROM tb1 WHERE nid < 9989 ORDER BY nid DESC LIMIT ...) AS a ORDER BY nid ASC LIMIT 1;
SELECT * FROM tb1 WHERE nid < <latest boundary nid> ORDER BY nid DESC LIMIT 10;
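The two pagination styles above can be compared in sqlite3 (made-up table and values): OFFSET pagination forces the engine to step over every skipped row, while keyset pagination ("WHERE nid < boundary") jumps straight to the page via the primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tb1 (nid INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO tb1 (nid, val) VALUES (?, ?)",
                [(i, "row%d" % i) for i in range(1, 101)])

# OFFSET pagination: skip 10 rows, then take 10 (page 2, newest first)
page_offset = cur.execute(
    "SELECT nid FROM tb1 ORDER BY nid DESC LIMIT 10 OFFSET 10").fetchall()

# Keyset pagination: the app remembers the last nid of the previous page (91 here)
page_keyset = cur.execute(
    "SELECT nid FROM tb1 WHERE nid < 91 ORDER BY nid DESC LIMIT 10").fetchall()

print(page_offset == page_keyset)  # True: same page, cheaper access path
conn.close()
```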
7. Execution plan:
Syntax: EXPLAIN + the SQL being queried
The id column numbers the (sub)queries; a larger id is executed first
8. Slow query log:
* Configuration file (Windows: my-default.ini):
slow_query_log = OFF
long_query_time = 2
slow_query_log_file = /xxxxx/log
log_queries_not_using_indexes = OFF  -- when ON, also logs queries that do not use an index
View the current configuration:
SHOW GLOBAL VARIABLES LIKE '%query%';
Set at runtime:
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = ON;
* Viewing the MySQL slow log
e.g. mysqldumpslow options:
-s  sort order, by the field we care about (e.g. t total time, at average time, c count, l lock time)
-r  reverse the sort order
-a  show all values instead of abstracting numbers and strings
-g  only include statements matching the given pattern
-l  do not subtract lock time from the total time
9. SQLAlchemy
SQLAlchemy is an ORM framework for Python.
SQLAlchemy cannot operate the database by itself; it relies on third-party drivers such as pymysql. Its dialect layer talks to the DB-API driver, and depending on the configuration it calls a different driver to carry out the database operations.
1) Underlying drivers (dialect URLs)
e.g.
MySQL-Python:    mysql+mysqldb://<user>:<password>@
pymysql:         mysql+pymysql://<username>:<password>@
MySQL-Connector: mysql+mysqlconnector://<user>:<password>@
cx_Oracle:       oracle+cx_oracle://user:[email protected]:port/dbname[?key=value&key=value...]
For more, see: http://docs.sqlalchemy.org/en/latest/dialects/index.html

Coding demo (pymysql):

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pymysql

# Create a connection
conn = pymysql.connect(host='10.10.8.12', port=3306, user='sysadmin',
                       passwd='password01!', db='Q2')
# Create a cursor (DictCursor returns rows as dicts)
cursor = conn.cursor(cursor=pymysql.cursors.DictCursor)

# Execute SQL; the return value is the number of affected rows
# effect_row = cursor.execute("UPDATE users SET name = 'John1'")
effect_row = cursor.execute("INSERT INTO users (name, password) VALUES ('john33', '123.123')")
conn.commit()

# cursor.lastrowid gives the auto-increment id of the inserted row
print(cursor.lastrowid)

# u = input("pls input username>>>>")
# p = input("pls input passwd>>>>>>")
# effect_row = cursor.execute("INSERT INTO users (name, password) VALUES (%s, %s)", (u, p))

# To insert data in bulk:
# effect_row = cursor.executemany("INSERT INTO users (name, password) VALUES (%s, %s)",
#                                 [('John3', 'john3333'), ('John5', 'john555')])
# print(effect_row)
# Commit, or the newly created or modified data is not saved
# conn.commit()

# Query data
# effect_row = cursor.execute("SELECT * FROM users")
# effect_row = cursor.execute("SELECT * FROM users WHERE nid > %s ORDER BY nid DESC", (5,))
# fetchall() gets all rows; each call to fetchone() gets the next row
# result = cursor.fetchall()
# print(result)
# result = cursor.fetchone()
# print(result)

# Moving the cursor: mode='relative' moves relative to the current position,
# mode='absolute' moves to an absolute position
# cursor.scroll(-1, mode='relative')
# cursor.scroll(2, mode='relative')
# cursor.scroll(0, mode='absolute')
# result = cursor.fetchone()
# print(result)

# Close the cursor
cursor.close()
# Close the connection
conn.close()

DAY39 --- MySQL basics (three)