[From Chen Min's blog] MySQL is widely used in database applications because of its small footprint and efficient operation. When I was developing a P2P application, I used MySQL to store the P2P nodes. Because a P2P application has tens of thousands of nodes, and the nodes change frequently, queries and insertions must be efficient. The following are three effective techniques I used to improve efficiency.
Use prepared statements (MYSQL_STMT) to bind queries
With a prepared statement, the server builds the query's syntax tree once in advance, so each subsequent execution skips parsing and runs directly, which improves query efficiency. This method suits scenarios where the query's structure is fixed but the query is executed very frequently.
The usage is as follows:
Create a MYSQL_STMT variable and bind it to the corresponding query string. Each question mark in the string is a placeholder for a parameter to be passed in; every placeholder must be bound to a variable.
To run the query, fill in each bound variable, then execute the MYSQL_STMT on an available connection handle.
The code is as follows:
// 1. Bind the INSERT statement once; later inserts reuse it.
bool CDBManager::BindInsertStmt(MYSQL *connecthandle)
{
    // One MYSQL_BIND entry per placeholder in the INSERT statement
    MYSQL_BIND insertbind[FEILD_NUM];

    if (m_stInsertParam == NULL)
        m_stInsertParam = new CHostCacheTable;
    m_stInsertStmt = mysql_stmt_init(connecthandle);

    // Build the statement text; each ? marks one bound parameter
    char insertSQL[SQL_LENGTH];
    strcpy(insertSQL, "insert into HostCache (SessionID, ChannelID, ISPType, "
                      "ExternalIP, ExternalPort, InternalIP, InternalPort) "
                      "values (?, ?, ?, ?, ?, ?, ?)");
    mysql_stmt_prepare(m_stInsertStmt, insertSQL, strlen(insertSQL));

    int param_count = mysql_stmt_param_count(m_stInsertStmt);
    if (param_count != FEILD_NUM)
        return false;

    // Fill in the bind structure array. m_stInsertParam is the structure
    // variable associated with this statement.
    memset(insertbind, 0, sizeof(insertbind));
    insertbind[0].buffer_type   = MYSQL_TYPE_STRING;
    insertbind[0].buffer_length = ID_LENGTH /*-1*/;
    insertbind[0].buffer        = (char *)m_stInsertParam->sessionid;
    insertbind[0].is_null       = 0;
    insertbind[0].length        = 0;

    insertbind[1].buffer_type   = MYSQL_TYPE_STRING;
    insertbind[1].buffer_length = ID_LENGTH /*-1*/;
    insertbind[1].buffer        = (char *)m_stInsertParam->channelid;
    insertbind[1].is_null       = 0;
    insertbind[1].length        = 0;

    insertbind[2].buffer_type = MYSQL_TYPE_TINY;
    insertbind[2].buffer      = (char *)&m_stInsertParam->ISPtype;
    insertbind[2].is_null     = 0;
    insertbind[2].length      = 0;

    insertbind[3].buffer_type = MYSQL_TYPE_LONG;
    insertbind[3].buffer      = (char *)&m_stInsertParam->externalIP;
    insertbind[3].is_null     = 0;
    insertbind[3].length      = 0;

    insertbind[4].buffer_type = MYSQL_TYPE_SHORT;
    insertbind[4].buffer      = (char *)&m_stInsertParam->externalPort;
    insertbind[4].is_null     = 0;
    insertbind[4].length      = 0;

    insertbind[5].buffer_type = MYSQL_TYPE_LONG;
    insertbind[5].buffer      = (char *)&m_stInsertParam->internalIP;
    insertbind[5].is_null     = 0;
    insertbind[5].length      = 0;

    insertbind[6].buffer_type = MYSQL_TYPE_SHORT;
    insertbind[6].buffer      = (char *)&m_stInsertParam->internalPort;
    insertbind[6].is_null     = 0;
    insertbind[6].length      = 0; // original repeated is_null here by mistake

    // Bind the array to the statement
    if (mysql_stmt_bind_param(m_stInsertStmt, insertbind))
        return false;
    return true;
}
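Once bound, each insert only fills the structure that the BIND array points at and calls mysql_stmt_execute; no SQL text is parsed again. A sketch of the execution step, assuming the member names from BindInsertStmt above (the function name InsertHostCache is hypothetical, and a live MySQL connection is required, so this is illustrative rather than runnable as-is):

```cpp
#include <cstdio>
#include <cstring>
#include <mysql.h>

// Hypothetical insert routine reusing the statement prepared in
// BindInsertStmt. The bound buffers read their values from
// m_stInsertParam at execute time, so copying the new row into that
// structure is all the per-insert work needed.
bool CDBManager::InsertHostCache(const CHostCacheTable &host)
{
    // Copy the new row into the structure the BIND array points at
    *m_stInsertParam = host;

    // Execute the prepared INSERT; parameters travel in binary form
    if (mysql_stmt_execute(m_stInsertStmt) != 0)
    {
        fprintf(stderr, "insert failed: %s\n",
                mysql_stmt_error(m_stInsertStmt));
        return false;
    }
    return true;
}
```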
Fetching records at random
In some database applications, we do not need all the records that meet a condition, only a random sample of them. This situation is common in the statistical analysis of data services, where a small sample is drawn from a large database.
There are two ways to do this:
1. The conventional method: query all records that meet the condition, then randomly select some of them. This is inefficient when many records match.
2. Use the LIMIT syntax: first query the number of records that meet the condition, then add LIMIT to the SQL statement so that only the required rows, starting at a random offset, are returned. Although this method needs two queries, it is far more efficient when the data volume is large.
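The second method can be sketched as follows. The HostCache table name comes from the earlier example; the helpers only pick a random offset and build the SQL text, so they can be checked without a live server (in practice you would first run SELECT COUNT(*) to obtain the total):

```cpp
#include <random>
#include <string>

// Pick a random starting offset so that `sample` consecutive rows fit
// inside `total` matching rows. Assumes total >= sample.
long randomOffset(long total, long sample)
{
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<long> dist(0, total - sample);
    return dist(gen);
}

// Build the second query of the method: fetch `sample` rows starting
// at `offset`. HostCache is the example table from the binding section.
std::string buildSampleQuery(long offset, long sample)
{
    return "SELECT * FROM HostCache LIMIT " +
           std::to_string(offset) + ", " + std::to_string(sample);
}
```

For example, after COUNT(*) reports 100000 rows, buildSampleQuery(randomOffset(100000, 50), 50) yields a statement that returns 50 rows from a random position. Note that LIMIT offset, n still skips over offset rows on the server, so for very large offsets a range condition on an indexed column is faster.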