"JDBC" in-depth understanding of statement and PreparedStatement


I. Choosing between Statement and PreparedStatement objects

How much the JDBC driver can optimize depends on which features are used, so the choice between PreparedStatement and Statement depends on how you intend to use them. For a SQL statement that is executed only once, Statement is usually the better choice; conversely, if the SQL statement will be executed many times, PreparedStatement is the better choice.
The first execution of a PreparedStatement is expensive; its performance benefit shows up in the subsequent repeated executions. For example, suppose I query the employee table by employee ID with a prepared statement. The JDBC driver sends a network request to the database to parse and optimize the query, and the execution itself generates another network request. Reducing network traffic is the ultimate goal of a JDBC driver, so if my program needs the query only once during its run, Statement is the right choice: with a Statement, the same query produces only one network round trip to the database.
This guideline becomes a little more involved when a PreparedStatement pool is in use. With a PreparedStatement pool, you can still use Statement for a query that is one-off and will never be executed again. If a query is rarely executed by your code but may be executed again through the statement pool of the connection pool, use PreparedStatement. In the same situation without a statement pool, use Statement.
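As a minimal sketch of this guideline (the employees table, its id and salary columns, and the method names are assumed here for illustration), a one-off query can go through a plain Statement, while a query repeated with different parameter values is prepared once and reused:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class StatementChoiceSketch {

        // One-off query: a plain Statement avoids the extra prepare round trip.
        static void printHeadcount(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM employees")) {
                if (rs.next()) {
                    System.out.println("Headcount: " + rs.getLong(1));
                }
            }
        }

        // Repeated query: prepare once, then execute many times with different parameters.
        static void printSalaries(Connection conn, long[] employeeIds) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT salary FROM employees WHERE id = ?")) {
                for (long id : employeeIds) {
                    ps.setLong(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            System.out.println(id + " -> " + rs.getInt(1));
                        }
                    }
                }
            }
        }
    }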

II. Using PreparedStatement batching

When you update a large amount of data, preparing an INSERT statement and executing it many times causes many network round trips. To reduce the number of JDBC calls and improve performance, you can use PreparedStatement's addBatch() method to send multiple operations to the database at once. Compare the following two examples.
Example 1: Executing a PreparedStatement multiple times

    PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO employees VALUES (?, ?, ?)");
    for (int n = 0; n < 100; n++) {
        ps.setString(1, name[n]);
        ps.setLong(2, id[n]);
        ps.setInt(3, salary[n]);
        ps.executeUpdate();
    }

Example 2: Using addBatch()

    PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO employees VALUES (?, ?, ?)");
    for (int n = 0; n < 100; n++) {
        ps.setString(1, name[n]);
        ps.setLong(2, id[n]);
        ps.setInt(3, salary[n]);
        ps.addBatch();
    }
    ps.executeBatch();

In Example 1, a PreparedStatement is used to execute the INSERT statement many times. The 100 insert operations require a total of 101 network round trips: 1 round trip to prepare the statement and 100 round trips for the individual executions. In Example 2, using the addBatch() method for the same 100 insert operations requires only 2 network round trips: 1 to prepare the statement and 1 for the executeBatch() command. Although batching costs the database some extra CPU cycles, performance improves because the number of network round trips drops. Remember, the greatest JDBC performance gains come from reducing the network traffic between the JDBC driver and the database.
Note: the Oracle 10g JDBC driver limits the maximum batch size to 16,383. If addBatch() accumulates more rows than this, an "invalid batch value" exception is thrown at executeBatch(). Therefore, if you are using Oracle 10g, keep the batch size under this limit until the bug is fixed.
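One common way to respect such a limit is to flush the batch in chunks, as in the following sketch (the chunk size of 1,000 and the employees table are illustrative assumptions; pick any value below the driver's cap):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ChunkedBatchSketch {

        // Illustrative chunk size, kept well below the 16,383 limit mentioned above.
        private static final int BATCH_LIMIT = 1000;

        static void insertEmployees(Connection conn, String[] names, long[] ids, int[] salaries)
                throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO employees VALUES (?, ?, ?)")) {
                for (int n = 0; n < names.length; n++) {
                    ps.setString(1, names[n]);
                    ps.setLong(2, ids[n]);
                    ps.setInt(3, salaries[n]);
                    ps.addBatch();
                    if ((n + 1) % BATCH_LIMIT == 0) {
                        ps.executeBatch();   // flush this chunk before the cap is reached
                    }
                }
                ps.executeBatch();           // flush any remaining rows
            }
        }
    }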

III. Selecting the appropriate cursor type

Choose the cursor type that best fits your application. This section looks at the performance characteristics of three cursor types.
The forward-only cursor provides the best performance when all records are read sequentially from a table; there is no faster way to fetch a table's data. However, this cursor type cannot be used when the program must process rows in a non-sequential order.
Using a scroll-insensitive cursor with JDBC is an ideal choice for applications that do not need to see concurrent changes in the database but do need to move the cursor both forward and backward through the result set. This type of cursor fetches all of the data on the first request (which may be a lot of data, unless the JDBC driver fetches it in a 'lazy' mode) and stores it on the client. The first request can therefore be very slow, especially when long data is requested; subsequent requests cause no network round trips (or only limited network traffic in 'lazy' mode) and are processed quickly. Because the first request is slow, a scroll-insensitive cursor should not be used to fetch a single row of data. Developers should also avoid the scroll-insensitive cursor when long data is returned, because it can exhaust memory. Some scroll-insensitive cursor implementations avoid these problems by caching the data in a temporary table in the database, but most cache it in the application.
The scroll-sensitive cursor, sometimes called a keyset-driven cursor, uses an identifier such as the database's ROWID. Each time the cursor moves within the result set, the row data is re-fetched using that identifier. Because every request causes a network round trip, performance can be slow. Returning result rows in a non-sequential order does nothing to improve this.
To illustrate, consider a program that would normally return 1,000 rows of data. At execute time, or when the first row is requested, the JDBC driver does not run the SELECT statement supplied by the program. Instead, it replaces the query's select list with a key identifier, for example ROWID, executes that modified query, and retrieves all 1,000 key values from the database. Each subsequent request for a result row makes the JDBC driver look up the corresponding key value in its local cache, build a 'WHERE ROWID = ?' clause, execute the modified query, and finally fetch the single data row from the server.
When the application cannot afford the memory to cache data the way a scroll-insensitive cursor does, a scroll-sensitive cursor can be used as a substitute for a dynamic scrollable cursor.
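For reference, the cursor type is requested when the Statement is created. The sketch below (the employees table is assumed) shows the java.sql constants corresponding to the three cursor types discussed above; whether a driver supports each type natively or emulates it depends on the driver:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CursorTypeSketch {

        static void demo(Connection conn) throws SQLException {
            // Forward-only, read-only: the default and the fastest way to read rows sequentially.
            try (Statement fwd = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = fwd.executeQuery("SELECT name FROM employees")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }

            // Scroll-insensitive: data is cached on the client after an expensive first fetch,
            // so arbitrary movement (absolute, previous) is cheap afterwards.
            try (Statement scroll = conn.createStatement(
                    ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = scroll.executeQuery("SELECT name FROM employees")) {
                if (rs.absolute(10)) {           // jump straight to row 10
                    System.out.println(rs.getString(1));
                }
            }

            // Scroll-sensitive (keyset-driven where supported): each move can cost a round trip,
            // but the rows reflect committed changes in the database.
            try (Statement sensitive = conn.createStatement(
                    ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = sensitive.executeQuery("SELECT name FROM employees")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }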

IV. Using efficient getter methods

JDBC provides several methods for getting data out of a ResultSet, such as getInt(), getString(), and getObject(). The getObject() method is the most generic and gives the worst performance, because the JDBC driver must do extra processing on the value's type to map it to a specific object. So always use the getter that matches the specific data type.
To improve performance further, supply the column index rather than the column name when fetching data, for example getString(1), getLong(2), and getInt(3) instead of the column names. If the column index is not specified, network traffic is unaffected, but the cost of conversion and lookup rises. For example, with getString("foo") the JDBC driver may have to capitalize the column name (if necessary) and then compare it against the list of column names to locate the "foo" column. Whenever you can, using the column index directly saves a lot of processing time.
For example, suppose you have a ResultSet of 100 rows and 15 columns, and the column names are not needed. You are interested in three columns: EmployeeName (string), EmployeeNumber (long integer), and Salary (integer). If you specify getString("EmployeeName"), getLong("EmployeeNumber"), and getInt("Salary"), each column name must be converted to the matching case of the database metadata and then looked up, over and over again. If instead you use getString(1), getLong(2), and getInt(15), performance improves significantly.
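As a small illustration (here the SELECT list is narrowed to just the three columns of interest, so the indexes are 1, 2, and 3; the table and column names are assumptions carried over from the example above):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class GetterSketch {

        static void printEmployees(Connection conn) throws SQLException {
            // The order of the SELECT list fixes the column indexes used below.
            String sql = "SELECT EmployeeName, EmployeeNumber, Salary FROM employees";
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    // Typed getters by index: no name lookup and no generic Object mapping.
                    String name = rs.getString(1);
                    long number = rs.getLong(2);
                    int salary = rs.getInt(3);

                    // Slower alternatives: rs.getObject(1) adds type-mapping work, and
                    // rs.getString("EmployeeName") adds a name-to-index lookup on every call.
                    System.out.println(name + " " + number + " " + salary);
                }
            }
        }
    }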

V. Retrieving auto-generated key values

Many databases provide a hidden pseudo-column holding a unique key value for each row of a table. Using this kind of column in a query is usually the fastest way to access a row, because the pseudo-column typically reflects the physical location of the data on disk. Before JDBC 3.0, an application could obtain the pseudo-column's value only by executing a SELECT statement immediately after inserting the data.
Example 3: Before JDBC 3.0

    // Insert the row
    int rowcount = stmt.executeUpdate(
            "INSERT INTO localgeniuslist (name) VALUES ('Karen')");
    // Now get the disk location (ROWID) of the newly inserted row
    ResultSet rs = stmt.executeQuery(
            "SELECT rowid FROM localgeniuslist WHERE name = 'Karen'");

There are two main drawbacks to fetching the pseudo-column this way. First, it requires a separate query that must be sent over the network and executed. Second, since the query condition may not be a unique key of the table, it can match more than one row; in the example above, the query could return several pseudo-column values, and the program cannot tell which one belongs to the row that was just inserted.
(Translator's note: how the ROWID is returned differs with the level of database support. In SQL Server, the ID of the last inserted record can be retrieved with a query such as SELECT @@IDENTITY.)
An optional feature of the JDBC 3.0 specification provides the ability to retrieve the automatically generated key values of a record just inserted into a table.
Example 4: With JDBC 3.0

    // Insert the row and return the key value
    int rowcount = stmt.executeUpdate(
            "INSERT INTO localgeniuslist (name) VALUES ('Karen')",
            Statement.RETURN_GENERATED_KEYS);
    // Get the generated key value
    ResultSet rs = stmt.getGeneratedKeys();

Now the program has a unique ID that can be used as a query condition for fast access to the data row, even when the table has no primary key.
This way of obtaining auto-generated key values gives JDBC developers flexibility and makes access to the data faster.
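Example 4 stops at obtaining the ResultSet of keys. A small completing sketch, assuming a single numeric key in column 1 (the exact key column layout is driver-dependent), reads the value back so it can be used as a query condition:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class GeneratedKeySketch {

        static long insertAndReturnKey(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(
                        "INSERT INTO localgeniuslist (name) VALUES ('Karen')",
                        Statement.RETURN_GENERATED_KEYS);
                try (ResultSet keys = stmt.getGeneratedKeys()) {
                    if (keys.next()) {
                        // A single numeric key in column 1 is assumed; the actual
                        // key column layout depends on the driver and the table.
                        return keys.getLong(1);
                    }
                }
            }
            throw new SQLException("No generated key was returned");
        }
    }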

VI. Choosing the right data types

Sending and receiving certain data types can be expensive. When you design a schema, choose the data types that can be processed most efficiently. For example, integers are processed faster than floating-point or decimal numbers. Floating-point numbers are defined in terms of the database's internal format, usually a packed format, and the data must be unpacked and converted to a different format so that it can travel over the database's wire protocol.
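As an illustration of the cost difference (the salary_int and salary_dec columns are hypothetical, not from the original), binding an INTEGER column is a direct, inexpensive operation, while a DECIMAL or floating-point column requires extra conversion work on every row:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class DataTypeSketch {

        // Hypothetical columns for illustration:
        //   salary_int INTEGER        -- cheap to transfer and convert
        //   salary_dec DECIMAL(12,2)  -- stored in a packed format that must be
        //                                unpacked and converted on the way to the client
        static void updateSalary(Connection conn, long employeeId, int salaryInt, BigDecimal salaryDec)
                throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE employees SET salary_int = ?, salary_dec = ? WHERE id = ?")) {
                ps.setInt(1, salaryInt);         // integer: direct, inexpensive binding
                ps.setBigDecimal(2, salaryDec);  // decimal: extra conversion work per row
                ps.setLong(3, employeeId);
                ps.executeUpdate();
            }
        }
    }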

VII. Retrieving the ResultSet

Because database systems offer limited support for scrollable cursors, many JDBC drivers do not implement them natively. Unless you are certain that the database supports a scrollable-cursor result set, do not call rs.last() and rs.getRow() to find out how many rows it contains: when the JDBC driver emulates the scrollable cursor, calling rs.last() forces the driver to fetch all the way to the last row over the network. Instead, count the rows while iterating over the ResultSet, or use a SELECT query with the COUNT function to get the number of rows.
In general, do not write code that depends on the number of rows in a result set, because the driver must fetch the entire result set to know how many rows the query will return.
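A minimal sketch of the two alternatives mentioned above (the employees table and its name column are assumed):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RowCountSketch {

        // Preferred: let the database do the counting and return a single row.
        static long countWithSql(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM employees")) {
                return rs.next() ? rs.getLong(1) : 0L;
            }
        }

        // Alternative: count while iterating a forward-only result set you read anyway,
        // instead of calling rs.last() / rs.getRow() on an emulated scrollable cursor.
        static long countWhileIterating(Connection conn) throws SQLException {
            long rows = 0;
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name FROM employees")) {
                while (rs.next()) {
                    rows++;                      // process each row here as needed
                }
            }
            return rows;
        }
    }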
