When I first started iOS development I learned that SQLite is the standard choice for a cache database, and I have relied on it ever since. But I had never dealt with reading and writing large amounts of data, so I paid little attention to its performance. Recently, a scenario involving bulk batch reads and writes pushed me to do some performance tuning, and the result was roughly a 10x speedup.
The scenario is roughly this: every time the app launches, it pulls some data from the server and syncs the updates into two tables in the local database, inserting rows that don't exist yet and updating the fields of those that do. Sometimes there are only a few dozen records; sometimes there are thousands.
Because the cached data may be read and written concurrently, all database operations go through a background serial sync queue. I instrumented the key database-write code and found that flushing 1000 records to the database could take 30 seconds, with disk writes hovering around 1.5 MB/s. Although this never blocked the main thread, that cost was unacceptable even in the background.
The core of the database operation looked roughly like this:

for each of 1000 records {
    select -> update or insert   (table 1)
    select -> update or insert   (table 2)
}
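The per-row pattern above can be sketched with Python's built-in sqlite3 module (the `cache` table and its key/value columns are made up for illustration). `isolation_level=None` puts the connection in autocommit mode, so every write is its own implicit transaction and is flushed individually, which mirrors calling executeUpdate: once per row:

```python
import sqlite3

# Autocommit mode: every INSERT/UPDATE is committed on its own,
# which is what made the original loop so slow on an on-disk database.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")

rows = [("k%d" % i, "v%d" % i) for i in range(1000)]

for key, value in rows:
    # select -> update or insert, one record at a time
    exists = conn.execute("SELECT 1 FROM cache WHERE key = ?", (key,)).fetchone()
    if exists:
        conn.execute("UPDATE cache SET value = ? WHERE key = ?", (value, key))
    else:
        conn.execute("INSERT INTO cache (key, value) VALUES (?, ?)", (key, value))

print(conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0])  # -> 1000
```

On a real on-disk database, each of those 1000 writes pays its own transaction and sync overhead.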
Since two tables are involved, each record costs two rounds of this. Profiling showed that the SELECTs took very little time; almost all of it went into UPDATE or INSERT ([FMDatabaseQueue executeUpdate:]), because each one is written to disk. My first thought was to splice all the SQL statements together and execute them in one go, and then I remembered that SQLite supports transactions. So I tried FMDB's transaction API: call [db beginTransaction] before the loop and [db commit] after it, wrapping the whole batch.
The logic after adding the transaction:

beginTransaction
for each of 1000 records {
    select -> update or insert
    select -> update or insert
}
commit
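The same loop wrapped in a single transaction, again as a Python sqlite3 sketch over a hypothetical `cache` table; the explicit BEGIN/COMMIT play the role of [db beginTransaction] and [db commit], so all 1000 writes are committed in one disk sync:

```python
import sqlite3

# isolation_level=None: the BEGIN/COMMIT below are the only transaction boundaries.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")

rows = [("k%d" % i, "v%d" % i) for i in range(1000)]

conn.execute("BEGIN")  # plays the role of [db beginTransaction]
for key, value in rows:
    exists = conn.execute("SELECT 1 FROM cache WHERE key = ?", (key,)).fetchone()
    if exists:
        conn.execute("UPDATE cache SET value = ? WHERE key = ?", (value, key))
    else:
        conn.execute("INSERT INTO cache (key, value) VALUES (?, ?)", (key, value))
conn.execute("COMMIT")  # plays the role of [db commit]: one sync for the whole batch

print(conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0])  # -> 1000
```

The loop body is unchanged; only the two transaction statements were added around it, which is exactly why the fix was two lines of code.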
The test results were excellent: total time dropped from 30 seconds to about 2.8 seconds, with only two lines of code added.
To summarize:

Lessons only come from falling into the pits and climbing over the ridges yourself.
Although the transaction trick improves performance, it is not entirely safe: if the process dies before the commit, the whole batch is lost. That is acceptable here only because this cached data does not require strict consistency.
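The "not entirely safe" part is the all-or-nothing behavior: if anything interrupts the batch before the commit, every write in it is discarded, not just the one that failed. A minimal Python sqlite3 sketch of that tradeoff (hypothetical `cache` table; the raised error stands in for a crash mid-batch):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO cache VALUES ('k0', 'old')")

simulate_crash = True  # stand-in for the process dying before commit
try:
    conn.execute("BEGIN")
    conn.execute("UPDATE cache SET value = 'new' WHERE key = 'k0'")
    if simulate_crash:
        raise RuntimeError("interrupted before commit")
    conn.execute("COMMIT")
except RuntimeError:
    conn.execute("ROLLBACK")  # the whole batch is rolled back

print(conn.execute("SELECT value FROM cache WHERE key = 'k0'").fetchone()[0])  # -> old
```

For sync data that can simply be re-pulled from the server on the next launch, losing one batch like this is tolerable; for data that must survive partial failures it would not be.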
The simulator and a real device sometimes fail to reproduce the same problems, because the architecture, CPU, and disk are all different, so performance tests are best run on a real device. Many problems simply don't show up in the simulator: its disk is much faster than the device's, which masks them during testing.
When designing a database schema, think further ahead: how it will be extended in the future, how it will be upgraded, and how it will perform under read and write load.