How to reclaim disk space after deleting a large amount of data from an SQLite database

Source: Internet
Author: User

SQLCE is simply too slow; its execution speed cannot compare with SQLite's.

So in the next version of the project, I replaced SQLCE with SQLite.

After all, SQLite was not adopted for nothing.

The results are indeed good; two things in particular are satisfactory.

 

First, execution speed goes without saying: it is clearly faster than SQLCE.

 

Second, this may not matter to others, but I happen to need it (my solution is a clumsy one I came up with myself; if you have a better approach, please advise).

What is the advantage? The compression ratio.

Has anyone tried packing an SQLite .db file with WinZip? The compression ratio is actually very high.

For example, take two databases with identical content: one SQLCE .sdf file and one SQLite .db file.

The space they occupy on disk is:

SQLCE: 3,207,168 bytes; SQLite: 3,117,056 bytes

 

After compressing with WinRAR, the result is:

SQLCE: 3,158,016 bytes; SQLite: 1,159,168 bytes

 

As you can see, the SQLCE file barely shrinks after compression, while the SQLite file compresses to roughly a third of its original size.

What does this mean? It means far less data goes over the network, which greatly reduces transmission time.

On China Mobile's tariff, that saves a lot in data charges.

On China Unicom's network, transmission time drops sharply, and the chance of a dropped connection falls with it.

 

In fact, this is mainly about my project. During database synchronization, looping through 10,000 insert and update operations on the phone takes intolerably long. So I decided to send the database file back to the server, apply the updates there, and then send the file back to the phone. That makes this compression issue very important to me.

Compression matters for another reason: I transmit data via SOAP. The method is clumsy, but it is easy to implement with XML. The problem is that serializing a file of over 3 MB into XML makes the transfer unreasonably heavy.

I even hit a StringBuilder error on a machine that has not shipped yet; SOAP uses StringBuilder internally for its XML processing, and it failed at that point. I suspect 3 MB of data is simply too large. (I plan to test that machine this afternoon.)

For these reasons, I need the most "lightweight" way possible to move data over the network, and that definitely means compression.
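As a sketch of the "compress before transmitting" idea, here is what it might look like in Python's built-in sqlite3 and gzip modules (the table name and contents are hypothetical; actual compression ratios depend on the data):

```python
import gzip
import os
import sqlite3
import tempfile

# Build a throwaway database with repetitive data, similar to typical app records.
path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO t (name) VALUES (?)",
                [("customer record %d" % i,) for i in range(10000)])
con.commit()
con.close()

with open(path, "rb") as f:
    raw = f.read()
packed = gzip.compress(raw)   # this is what would actually go over the wire

# SQLite pages contain a lot of compressible structure and padding,
# so the payload shrinks substantially.
print(len(packed) < len(raw))
```

Note that if the compressed bytes are then embedded in XML (as with SOAP), they would typically be base64-encoded, which adds back about a third in size; compression still wins overall.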

 

Having said all that, I still have not reached the topic. The topic is cleaning up after SQLite once data has been deleted.

You may notice that after you delete a large amount of data in SQLite, the database file stays exactly the same size.

In my project, after 3 MB of data is deleted, the .db file is still 3 MB, while the result I actually want is about 35 KB, roughly 10 KB after compression. Why does this happen? SQLCE's .sdf file, by contrast, does shrink as data is deleted.

 

The reason:

When you delete data from SQLite, the unused disk pages are added to an internal "free list" and reused for data you insert later. The disk space is not lost, but it is not returned to the operating system either. (This is spelled out in the FAQ on sqlite.org.)

 

Solutions:

1. After deleting the data, run the VACUUM command manually. Execution is simple:

 

objSqlHelper.ExecuteNonQuery(CommandType.Text, "VACUUM")

 

The VACUUM command empties the "free list" and compacts the database to its minimum size. It does, however, take some time.

The FAQ says that on Linux it runs at roughly 0.5 seconds per MB, and that it needs free working space of up to twice the size of the database file.

My complaint with the FAQ is that it only mentions the operating system, never the hardware. On my test phone, compacting took nearly 13 seconds. The extra space it uses shows up as a temporary file with a .db-journal suffix. (That part does not matter to me right now.)
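The "file does not shrink until VACUUM" behavior can be reproduced with Python's built-in sqlite3 module. This is only a sketch (table name and row counts are made up; exact file sizes vary by SQLite version and page size), but the before/after relationship holds:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t (payload) VALUES (?)",
                [("x" * 500,) for _ in range(5000)])
con.commit()

con.execute("DELETE FROM t")   # delete everything
con.commit()
# The file has NOT shrunk: the freed pages just sit on the free list.
size_after_delete = os.path.getsize(path)

con.execute("VACUUM")          # rebuild the file, dropping the free pages
con.close()
size_after_vacuum = os.path.getsize(path)

print(size_after_vacuum < size_after_delete)
```

Note that VACUUM must run outside a transaction, which is why it is issued only after the commit.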

 

2. Set auto_vacuum to 1 when creating the database file.

Note: the auto-vacuum flag can only be changed while the database contains no tables. If you try to change it on a database that already has tables, nothing happens and no error is returned.

cmd.CommandText = "PRAGMA auto_vacuum = 1;";
cmd.ExecuteNonQuery();

With auto-vacuum enabled, the database file shrinks automatically whenever a transaction that deletes data is committed.

The database stores some internal bookkeeping to support this, which makes the file slightly larger than it would be with the option off.

With my table structure and no data, the database file is about 25 KB; with auto_vacuum enabled it is 26 KB.

After the base data is inserted and the application has run, the file grows to 35 KB, or 36 KB with auto_vacuum enabled.

The difference is small enough not to matter.

 

The second method has a drawback, though: it only truncates free-list pages from the end of the database file; it does not defragment the database or repack its contents the way the VACUUM command does. In fact, auto-vacuum creates more fragmentation, because pages have to be moved around inside the file. A .db-journal file is also produced while the delete runs.

Auto-vacuum also has a prerequisite: the database must store extra information so that each page it tracks can be traced back to the pointer that references it. Therefore auto-vacuum must be enabled before any table is created; once a table exists, it can no longer be turned on or off.
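The "enable before creating tables, shrink on commit" behavior can be sketched with Python's built-in sqlite3 module (table name and sizes are illustrative only):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "auto.db")
con = sqlite3.connect(path)
con.execute("PRAGMA auto_vacuum = 1")   # must run before any table exists
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t (payload) VALUES (?)",
                [("x" * 500,) for _ in range(5000)])
con.commit()
size_full = os.path.getsize(path)

con.execute("DELETE FROM t")
con.commit()                            # the commit triggers the automatic shrink
con.close()
size_after_delete = os.path.getsize(path)

print(size_after_delete < size_full)
```

If the PRAGMA were issued after the CREATE TABLE instead, it would be silently ignored, matching the behavior described above.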

 

Comparing running times: after the same two large deletes that take the file from 3 MB down to 35 KB, the two methods finish in almost the same time. Running the VACUUM command takes a little longer, but only a little; relatively speaking, the difference can be ignored.

On top of that, the automatic method causes fragmentation, so it is a poor fit when data turns over frequently.

I decided on the first method: after the large delete, and before sending anything to the server, run the VACUUM command. That is the better trade-off.
