First, create an index:
In SQLite, the SQL syntax for creating an index is essentially the same as in most other relational databases, so a few usage examples will suffice:
sqlite> CREATE TABLE testtable (first_col integer, second_col integer);
-- Create the simplest index, based on a single column of the table.
sqlite> CREATE INDEX testtable_idx ON testtable (first_col);
-- Create a composite index based on multiple columns of the table; the sort order (ascending/descending) of each column can be specified.
sqlite> CREATE INDEX testtable_idx2 ON testtable (first_col ASC, second_col DESC);
-- Create a unique index. It follows the same rule as a uniqueness constraint on a table: NULL is considered distinct from every other value, including NULL itself.
sqlite> CREATE UNIQUE INDEX testtable_idx3 ON testtable (second_col DESC);
sqlite> .indices testtable
testtable_idx
testtable_idx2
testtable_idx3
As the output of the .indices command shows, all three indexes were created successfully.
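The same statements can also be issued programmatically. Below is a minimal sketch using Python's built-in sqlite3 module (the table and index names mirror the shell session above); the equivalent of the .indices shell command is a query against sqlite_master:

```python
import sqlite3

# In-memory database for demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testtable (first_col integer, second_col integer)")

# The simplest index: a single column
conn.execute("CREATE INDEX testtable_idx ON testtable (first_col)")
# A composite index with a per-column sort order
conn.execute("CREATE INDEX testtable_idx2 ON testtable (first_col ASC, second_col DESC)")
# A unique index
conn.execute("CREATE UNIQUE INDEX testtable_idx3 ON testtable (second_col DESC)")

# Equivalent of the .indices shell command: list the table's indexes
indexes = sorted(row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = 'testtable'"))
print(indexes)
```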
Second, delete an index:
Deleting an index is very similar to deleting a view, and the semantics are alike as well, so an example will suffice:
sqlite> DROP INDEX testtable_idx;
-- Deleting an index that does not exist causes the operation to fail. If you are not sure the index exists and do not want an error raised, use the "IF EXISTS" clause.
sqlite> DROP INDEX testtable_idx;
Error: no such index: testtable_idx
sqlite> DROP INDEX IF EXISTS testtable_idx;
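The same behavior, sketched with Python's sqlite3 module: the second plain DROP raises an error, while the IF EXISTS form is a harmless no-op.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testtable (first_col integer)")
conn.execute("CREATE INDEX testtable_idx ON testtable (first_col)")

conn.execute("DROP INDEX testtable_idx")  # succeeds: the index exists

# A second plain DROP fails because the index is already gone
try:
    conn.execute("DROP INDEX testtable_idx")
    error_message = None
except sqlite3.OperationalError as exc:
    error_message = str(exc)
print(error_message)

# With IF EXISTS the drop succeeds silently even though the index is gone
conn.execute("DROP INDEX IF EXISTS testtable_idx")
```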
Third, rebuild an index:
Rebuilding an index deletes the existing index and recreates it according to its original rules. Note that if no name is given after the REINDEX statement, all indexes in all databases attached to the current connection are rebuilt. If a database name and table name are specified, all indexes on that table are rebuilt; if only an index name is specified, the named index in the current database is rebuilt.
-- Rebuild the indexes in all databases attached to the current connection.
sqlite> REINDEX;
-- Rebuild all indexes of the testtable table in the current main database.
sqlite> REINDEX testtable;
-- Rebuild the index named testtable_idx2 in the current main database.
sqlite> REINDEX testtable_idx2;
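The three REINDEX forms can be sketched in Python's sqlite3 module as well; the index survives the rebuild unchanged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testtable (first_col integer, second_col integer)")
conn.execute("CREATE INDEX testtable_idx2 ON testtable (first_col ASC, second_col DESC)")

conn.execute("REINDEX")                 # rebuild every index in all attached databases
conn.execute("REINDEX testtable")       # rebuild all indexes on the testtable table
conn.execute("REINDEX testtable_idx2")  # rebuild only the named index

# The rebuilt index is still registered in sqlite_master
count = conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE name = 'testtable_idx2'").fetchone()[0]
print(count)
```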
Fourth, data analysis:
Similar to PostgreSQL, the ANALYZE command in SQLite analyzes the data in tables and indexes and stores the results in SQLite's internal system tables, so that the query optimizer can choose the optimal execution path based on the gathered statistics, thereby improving overall query efficiency. See the following examples:
-- If no argument is given after the ANALYZE command, all tables and indexes in all databases attached to the current connection are analyzed.
sqlite> ANALYZE;
-- If a database is given as the ANALYZE argument, all tables and indexes in that database are analyzed and statistics are generated.
sqlite> ANALYZE main;
-- If a table or index in a database is given to ANALYZE, that table and all of its associated indexes are analyzed.
sqlite> ANALYZE main.testtable;
sqlite> ANALYZE main.testtable_idx2;
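A short sketch in Python's sqlite3 module showing where the statistics end up: after ANALYZE runs, they can be read back from the internal sqlite_stat1 table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testtable (first_col integer, second_col integer)")
conn.execute("CREATE INDEX testtable_idx2 ON testtable (first_col, second_col)")
# Insert some sample rows so there is data to analyze
conn.executemany("INSERT INTO testtable VALUES (?, ?)",
                 [(i, i % 10) for i in range(100)])
conn.commit()

conn.execute("ANALYZE")  # no argument: analyze everything in all attached databases

# The gathered statistics are stored in the internal sqlite_stat1 table
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```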
Fifth, data cleanup:
The VACUUM command in SQLite is very similar in functionality and implementation to the vacuum command in PostgreSQL. The difference is that PostgreSQL offers finer granularity: it can act on a specified table or index within the database, whereas SQLite's VACUUM can act only on the database as a whole.
When one or more tables in a database undergo numerous inserts, updates, and deletes, a large amount of disk space is occupied by deleted data, and SQLite does not return it to the operating system until the VACUUM command is executed. Because the data in such tables is stored in a highly scattered way, queries cannot benefit from good bulk I/O reads, which hurts query efficiency.
In SQLite, VACUUM works only on the main database of the current connection; other attached databases cannot be cleaned. When performing the cleanup, the VACUUM command uses the same strategy as PostgreSQL: it creates a new database file, imports the data from the current database file into it while skipping the deleted blocks, and after the import completes shrinks the new file to the appropriate size. The command is simple to execute, for example:
sqlite> VACUUM;
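The space-reclaiming behavior described above can be observed with Python's sqlite3 module against a throwaway on-disk database (the file path below is arbitrary): after a mass DELETE the file keeps its size, and only VACUUM returns the space to the operating system.

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk database (an in-memory one would hide the file size)
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE testtable (payload text)")
conn.executemany("INSERT INTO testtable VALUES (?)", [("x" * 1000,)] * 1000)
conn.commit()
size_full = os.path.getsize(path)

conn.execute("DELETE FROM testtable")  # pages are freed internally,
conn.commit()                          # but the file keeps its size
size_after_delete = os.path.getsize(path)

conn.execute("VACUUM")                 # rebuild the file, return space to the OS
size_after_vacuum = os.path.getsize(path)
print(size_full, size_after_delete, size_after_vacuum)
```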