PostgreSQL index types and index bloating

Source: Internet
Author: User
Tags: create index, modifiers, postgresql

warehouse_db=# CREATE TABLE item (item_id integer NOT NULL, item_name text, item_price numeric, item_data text);
CREATE TABLE
warehouse_db=# CREATE INDEX item_idx ON item (item_id);
CREATE INDEX

warehouse_db=# \di item_idx
List of relations
 Schema |   Name   | Type  |  Owner   | Table
--------+----------+-------+----------+-------
 public | item_idx | index | postgres | item
(1 row)
warehouse_db=# \h CREATE INDEX
Command:     CREATE INDEX
Description: define a new index
Syntax:
CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ name ] ON table_name [ USING method ]
    ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] )
    [ WITH ( storage_parameter = value [, ...] ) ]
    [ TABLESPACE tablespace_name ]
    [ WHERE predicate ]
warehouse_db=# \di
List of relations
 Schema |              Name              | Type  |  Owner   |     Table
--------+--------------------------------+-------+----------+---------------
 public | prim_key                       | index | postgres | warehouse_tb1
 public | prm_key                        | index | postgres | history
 public | cards_card_id_owner_number_key | index | postgres | cards
 public | item_idx                       | index | postgres | item
 public | item_item_id_idx               | index | postgres | item
 public | movies_title_copies_excl       | index | postgres | movies
 public | tools_pkey                     | index | postgres | tools
(7 rows)

warehouse_db=# \di item_item_id_idx
List of relations
 Schema |       Name       | Type  |  Owner   | Table
--------+------------------+-------+----------+-------
 public | item_item_id_idx | index | postgres | item
(1 row)

warehouse_db=# DROP INDEX item_item_id_idx;
DROP INDEX
http://www.postgresql.org/docs/9.4/static/sql-createindex.html
Types of indexes

The single-column index
The simplest form is an index on a single column:
CREATE INDEX index_name ON table_name (column);
warehouse_db=# CREATE INDEX item_single_index ON item (item_id);
CREATE INDEX

The multicolumn index
A multicolumn index is defined on more than one column:
warehouse_db=# CREATE INDEX item_multi_index ON item (item_id, item_price);
CREATE INDEX
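For a multicolumn index, column order matters: the index is most useful when the query constrains the leading column. A sketch (actual plans depend on your data and statistics):

```sql
-- item_multi_index is on (item_id, item_price): a filter on the leading
-- column item_id, alone or together with item_price, can use the index.
SELECT * FROM item WHERE item_id = 100 AND item_price > 50;
-- A filter on item_price alone generally cannot use it efficiently.
SELECT * FROM item WHERE item_price > 50;
```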
The partial index
A partial index is an index built on a subset of the table, defined by a conditional expression:

CREATE INDEX index_name ON table_name (column) WHERE (condition);

warehouse_db=# CREATE INDEX item_partial_index ON item (item_id)
WHERE (item_id < 106);
CREATE INDEX
warehouse_db=# \d item
Table "public.item"
   Column   |  Type   | Modifiers
------------+---------+-----------
 item_id    | integer | NOT NULL
 item_name  | text    |
 item_price | numeric |
 item_data  | text    |
Indexes:
    "item_single_index" btree (item_id)
    "item_multi_index" btree (item_id, item_price)
    "item_partial_index" btree (item_id) WHERE item_id < 106

The unique index
A unique index can be created on any column; it not only creates an index, but also
enforces uniqueness of the column.

warehouse_db=# CREATE UNIQUE INDEX item_unique_idx ON item (item_id);
CREATE INDEX
Time: 485.644 ms
warehouse_db=# \di item_unique_idx
List of relations
 Schema |      Name       | Type  |  Owner   | Table
--------+-----------------+-------+----------+-------
 public | item_unique_idx | index | postgres | item
(1 row)

We can create a unique index explicitly using the CREATE UNIQUE INDEX command, and
it can also be created implicitly by declaring a primary key on a table:
warehouse_db=# CREATE TABLE item
warehouse_db-# (
warehouse_db(# item_unique integer PRIMARY KEY,
warehouse_db(# item_name text,
warehouse_db(# item_price numeric,
warehouse_db(# item_data text);
CREATE TABLE
warehouse_db=# \d item
Table "public.item"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 item_unique | integer | NOT NULL
 item_name   | text    |
 item_price  | numeric |
 item_data   | text    |
Indexes:
    "item_pkey" PRIMARY KEY, btree (item_unique), tablespace "tbs_yl"
Tablespace: "tbs_yl"

The following is an example of the implicit creation of a unique index by defining a
UNIQUE constraint:
warehouse_db=# ALTER TABLE item ADD CONSTRAINT primary_key UNIQUE (item_unique);
ALTER TABLE
warehouse_db=# \d item
Table "public.item"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 item_unique | integer | NOT NULL
 item_name   | text    |
 item_price  | numeric |
 item_data   | text    |
Indexes:
    "item_pkey" PRIMARY KEY, btree (item_unique), tablespace "tbs_yl"
    "primary_key" UNIQUE CONSTRAINT, btree (item_unique), tablespace "tbs_yl"
Tablespace: "tbs_yl"
The ALTER TABLE command adds a unique constraint to the item_unique column, which can
then be used as the primary key.

We can also create a unique index explicitly using the already discussed CREATE INDEX
command as follows:
warehouse_db=# CREATE TABLE item (
warehouse_db(# item_id integer PRIMARY KEY,
warehouse_db(# item_name text,
warehouse_db(# item_price numeric,
warehouse_db(# item_data text);
CREATE TABLE
warehouse_db=#
warehouse_db=# \d item
Table "public.item"
   Column   |  Type   | Modifiers
------------+---------+-----------
 item_id    | integer | NOT NULL
 item_name  | text    |
 item_price | numeric |
 item_data  | text    |
Indexes:
    "item_pkey" PRIMARY KEY, btree (item_id), tablespace "tbs_yl"
Tablespace: "tbs_yl"
warehouse_db=# CREATE UNIQUE INDEX idx_unique_id ON item (item_id);
CREATE INDEX
warehouse_db=# \d item
Table "public.item"
   Column   |  Type   | Modifiers
------------+---------+-----------
 item_id    | integer | NOT NULL
 item_name  | text    |
 item_price | numeric |
 item_data  | text    |
Indexes:
    "item_pkey" PRIMARY KEY, btree (item_id), tablespace "tbs_yl"
    "idx_unique_id" UNIQUE, btree (item_id), tablespace "tbs_yl"
Tablespace: "tbs_yl"

warehouse_db=# INSERT INTO item VALUES (1, 'boxing', 200, 'gloves');
INSERT 0 1
warehouse_db=# INSERT INTO item VALUES (1, 'hockey', 300, 'shoes');
ERROR:  duplicate key value violates unique constraint "item_pkey"
DETAIL:  Key (item_id)=(1) already exists.
warehouse_db=# INSERT INTO item VALUES (2, 'hockey', 300, 'shoes');
INSERT 0 1

The expression index
For example, if we want to search for an item name case-insensitively, the normal way
of doing this is as follows:
warehouse_db=# SELECT * FROM item WHERE UPPER(item_name) = 'COFFEE';
The preceding query scans each row of the table, converts item_name to uppercase, and
compares it with COFFEE; this is really expensive. The following is the command to create
an expression index on the item_name column:
warehouse_db=# CREATE INDEX item_expression_index ON item (UPPER(item_name));
CREATE INDEX
warehouse_db=# \d item
Table "public.item"
   Column   |  Type   | Modifiers
------------+---------+-----------
 item_id    | integer | NOT NULL
 item_name  | text    |
 item_price | numeric |
 item_data  | text    |
Indexes:
    "item_pkey" PRIMARY KEY, btree (item_id), tablespace "tbs_yl"
    "idx_unique_id" UNIQUE, btree (item_id), tablespace "tbs_yl"
    "item_expression_index" btree (upper(item_name)), tablespace "tbs_yl"
Tablespace: "tbs_yl"
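The planner only considers an expression index when the query uses the exact indexed expression; a sketch:

```sql
-- Matches upper(item_name), so item_expression_index can be used.
EXPLAIN SELECT * FROM item WHERE UPPER(item_name) = 'COFFEE';
-- Uses the raw column, so the expression index does not apply.
EXPLAIN SELECT * FROM item WHERE item_name = 'coffee';
```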

The implicit index
An index that is created automatically by the database is called an implicit index. A
PRIMARY KEY or UNIQUE constraint implicitly creates an index on that column.

The concurrent index
Index creation on a table is a very expensive operation, and on a sizably huge table it can
take hours to build an index. This can make it difficult to perform any write
operations in the meantime. To solve this issue, PostgreSQL has the concurrent index,
which is useful when you need to add indexes to a live database.

The syntax of a concurrent index is as follows:
CREATE INDEX CONCURRENTLY index_name ON table_name USING btree (column);
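One caveat worth noting: if CREATE INDEX CONCURRENTLY fails (for example, due to a deadlock or a uniqueness violation), it leaves behind an INVALID index that must be dropped manually before retrying. The index name below is illustrative:

```sql
CREATE INDEX CONCURRENTLY item_live_idx ON item USING btree (item_id);
-- If the build fails, clean up the leftover invalid index before retrying:
-- DROP INDEX item_live_idx;
```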

The concurrent index is slower than a normal index because it completes the index build
in parts. This can be seen in the following example.

Time taken to create a normal index idx_id:
warehouse_db=# CREATE INDEX idx_id ON item (item_id);
Time: 8265.473 ms
Time taken to create a concurrent index idx_id using CREATE INDEX CONCURRENTLY:
warehouse_db=# CREATE INDEX CONCURRENTLY idx_id ON item (item_id);
Time: 51887.942 ms

Index types
PostgreSQL supports the B-tree, hash, GiST, and GIN index methods. The index method
or type can be selected via the USING clause. Different types of indexes have different
purposes; for example, the B-tree index is effectively used when a query involves
range and equality operators, and the hash index is effectively used when only the
equality operator is used in a query.
Here is a simple example of how to select an index type:
warehouse_db=# CREATE INDEX index_name ON table_name USING btree (column);

The B-tree index
The B-tree index is effectively used when a query involves the equality operator (=) and
range operators (<, <=, >, >=, BETWEEN, and IN).
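A sketch of queries that a B-tree index handles well (the index name here is illustrative):

```sql
CREATE INDEX item_price_btree ON item USING btree (item_price);
SELECT * FROM item WHERE item_price = 15;               -- equality
SELECT * FROM item WHERE item_price BETWEEN 10 AND 20;  -- range
SELECT * FROM item WHERE item_price >= 100;             -- range
```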

The hash index
Hash indexes are utilized when a query involves only simple equality operators. Here,
we create a hash index on the item table. You can see in the following example that the
planner chooses the hash index in the case of an equality operator and does not utilize
the hash index for a range operator.
The hash index is best for queries that have equality operators in the
WHERE clause. This can be explained with the help of the following example:
warehouse_db=# CREATE INDEX item_hash_index ON item USING hash (item_id);
CREATE INDEX
warehouse_db=# EXPLAIN SELECT COUNT(*) FROM item WHERE item_id = 100;
QUERY PLAN
------------------------------------------------------------------
 Aggregate (cost=8.02..8.03 rows=1 width=0)
   -> Index Scan using item_hash_index on item (cost=0.00..8.02 rows=1 width=0)
        Index Cond: (item_id = 100)
(3 rows)
The hash index method is not suitable for range operators, so the planner will not select
a hash index for range queries:
warehouse_db=# EXPLAIN SELECT COUNT(*) FROM item WHERE item_id > 100;
QUERY PLAN
------------------------------------------------------------------
 Aggregate (cost=25258.75..25258.76 rows=1 width=0)
   -> Seq Scan on item (cost=0.00..22759.00 rows=999900 width=0)
        Filter: (item_id > 100)
(3 rows)

To get the size of a table and an index, we can use the following:
SELECT pg_relation_size('table_name') AS table_size,
       pg_relation_size('index_name') AS index_size
FROM pg_tables WHERE tablename LIKE 'table_name';
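For human-readable sizes, pg_size_pretty() is convenient, and pg_total_relation_size() includes the table's indexes; for example, with the item table and idx_unique_id index from earlier:

```sql
SELECT pg_size_pretty(pg_relation_size('item'))          AS table_size,
       pg_size_pretty(pg_relation_size('idx_unique_id')) AS index_size,
       pg_size_pretty(pg_total_relation_size('item'))    AS table_plus_indexes;
```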
The GiST index
The Generalized Search Tree (GiST) index provides the possibility to create custom
data types with indexed access methods. It additionally supports an extensive set of
queries.
It can be utilized for operations beyond equality and range comparisons. The GiST
index is lossy, which means that it can produce false matches, so matching rows are
rechecked against the actual condition.
The syntax of the GiST index is as follows:
warehouse_db=# CREATE INDEX index_name ON table_name USING gist (column_name);
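As a concrete sketch, GiST supports PostgreSQL's built-in geometric types; the table and column names here are illustrative:

```sql
CREATE TABLE shop (shop_id integer, location point);
CREATE INDEX shop_location_gist ON shop USING gist (location);
-- GiST supports operators beyond = and ranges, e.g. "contained in box":
SELECT * FROM shop WHERE location <@ box '((0,0),(100,100))';
```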

The GIN index
GIN stands for Generalized Inverted Index. GIN is designed for handling cases
where the items to be indexed are composite values, and the queries to be handled by
the index need to search for element values that appear within the composite items.
For example, the items could be documents, and the queries could be searches for
documents containing specific words.

Here is the syntax for the creation of a GIN index:
warehouse_db=# CREATE INDEX index_name ON table_name USING gin (column_name);
The GIN index requires about three times more space than GiST, but is about three times
faster to search than GiST.
warehouse_db=# CREATE EXTENSION pg_trgm;
CREATE EXTENSION
Time: 117.645 ms
warehouse_db=# CREATE TABLE words (lineno int, simple_words text, special_words text);
CREATE TABLE
Time: 32.913 ms
warehouse_db=# INSERT INTO words VALUES (generate_series(1,2000000), md5(random()::text), md5(random()::text));
INSERT 0 2000000
Time: 18268.619 ms
warehouse_db=# SELECT COUNT(*) FROM words WHERE simple_words LIKE '%a31%' AND special_words LIKE '%a31%';
 count
-------
   115
(1 row)

Time: 669.342 ms
warehouse_db=# CREATE INDEX words_idx ON words (simple_words, special_words);
CREATE INDEX
Time: 22136.229 ms
warehouse_db=# SELECT COUNT(*) FROM words WHERE simple_words LIKE '%a31%' AND special_words LIKE '%a31%';
 count
-------
   115
(1 row)

Time: 658.988 ms
warehouse_db=# CREATE INDEX words_idx ON words USING gin (simple_words gin_trgm_ops, special_words gin_trgm_ops);
ERROR:  relation "words_idx" already exists
Time: 0.952 ms
warehouse_db=# DROP INDEX words_idx;
DROP INDEX
Time: 75.698 ms
warehouse_db=# CREATE INDEX words_idx ON words USING gin (simple_words gin_trgm_ops, special_words gin_trgm_ops);
CREATE INDEX
Time: 271499.350 ms
warehouse_db=# SELECT COUNT(*) FROM words WHERE simple_words LIKE '%a31%' AND special_words LIKE '%a31%';
 count
-------
   115
(1 row)

Time: 10.260 ms

http://www.sai.msu.su/~megera/wiki/Gin
http://www.postgresql.org/docs/9.4/static/pgtrgm.html

Index bloating

As the architecture of PostgreSQL is based on MVCC, tables can accumulate dead
rows. Rows that are not visible to any transaction are considered dead rows. In a
continuously used table, rows are regularly deleted or updated, and these operations
leave dead space in a table. Dead space can potentially be reused when new data is
inserted, but when many dead rows accumulate, bloating occurs. There are various causes
of index bloating, and it needs to be fixed to achieve better performance, because it
hurts the performance of the database.
Autovacuum is the best prevention against bloating, but it is configurable and can be
disabled or erroneously configured. There are multiple ways to fix index bloating.
To know more about MVCC, check out
http://www.postgresql.org/docs/current/static/mvcc-intro.html
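One quick way to spot bloat candidates is the dead-row estimate that PostgreSQL keeps in pg_stat_user_tables:

```sql
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```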

Dump and restore
In the case of bloating, the simplest solution is to back up the table using
pg_dump, drop the table, and reload the data into the original table. This is an
expensive operation and sometimes seems too restrictive.

VACUUM
Vacuuming the table using the VACUUM command is another solution that can be used to fix
the bloat. The VACUUM command reclaims the space occupied by dead rows so that it can be
reused, but the database file shrinks only when there are completely empty pages
at the end of the file. That is the only case in which plain VACUUM reduces the file
size. Its syntax is as follows:
VACUUM table_name
The following example shows the usage of VACUUM on the item table:
warehouse_db=# VACUUM item;
VACUUM
The other way of using VACUUM is as follows:
warehouse_db=# VACUUM FULL item;
VACUUM


CLUSTER
As we discussed previously, rewriting and reordering rows can fix the bloat; this can be
indirectly achieved using dump/restore, but that is an expensive operation. Another way
to do this is the CLUSTER command, which physically reorders rows based on an index.
The CLUSTER command creates a whole new copy of the table, and the old copy of the data
is dropped. The CLUSTER command requires enough free space, virtually twice the disk
space, to hold the newly organized copy of the data. Its syntax is as follows:
CLUSTER table_name USING index_name
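For example, to cluster the item table on its primary-key index (note that CLUSTER takes an exclusive lock on the table while it runs):

```sql
CLUSTER item USING item_pkey;
-- Subsequent runs can reuse the index recorded by the first CLUSTER:
CLUSTER item;
```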

Reindexing
If an index becomes inefficient due to bloating, or the data becomes randomly scattered,
reindexing is required to get the maximum performance from the index. Its syntax is as
follows:
warehouse_db=# REINDEX TABLE item;
REINDEX
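REINDEX can also target a single index or a whole database; a sketch:

```sql
REINDEX INDEX idx_unique_id;     -- rebuild one index
REINDEX TABLE item;              -- rebuild all indexes on a table
REINDEX DATABASE warehouse_db;   -- rebuild every index in the database
```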

Points to ponder
When using an index, you need to keep the following things in mind:

  • It makes sense to index a table column when you have a substantial number of
    rows in the table.

  • Good candidates for an index are foreign keys, and keys on which MIN() and MAX()
    are used when retrieving data. This means column selectivity is very important
    for indexing effectively.

  • Don't forget to remove unused indexes for better performance. Also, perform
    REINDEX on all indexes periodically to clean up dead tuples.

  • Use table partitioning along with an index if you have large amounts of data.

  • When you are indexing columns with NULL values, consider using a partial index
    with WHERE column_name IS NOT NULL.
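The last point can be sketched as follows (the column choice is illustrative): indexing only the non-NULL values keeps the index small, and queries that exclude NULLs can still use it:

```sql
CREATE INDEX item_data_idx ON item (item_data)
WHERE item_data IS NOT NULL;
-- Equality implies NOT NULL, so the partial index applies:
SELECT * FROM item WHERE item_data = 'gloves';
```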
