PostgreSQL 9.5.1 Official Documentation Study Notes


This document applies to PostgreSQL 9.5.1, the latest version at the time of writing. I am new to PostgreSQL, so there may be mistakes in this article; please correct me. As my understanding deepens, this article may be occasionally updated and supplemented.

JSON

PostgreSQL supports JSON data with two types: json and jsonb. They differ in efficiency: json stores an exact copy of the input text, so it is faster when writing data, while jsonb stores parsed binary data, so it is faster during retrieval (note that retrieval here does not mean simply reading the whole value, but extracting the value of a particular key inside the JSON document). In general, jsonb is the type to use. JSON data compensates for the limited flexibility and extensibility of the relational model. However, as the documentation points out, you should not put everything into it; you still need to consider data atomicity and data size.
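
For reference, here is a minimal sketch of the kind of table the later examples assume: an api table with a jsonb column jdoc (the column names and the sample document are illustrative).

CREATE TABLE api (
    id   serial PRIMARY KEY,
    jdoc jsonb
);
INSERT INTO api (jdoc)
VALUES ('{"guid": "9c36adc1", "name": "Angela", "tags": ["qui", "sunt"]}');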

The jsonb type supports containment and existence tests, written with the operators @> and ? (plus several variants). For these two kinds of tests involving many keys or elements, jsonb is more suitable than the arrays discussed below, because it has internal query optimizations, whereas array search is just a linear scan.
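
As a quick illustration of the two operators (both expressions below evaluate to true):

SELECT '{"a": 1, "b": 2}'::jsonb @> '{"a": 1}'::jsonb;  -- containment: does the left value contain the right one?
SELECT '{"a": 1, "b": 2}'::jsonb ? 'b';                 -- existence: does the top level contain the key 'b'?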

If a jsonb column needs to be searched frequently, you can create a GIN index on it. jsonb supports two GIN operator classes, jsonb_ops (the default) and jsonb_path_ops. The syntax is as follows:

CREATE INDEX idxgin ON api USING gin (jdoc);
CREATE INDEX idxginp ON api USING gin (jdoc jsonb_path_ops);  -- same as the previous line, but with the jsonb_path_ops operator class added

jsonb_path_ops supports indexing the @> operator only. For the difference between the two, see "The new features of jsonb parsing in PgSQL 9.4" and "jsonb in PostgreSQL 9.4".

You can also create a GIN index on a single property inside the JSON data (an expression index on that attribute), for example: CREATE INDEX idxgintags ON api USING gin ((jdoc -> 'tags')); this improves the efficiency of queries against that key, for example in the following scenario:

SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc -> 'tags' ? 'qui';

Of course, we can also choose not to use an attribute index and instead write the query another way:

SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qui"]}';

jsonb also supports btree and hash indexes. These are usually useful only if it is important to check the equality of complete JSON documents.
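
A small sketch of that case (the index name is illustrative); a btree index like this only helps queries that compare whole documents:

CREATE INDEX idxjdoc_btree ON api USING btree (jdoc);
SELECT count(*) FROM api WHERE jdoc = '{"tags": ["qui"]}'::jsonb;  -- equality on the complete document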

Array

PostgreSQL supports the Array type. The field declaration method is as follows:

CREATE TABLE emptable (
    arraycol1 integer[],
    arraycol2 text[][],
    arraycol3 text[3],
    arraycol4 integer ARRAY,
    arraycol5 integer ARRAY[4]
);

When declaring a column we can specify the element type, the number of dimensions, and the array length (the latter two at the same time if we wish). However, the current version of PostgreSQL ignores both the declared dimensions and the declared length, so they serve only as documentation.

The insert format is as follows:

INSERT INTO emptable (arraycol1, arraycol2, arraycol4, arraycol3) VALUES (
    '{10000, 10000, 10000, 10000}',
    '{{"meeting", "lunch"}, {"training", "presentation"}}',
    ARRAY[10000, 10000, 10000, 10000],
    ARRAY[['meeting', 'lunch'], ['training', 'presentation']]);

Note how the strings are written: in the string-literal form (line 3 above) the element strings are wrapped in double quotation marks inside the outer single quotes, whereas the ARRAY constructor form (line 5) uses ordinary single-quoted strings. In a multi-dimensional array, the sub-arrays of each dimension must all have the same length; otherwise an error is raised, for example:

INSERT INTO emptable VALUES (
    '{10000, 10000, 10000, 10000}',
    '{{"meeting", "lunch"}, {"training"}}'  -- error: the inner arrays have different lengths
);

Element access uses arraycol[n]. PostgreSQL array subscripts are 1-based by default, so the first element is accessed as arraycol[1] rather than the usual arraycol[0]; we can change the bounds by assigning with explicit subscripts, e.g. SET arraycol[-2:1] = '{...}', which makes the array -2-based. For a multi-dimensional array, taking two dimensions as the example, access is arraycol[n][m]. If a subscript exceeds the array bounds, NULL is returned and no exception is thrown.

To access a run of adjacent elements you use slice syntax, for example arraycol[1:3][2:5], which returns, still as an array, elements 2 through 5 of each of the first three sub-arrays; the first pair of brackets is the 1st dimension, the second pair the 2nd dimension, and so on. Note that arraycol[1:3][2] does not mean "element 2 of items 1 through 3": PostgreSQL considers that as soon as one dimension is written as a slice, every dimension you access is a slice, and a dimension written with a single number has 1 prepended, so arraycol[1:3][2] = arraycol[1:3][1:2]. What if a slice subscript exceeds the array bounds? There are two cases: if the slice lies entirely outside the bounds, an empty array is returned (for historical reasons); if only the ending subscript is out of bounds, the data from the starting subscript to the end of the array is returned.
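
A few illustrative queries against the emptable table defined above (the results naturally depend on the data inserted):

SELECT arraycol1[1] FROM emptable;        -- first element (subscripts are 1-based)
SELECT arraycol2[1][2] FROM emptable;     -- element access in a two-dimensional array
SELECT arraycol1[2:3] FROM emptable;      -- slice: elements 2 through 3, returned as an array
SELECT arraycol2[1:2][2] FROM emptable;   -- treated as arraycol2[1:2][1:2]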

Some useful functions: array_dims returns all the dimensions of the array in text form; array_length returns the length of the specified dimension; array_upper returns the upper bound of the specified dimension; array_lower returns the lower bound of the specified dimension; cardinality returns the total number of elements across all dimensions (I wonder whether it can be applied to sub-arrays or individual dimensions).
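
A quick sketch of these functions applied to the two-dimensional column from the earlier example:

SELECT array_dims(arraycol2),        -- e.g. [1:2][1:2]
       array_length(arraycol2, 1),   -- length of dimension 1
       array_upper(arraycol2, 2),    -- upper bound of dimension 2
       array_lower(arraycol2, 1),    -- lower bound of dimension 1
       cardinality(arraycol2)        -- total number of elements
FROM emptable;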

For a one-dimensional array, assigning arraycol[m] = value when m is greater than the current upper bound automatically extends the array up to m; the positions between the old upper bound and the new one are filled with NULL. Currently, only one-dimensional arrays support this automatic extension.
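
For example (a sketch, assuming arraycol1 currently holds four elements, as in the INSERT above):

UPDATE emptable SET arraycol1[6] = 20000;
-- arraycol1 now has six elements; position 5 was filled with NULL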

array_prepend, array_append, and array_cat insert an element at the head or tail of an array, or concatenate two arrays; the first two only work on one-dimensional arrays. In general, the concatenation operator || covers the functionality of all three.
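
For example:

SELECT array_prepend(0, ARRAY[1, 2]);        -- {0,1,2}
SELECT array_append(ARRAY[1, 2], 3);         -- {1,2,3}
SELECT array_cat(ARRAY[1, 2], ARRAY[3, 4]);  -- {1,2,3,4}
SELECT 0 || ARRAY[1, 2] || ARRAY[3, 4];      -- {0,1,2,3,4}: || covers all three cases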

Array search: ANY, ALL, generate_subscripts, array_position, array_positions, and @> (does the left operand contain the right operand). Regarding searching, the official documentation warns that arrays are not sets; needing to search for specific elements inside an array is usually a sign of database misdesign. The array column could be split into a separate table (one row per array element). Tables are obviously much easier to search, and they scale better when the number of elements is very large. This seems to say that arrays are intended for direct display; if business queries need to match specific values inside the array, you should consider redesigning or using another type.
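
A few sketches of these search forms:

SELECT * FROM emptable WHERE 10000 = ANY (arraycol1);   -- rows where some element equals 10000
SELECT * FROM emptable WHERE 10000 = ALL (arraycol1);   -- rows where every element equals 10000
SELECT array_position(ARRAY['a', 'b', 'c'], 'b');       -- 2
SELECT ARRAY[1, 2, 3] @> ARRAY[2, 3];                   -- true: the left array contains the right one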

A side note: when using MySQL we are usually told that choosing char or varchar with a suitable maximum length is better for performance, but that is not necessarily true in PostgreSQL. In PostgreSQL there is no significant performance difference among the three string types; if anything, character(n) is generally the slowest, because its fixed length leads to extra storage space. So in general, text or character varying is enough.

Functions

PostgreSQL does not have a separate concept of stored procedures (the blogger does not understand why other databases distinguish stored procedures from functions). An SQL function returns the first row of the result of its last statement; to return a result set, you must explicitly declare that it returns a set (SETOF) of some type, or a table. Unless the function is declared to return void, the last statement must be a SELECT, or an INSERT, UPDATE, or DELETE that has a RETURNING clause. You cannot use transaction control commands (e.g. COMMIT, SAVEPOINT) or some utility commands (e.g. VACUUM) in SQL functions. The function body is enclosed in dollar quotes ($$) or in single quotation marks; if you use single quotation marks, you must take care to escape special characters. Inside the body, a parameter can be referenced by its name (supported in versions 9.2 and later) or as $n. For example:

CREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$
    UPDATE bank SET balance = balance - debit
        WHERE accountno = tf1.accountno;  -- the parameter and the column share a name, so qualify it with the function name
    SELECT balance FROM bank WHERE accountno = tf1.accountno;
$$ LANGUAGE SQL;

Appending RETURNING colname to the end of an INSERT or UPDATE makes PostgreSQL return the specified column after inserting or updating the data.
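
So the function above could equivalently end with an UPDATE ... RETURNING instead of a separate SELECT; a minimal sketch assuming the same bank table (the name tf1_v2 is just for illustration):

CREATE FUNCTION tf1_v2 (accountno integer, debit numeric) RETURNS numeric AS $$
    UPDATE bank SET balance = balance - debit
        WHERE accountno = tf1_v2.accountno
    RETURNING balance;
$$ LANGUAGE SQL;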

A function can receive and return multiple fields at once; the fields are treated as a whole, called a composite type, for example a row of a data table, a row built with the ROW constructor, or several comma-separated fields. We can also explicitly define our own composite types, for example:

CREATE TYPE inventory_item AS (
    name        text,
    supplier_id integer,
    price       numeric
);

Then you can use inventory_item in many places, or even declare a table column of type inventory_item, as shown below:

CREATE TABLE on_hand (
    item  inventory_item,
    count integer
);
INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);

When you create a table, a composite type with the same name is automatically created to represent the table's row type. Note that constraints defined on the table (such as NOT NULL) do not apply to this automatically created composite type.

The ROW constructor used above is the usual way to write a composite value; when there is more than one field, the keyword ROW can be omitted: ('fuzzy dice', 42, 1.99). A literal text form is also accepted: '("fuzzy dice", 42, 1.99)', with the whole value enclosed in single quotes. We can operate on the composite value as a whole, or on some of its fields; see the official documentation for details.
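
For example, against the on_hand table above (the parentheses around the column name are required when reading a field of a composite column, but not when assigning to one):

SELECT (item).name, (item).price FROM on_hand;
UPDATE on_hand SET item.price = 2.99 WHERE (item).name = 'fuzzy dice';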

Returning to functions: for example, the following two pieces of code mean the same thing.

-- 1
CREATE FUNCTION new_emp() RETURNS emp AS $$
    SELECT text 'none' AS name,   -- note the explicit cast to text
           1000.0 AS salary,
           25 AS age,
           point '(2,2)' AS cubicle;
$$ LANGUAGE SQL;
-- the column order and types must match the declared return type (here, emp)

-- 2
CREATE FUNCTION new_emp() RETURNS emp AS $$
    SELECT ROW('none', 1000.0, 25, '(2,2)')::emp;
$$ LANGUAGE SQL;

The double colon (::) indicates a type cast.

As mentioned above, a function can also return a set or a table. RETURNS TABLE comes from a more recently published SQL standard, so it may be preferable to returning a set. Note, however, that it is not allowed to use explicit OUT or INOUT parameters together with the RETURNS TABLE notation; you must put all the output columns in the TABLE list.
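
A minimal sketch of the RETURNS TABLE form, assuming an emp table whose columns match the new_emp example above (the function name and threshold are illustrative):

CREATE FUNCTION high_earners(min_salary numeric)
RETURNS TABLE (name text, salary numeric) AS $$
    SELECT emp.name, emp.salary FROM emp WHERE emp.salary > min_salary;
$$ LANGUAGE SQL;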

PostgreSQL installation and configuration

I have to say that since getting started with Linux I have been exposed to much more technology, and I am happier to try things outside the .NET "standard stack". To install PostgreSQL on CentOS 7.0, go to the PostgreSQL RPM Building Project - Repository Packages page, find the corresponding RPM package, and install it with yum:

yum install https://download.postgresql.org/pub/repos/yum/9.5/RedHat/rhel-7-x86_64/pgdg-centos95-9.5-2.noarch.rpm

The previous step only installs the repository RPM package; next, install postgresql95-server (there are other packages as well, but for now postgresql-server is enough):

yum install postgresql95-server

If you are installing version 9.4 instead, simply change the 95 above to 94.

Due to policies of Red Hat family distributions, the PostgreSQL installation is neither enabled for automatic start nor initialized automatically. To make your database installation complete, you need to perform these two steps:

/usr/pgsql-9.5/bin/postgresql95-setup initdb   # initialize the database cluster
systemctl start postgresql-9.5.service         # start the service

Some configuration is required before other hosts can connect to the server. PostgreSQL provides several ways to set system parameters, with slightly different use cases; here we choose to edit the postgresql.conf file. (The postgresql.auto.conf file holds parameters set with the ALTER SYSTEM command and should not be edited directly.) postgresql.conf is stored in PostgreSQL's data directory, which you can specify when running initdb, as follows:

initdb -D /usr/local/pgsql/data
# or equivalently
pg_ctl -D /usr/local/pgsql/data initdb

This may be more intuitive if you use pg_ctl for starting and stopping the server, so that pg_ctl would be the sole command you use for managing the database server instance.
If no data directory is specified, a default one is used; using the find command, the blogger located it at /var/lib/pgsql/9.5/data/. Because a PostgreSQL instance is bound to its data directory, you can run multiple instances on a single machine, each with its own data directory and, naturally, its own configuration. When starting, stopping, or restarting with pg_ctl, you must supply the data directory or set the PGDATA environment variable; otherwise pg_ctl does not know which instance to operate on.

find / -name postgresql.conf
# output: /var/lib/pgsql/9.5/data/postgresql.conf

Once found, you can edit the settings, and practice vi along the way.

vi postgresql.conf     # open the file; vi starts in normal mode
/listen_addresses      # search to locate the listen_addresses line
0                      # (or the Home key) move the cursor to the beginning of the line
x                      # delete the leading # to uncomment the line
a                      # a, A, i, or I enters insert mode; set listen_addresses to 'localhost, <IP of the development machine>'
<Esc>                  # return to normal mode
4<Enter>               # move down four lines to #port = 5432 and likewise delete the leading #
:wq                    # save and exit vi

On CentOS 7, open the port in the firewall with firewalld:

firewall-cmd  --permanent --zone=public --add-port=5432/tcp

In addition, you must modify the pg_hba.conf file to allow the development machine to connect (its role is somewhat different from listen_addresses in postgresql.conf); I will not elaborate here. Note that the md5 method means the client must connect with a user name and an (encrypted) password. Then restart PostgreSQL.
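
For example, an illustrative pg_hba.conf entry allowing md5 password authentication from a development subnet (the address below is just a placeholder):

# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   192.168.1.0/24    md5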

pg_ctl -D /var/lib/pgsql/9.5/data restart

Finally, change the password of the default postgres user.

sudo -u postgres psql
postgres=# ALTER USER postgres WITH PASSWORD 'postgres';

EF Code First with PostgreSQL (postponed)

Postscript: why PostgreSQL? The relational databases the blogger has worked with are SQL Server and MySQL, and by now I have basically said goodbye to SQL Server. MySQL is undoubtedly the most popular, but the reasons for that popularity do not necessarily make it the best or the most suitable choice. Recursive queries in MySQL are basically a pitfall (its recursion support does not follow the SQL standard), and it does not seem to keep pace with the times: its support for NoSQL-style data is weak, and even if all you need is a plain relational database, some SQL standard features, such as LATERAL, are still missing. PostgreSQL is billed as the most advanced database in the world (or the universe); that is an exaggeration, but it is comprehensive in functionality and open source, under a permissive MIT-style license that is freer than MySQL's GPL.
