PostgreSQL Database Upgrade Steps (pg_dumpall and pg_upgrade)


I. Database upgrade from 9.4 to 9.5


1. pg_dumpall export and import (suitable when the data volume is small)

Log in as the postgres user and export:

$ pg_dumpall > db_export.out

Check the current locale:

$ env | grep LANG
LANG=en_US.UTF-8



If the database character set is en_US.UTF-8 but the operating system character set is zh_CN.UTF-8, set LANG before importing.


The import steps are:

Log in as the postgres user, then run:

$ export LANG=en_US.UTF-8
$ psql -f db_export.out postgres
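
If the old and new clusters can run side by side during the migration, the dump can also be piped straight from the old server into the new one instead of going through an intermediate file. A minimal sketch, assuming the old 9.4 server listens on port 5432 and the new 9.5 server on port 5433 (both port numbers are assumptions, not taken from this setup):

$ /opt/pg/9.4/bin/pg_dumpall -p 5432 | /opt/pg/9.5/bin/psql -p 5433 -d postgres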


2. pg_upgrade (in-place upgrade)

Upgrade path: 9.4 -> 9.5

Prerequisites: the 9.4 cluster contains the complete data; after installing 9.5, the new cluster only needs to be initialized with initdb.
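
As a sketch of that prerequisite, the new 9.5 cluster could be initialized as follows, assuming the /pgdata directory and the en_US.UTF-8 locale that appear later in this article:

$ /opt/pg/9.5/bin/initdb -D /pgdata -E UTF8 --locale=en_US.UTF-8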


1) Pre-upgrade consistency check


Log in as the postgres user and run the following commands:

# su - postgres
$ pwd
/home/postgres


Run the check with the pg_upgrade binary from the new 9.5 bin directory.


$ /opt/pg/9.5/bin/pg_upgrade -c -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*
$
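
The same directories can also be passed to pg_upgrade through its environment variables (PGBINOLD, PGBINNEW, PGDATAOLD, PGDATANEW) instead of command-line options, which keeps the later commands shorter. A sketch of that variant:

$ export PGBINOLD=/opt/pg/9.4/bin
$ export PGBINNEW=/opt/pg/9.5/bin
$ export PGDATAOLD=/pgdata94
$ export PGDATANEW=/pgdata
$ /opt/pg/9.5/bin/pg_upgrade -c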

2). Perform the upgrade


There are two upgrade methods. Copy mode (the default) copies the data files from the old 9.4 pgdata into the new 9.5 pgdata; it is slower, but the two data directories remain independent and two copies of the data are kept. Hard-link mode, selected by adding the --link option to the command line, builds hard links instead of copying the data; it is fast and keeps only one physical copy of the data (see the example below).
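
For the hard-link method, the upgrade command shown in the next step is simply run with --link added, for example:

$ /opt/pg/9.5/bin/pg_upgrade --link -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin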


(Note: both database clusters must be stopped while this step is performed.)
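
One way to stop both clusters before running pg_upgrade, assuming they were started with pg_ctl (if an init script is used, stop them through that instead):

$ /opt/pg/9.4/bin/pg_ctl -D /pgdata94 stop -m fast
$ /opt/pg/9.5/bin/pg_ctl -D /pgdata stop -m fast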


$ /opt/pg/9.5/bin/pg_upgrade -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Creating newly-required TOAST tables                        ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
$


$ ll
total 1208
-rwx------. 1 postgres postgres 749 Apr  9 18:52 analyze_new_cluster.sh
-rwx------. 1 postgres postgres     Apr  9 18:52 delete_old_cluster.sh
$


3). Start the database with the new version's startup script


$ exit
logout
# /etc/init.d/postgresql start
Starting PostgreSQL: OK
# su - postgres
$ psql
psql (9.5.2)
Type "help" for help.


postgres=# \l

                                          List of databases
       Name       |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------------+----------+----------+-------------+-------------+------------------------
 a_authentication | vincent  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 a_resources      | vincent  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres         | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 a_server         | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres +
                  |          |          |             |             | =Tc/postgres          +
                  |          |          |             |             | acent=CTc/postgres
 template0        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres           +
                  |          |          |             |             | postgres=CTc/postgres
 template1        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres +
                  |          |          |             |             | =c/postgres
 a_security       | uccc     | UTF8     | en_US.UTF-8 | en_US.UTF-8 | a=CTc/uccc            +
                  |          |          |             |             | =Tc/uccc              +
                  |          |          |             |             | a=CTc/uccc
(7 rows)


postgres=# \q
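
As an extra check from the shell, a quick query such as the following (output not shown here) confirms that the new 9.5 binaries are the ones serving connections:

$ psql -c "SELECT version();"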


4). Generate new optimizer statistics


$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/opt/pg/9.5/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "a_authentication": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "a_resources": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "a_server": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "a_security": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "a_authentication": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "a_resources": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "a_server": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "a_security": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "a_authentication": Generating default (full) optimizer statistics
vacuumdb: processing database "a_resources": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "a_server": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "a_security": Generating default (full) optimizer statistics

Done
$


5). Clean up the old version's data


After the new database has been running for a period of time, you can execute the delete_old_cluster.sh script to delete the old version's pgdata directory:

$ more delete_old_cluster.sh
#!/bin/sh

rm -rf '/pgdata94'
$
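
Once you are confident the 9.5 cluster is healthy, and any backups you still need have been taken, the cleanup itself is a single command:

$ ./delete_old_cluster.sh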


This article is from the "Yiyi" blog; please keep this source when reposting: http://heyiyi.blog.51cto.com/205455/1762076
