PostgreSQL Master/Slave Upgrade Process

Tags: postgresql, psql, rsync

1. Initial state: master and slave are both up and running.

2. Upgrade process

Master

1). Stop the master and record the latest checkpoint location (this is where your downtime starts).
As the postgres user, run:

$ pg_ctl -D $PGDATA stop -m fast

$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location:           0/C619840

2). Stop the slave and compare its latest checkpoint location

$ pg_ctl -D $PGDATA stop -m fast

$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location:           0/C619840


Since the two checkpoint locations match, we can confirm that the standby has applied all WAL and that there is no data difference between master and slave.
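
Rather than comparing the two values by eye, the check can be scripted; a minimal sketch, assuming passwordless SSH from the master to the standby at 192.168.22.33 and pg_controldata in the PATH on both nodes (paths and hostname are illustrative):

$ # grab the checkpoint location locally and on the standby, then compare
$ M=$(pg_controldata /u02/pgdata/testmig | awk -F: '/Latest checkpoint location/ {gsub(/ /,"",$2); print $2}')
$ S=$(ssh 192.168.22.33 "pg_controldata /u02/pgdata/testmig" | awk -F: '/Latest checkpoint location/ {gsub(/ /,"",$2); print $2}')
$ [ "$M" = "$S" ] && echo "checkpoints match: $M" || echo "MISMATCH: master=$M standby=$S"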


3). Save the old configuration files
$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp


4). Upgrade the master in link mode (on a multi-core server you can add the "-j" option so pg_upgrade runs parts of the work in parallel; the new 9.5 cluster in /u02/pgdata/testmig95 is assumed to have been initialized with initdb already)

$ export PGDATAOLD=/u02/pgdata/testmig/
$ export PGDATANEW=/u02/pgdata/testmig95/
$ export PGBINOLD=/u01/app/postgres/product/91/db_8/bin/
$ export PGBINNEW=/u01/app/postgres/product/95/db_5/bin/
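
Before the real upgrade it is common to run pg_upgrade in check mode first; a minimal sketch using the variables exported above (the "-j 4" job count is just an illustrative value for a multi-core server):

$ # check-only run: reports problems without changing any data
$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -c -j 4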

$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -k
(Usually you would do a "-c" check run before doing the real upgrade.) When using link mode the files get hard-linked instead of copied, which is much faster and saves disk space. The downside is that you cannot revert to the old cluster if anything goes wrong. When it goes fine, the output looks like this:

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for invalid "line" user columns                    ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Setting oldest multixact ID on new cluster                  ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Setting minmxid counter in new cluster                      ok
Adding ".old" suffix to old global/pg_control               ok

If you want to start the old cluster, you will need to remove
the ".old" suffix from /u02/pgdata/testmig/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.

Linking user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh


5). Restore the configuration files to the new directory

$ mkdir -p /u02/pgdata/testmig95/pg_log
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
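
To be sure the restored files are identical to what was saved earlier, a simple diff works; a minimal sketch:

$ diff /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf && echo "postgresql.conf unchanged"
$ diff /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf && echo "pg_hba.conf unchanged"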


6). Start and then stop the upgraded instance, and check the log file to confirm everything looks normal (a quick log-scan sketch follows)

$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l /u02/pgdata/testmig95/pg_log/log.log start
$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ stop

The upgraded cluster has now been verified and is completely shut down again (next we rebuild the standby).
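
One quick way to confirm the instance came up cleanly is to scan its log for error-level messages; a minimal sketch against the log file used above:

$ # prints matching lines, or a reassuring message when nothing matches
$ grep -iE "FATAL|ERROR|PANIC" /u02/pgdata/testmig95/pg_log/log.log || echo "no errors found"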


Slave


1). Save the configuration files
$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
$ cp /u02/pgdata/testmig/recovery.conf /var/tmp

Synchronize the master's data directory to the standby (this will be very fast because it creates hard links on the standby server instead of copying the user files); a dry-run sketch follows the commands:

$ cd /u02/pgdata
$ rsync --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata
$ cd /u03
$ rsync -r pgdata/testmig95 192.168.22.33:/u03/pgdata/testmig95
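
It can be reassuring to preview what rsync would transfer before running it for real; a minimal sketch that adds --dry-run and --itemize-changes to the same command:

$ rsync --dry-run --itemize-changes --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata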

2). Restore the configuration files on the standby
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
$ cp /var/tmp/recovery.conf /u02/pgdata/testmig95/recovery.conf

3). Start the master
$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

4). Start the standby
$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

5). Check the standby log file

LOG:  database system was shut down at 2017-01-19 07:51:24 GMT
LOG:  creating missing WAL directory "pg_xlog/archive_status"
LOG:  entering standby mode
LOG:  started streaming WAL from primary at 0/E000000 on timeline 1
LOG:  consistent recovery state reached at 0/E024D38
LOG:  redo starts at 0/E024D38
LOG:  database system is ready to accept read only connections
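
Streaming replication can also be verified from the master side; a minimal sketch, run as the postgres user on the master:

$ psql -c "select client_addr, state, sent_location, replay_location from pg_stat_replication;"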

6). Other checks on the standby

$ psql
psql (9.5.5)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

postgres=# \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description
-----------+---------+------------+-----------------------------------------
 adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# \dx
                                      List of installed extensions
      Name      | Version |   Schema   |                            Description
----------------+---------+------------+-------------------------------------------------------------------
 pg_buffercache | 1.0     | public     | examine the shared buffer cache
 pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

testmig=# \d
              List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | pg_buffercache   | view  | postgres
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(5 rows)

testmig=# select count(*) from pgbench_accounts;
  count
---------
 1000000
(1 row)


7). Run analyze_new_cluster.sh on the master


$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/u01/app/postgres/product/95/db_5/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "testmig": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "testmig": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "testmig": Generating default (full) optimizer statistics
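
To confirm that statistics were actually gathered, the analyze timestamps can be checked; a minimal sketch against the testmig database:

$ psql -d testmig -c "select relname, last_analyze from pg_stat_user_tables order by relname;"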


8). Delete the old cluster on the master

$ ./delete_old_cluster.sh

Copy the script to the standby, or manually delete the old cluster directories on the standby (a sanity-check sketch follows):
$ rm -rf /u02/pgdata/testmig
$ rm -rf /u03/pgdata/testmig
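
As a final sanity check (ideally before running the deletes), confirm that the live instance really points at the new data directory, and use df afterwards to see the reclaimed space; a minimal sketch:

$ psql -c "show data_directory;"
$ df -h /u02 /u03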


This article is from the "Yiyi" blog; please keep this source: http://heyiyi.blog.51cto.com/205455/1917415
