PostgreSQL Master/Slave Upgrade Procedure


Tags: ha, upgrade

1. Initial state: both the master and the slave are running.

2. Upgrade procedure

Master

1). Shut down the master and record the latest checkpoint location. This is where your downtime starts.
As the postgres user, run:

$ pg_ctl -D $PGDATA stop -m fast

$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location:           0/C619840

2). Shut down the slave and compare the latest checkpoint location:

$ pg_ctl -D $PGDATA stop -m fast

$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location:           0/C619840


Because the two checkpoint locations are identical, we can confirm that the standby has applied all of the WAL and that there is no data difference between master and slave.
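The comparison above can also be scripted. Below is a minimal sketch; how the control data is fetched from each node (e.g. over ssh) is an assumption, so here the two outputs are inlined as sample strings matching the transcript above:

```shell
# Extract the "Latest checkpoint location" value from pg_controldata output.
ckpt_location() {
    printf '%s\n' "$1" | awk -F':[ ]*' '/Latest checkpoint location/ {print $2}'
}

# In a real run these would be captured from both nodes, e.g.:
#   master_out=$(pg_controldata "$PGDATA")
#   slave_out=$(ssh standby-host pg_controldata "$PGDATA")
master_out='Latest checkpoint location:           0/C619840'
slave_out='Latest checkpoint location:           0/C619840'

if [ "$(ckpt_location "$master_out")" = "$(ckpt_location "$slave_out")" ]; then
    echo "checkpoints match: standby has applied all WAL"
else
    echo "checkpoint mismatch: do not proceed with the upgrade" >&2
    exit 1
fi
```

Only proceed with the upgrade when the two values are equal; a mismatch means the standby has not replayed all WAL yet.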


3). Save the old cluster's configuration files:
$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp


4). Upgrade the master in link mode. On a multi-core server, add the "-j" option so pg_upgrade runs its jobs in parallel:

$ export PGDATAOLD=/u02/pgdata/testmig/
$ export PGDATANEW=/u02/pgdata/testmig95/
$ export PGBINOLD=/u01/app/postgres/product/91/db_8/bin/
$ export PGBINNEW=/u01/app/postgres/product/95/db_5/bin/
 
$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -k
(Usually you would do a "-c" check run before the real upgrade.) In link mode the files are hard-linked instead of copied, which is much faster and saves disk space; the downside is that you cannot revert to the old cluster if anything goes wrong. When the upgrade succeeds, the output looks like this:

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for invalid "line" user columns                    ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Setting oldest multixact ID on new cluster                  ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Setting minmxid counter in new cluster                      ok
Adding ".old" suffix to old global/pg_control               ok

If you want to start the old cluster, you will need to remove
the ".old" suffix from /u02/pgdata/testmig/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.

Linking user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh


5). Restore the configuration files into the new data directory:

$ mkdir -p /u02/pgdata/testmig95/pg_log
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf 
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf


6). Start and then stop the upgraded instance, checking the log file to confirm that everything is fine:

$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l /u02/pgdata/testmig95/pg_log/log.log start   
$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ stop 

The database cluster is now upgraded and has been shut down cleanly (the standby will be rebuilt next).


Slave


1). Save the configuration files:
$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
$ cp /u02/pgdata/testmig/recovery.conf /var/tmp

Sync the master's data directory to the standby (this will be very fast because rsync creates hard links on the standby server instead of copying the user files):

$ cd /u02/pgdata  
$ rsync --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata
$ cd /u03
$ rsync -r pgdata/testmig95 192.168.22.33:/u03/pgdata/testmig95

2). Restore the configuration files on the standby:
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
$ cp /var/tmp/recovery.conf /u02/pgdata/testmig95/recovery.conf

3). Start the master:
$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

4). Start the standby:
$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

5). Check the standby's log file:

LOG:  database system was shut down at 2017-01-19 07:51:24 GMT
LOG:  creating missing WAL directory "pg_xlog/archive_status"
LOG:  entering standby mode
LOG:  started streaming WAL from primary at 0/E000000 on timeline 1
LOG:  consistent recovery state reached at 0/E024D38
LOG:  redo starts at 0/E024D38
LOG:  database system is ready to accept read only connections
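Scanning the log for problems can be scripted as well. This is a sketch that flags ERROR/FATAL/PANIC lines; the real log path is the one restored above, and here a sample string mirroring the healthy log is used instead:

```shell
# Return 1 if the given log text contains ERROR, FATAL or PANIC lines.
check_log() {
    if printf '%s\n' "$1" | grep -E 'ERROR|FATAL|PANIC'; then
        return 1
    fi
    return 0
}

# In a real run:
#   check_log "$(cat /u02/pgdata/testmig95/pg_log/log.log)"
sample='LOG:  entering standby mode
LOG:  database system is ready to accept read only connections'
check_log "$sample" && echo "standby log looks clean"
```

When the function returns non-zero, the offending lines have already been printed by grep, so they can be inspected directly.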

6). Other checks on the standby:

$ psql
psql (9.5.5)
Type "help" for help.
 
postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)
 
postgres=# \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description              
-----------+---------+------------+-----------------------------------------
 adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)
 
postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# \dx
                                       List of installed extensions
      Name      | Version |   Schema   |                            Description                           
----------------+---------+------------+-------------------------------------------------------------------
 pg_buffercache | 1.0     | public     | examine the shared buffer cache
 pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)
 
testmig=# \d
              List of relations
 Schema |       Name       | Type  |  Owner  
--------+------------------+-------+----------
 public | pg_buffercache   | view  | postgres
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(5 rows)
 
testmig=# select count(*) from pgbench_accounts;
  count 
---------
 1000000
(1 row)
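Replication can also be confirmed from the master side. This is an assumed extra check, not part of the original write-up; note that in 9.5 the position columns are named sent_location/replay_location (PostgreSQL 10 renamed them to *_lsn):

```shell
# On the master: one row per connected standby, state should be 'streaming'.
psql -c "select client_addr, state, sent_location, replay_location from pg_stat_replication;"
```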


7). Run analyze_new_cluster.sh on the master:


$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.
 
If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.
 
If you would like default statistics as quickly as possible, cancel
this script and run:
    "/u01/app/postgres/product/95/db_5/bin/vacuumdb" --all --analyze-only
 
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "testmig": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "testmig": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "testmig": Generating default (full) optimizer statistics


8). Delete the old cluster on the master:

$ ./delete_old_cluster.sh

Copy the script to the standby, or remove the old standby data manually:
$ rm -rf /u02/pgdata/testmig
$ rm -rf /u03/pgdata/testmig


This article originally appeared on the "yiyi" blog; please keep this attribution: http://heyiyi.blog.51cto.com/205455/1917415
