From May 4 to May 24, three weeks of on-and-off wrestling with Greenplum finally came to an end. We set out to expand the cluster, found that expanding required an upgrade first, and the upgrade then surfaced a pile of errors, so we had to pause and repair the database; once it was repaired, we continued the upgrade and finally completed the expansion. I recorded the implementation and the troubleshooting along the way in eight blog posts; this one summarizes the whole process. I had meant to write it long ago, but with things piling up at home and outside, it dragged on until today, when I finally found a spare moment to put pen to paper.
The trigger was that the compute and storage capacity of the GP cluster was close to its limit, so we planned to add segment hosts. The cluster contained two kinds of hardware, HP DL380 G8 and IBM X3650 M4 7915nwz; the OS was RHEL 6.2 64-bit, and the database was Greenplum Database 4.2.7.2 build 1 (PostgreSQL 8.2.15). The servers slated for the expansion were HP DL380 G9. Here lay the problem: the minimum RHEL version the HP DL380 G9 supports is 6.5, and forcing 6.2 onto it would not even recognize the RAID controller, so the cluster would have to run two OS versions. Worse still, GP 4.2.7.2 does not support RHEL 6.5; that is, it was never tested on 6.5, so forcing it on might work, or might not.
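Before committing to any plan, it is worth pinning down exactly what is running where. A minimal sketch, assuming a standard master environment; the hostfile name is illustrative:

```bash
# OS release, checked on every host at once
# (all_hosts_file lists every node in the cluster)
gpssh -f all_hosts_file -e 'cat /etc/redhat-release'

# Greenplum and PostgreSQL versions as reported by the database itself
psql -d postgres -c "SELECT version();"
```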
After weighing the options, we finally decided to upgrade GP to the latest version. The actual steps are described in the following two posts (a condensed sketch of the core command follows the list):
- Greenplum Database Upgrade in Practice (Part 1)
- Greenplum Database Upgrade in Practice (Part 2)
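For orientation, the heart of a 4.2-to-4.3 in-place upgrade is the migration utility shipped with the new binaries. A minimal sketch, assuming a cluster with mirror segments; the installation paths are illustrative, not the actual ones from our environment:

```bash
# Load the environment of the NEW installation first
source /usr/local/greenplum-db-4.3.x.x/greenplum_path.sh

# Migrate the catalog from the old installation to the new one;
# gpmigrator_mirror is the variant for clusters with mirrors,
# while plain gpmigrator handles systems without them
gpmigrator_mirror /usr/local/greenplum-db-4.2.7.2 \
                  /usr/local/greenplum-db-4.3.x.x
```

Take a full backup before running it; if the migration fails partway, the backup is the only reliable way back.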
Then came the expansion itself; the actual procedure is described in the following two posts (a sketch of the gpexpand phases follows the list):
- Greenplum Database Expansion in Practice (Part 1): Preparation
- Greenplum Database Expansion in Practice (Part 2): Implementation and Wrap-up
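At its core the expansion is driven by gpexpand in three phases. A minimal sketch; the input file name, database name, and time window are all illustrative:

```bash
# Phase 1: initialize the new segments from an input file
# (gpexpand's interactive interview can generate this file)
gpexpand -i new_segments_input -D mydb

# Phase 2: redistribute existing tables onto the new segments,
# here allowing at most 60 hours per invocation; rerun until done
gpexpand -d 60:00:00 -D mydb

# Phase 3: after redistribution completes, drop the expansion schema
gpexpand -c -D mydb
```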
Countless problems erupted along the way; the main ones are recorded in the posts below (a few of the underlying catalog checks are sketched after the list):
- How to fix Greenplum gpcheckcat errors on persistent tables
- How to fix the missing distribution policy error in Greenplum pg_dump backups
- How to troubleshoot Greenplum metadata errors that the standard repair commands cannot fix
- How to fix metadata inconsistencies between the Greenplum master and segment nodes
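As a starting point, these are the kinds of checks those posts revolve around. A minimal sketch, with the port, database name, and output directory all illustrative:

```bash
# Run only gpcheckcat's persistent-table checks, writing proposed
# repair SQL into a directory for review before anything is applied
gpcheckcat -p 5432 -R persistent -g ./repair_scripts mydb

# Heap tables with no entry in gp_distribution_policy are one cause
# of the missing-distribution-policy complaint during dumps
psql -d mydb -c "
  SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema', 'gp_toolkit')
    AND c.oid NOT IN (SELECT localoid FROM gp_distribution_policy);"

# Compare a catalog table's row count on the master with the count
# on each segment to spot master/segment inconsistencies
psql -d mydb -c "SELECT count(*) FROM pg_class;"
psql -d mydb -c "
  SELECT gp_segment_id, count(*)
  FROM gp_dist_random('pg_class')
  GROUP BY 1 ORDER BY 1;"
```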
Whew, and that is the whole of it. Written out like this it hardly looks like anything, but actually grinding through each piece at the time was genuinely painful!