pg_dump explained with examples (backing up PostgreSQL and Greenplum databases)


I. Using pg_dump
Importing and exporting data is one of the most commonly used database features, and every database ships tools for it: Oracle has exp/imp, Informix has dbexport/dbimport, MySQL has mysqldump, and PostgreSQL provides the corresponding tools pg_dump and pg_restore.
pg_dump is the tool for backing up a PostgreSQL database. It can take a complete, consistent backup while the database is in use, without blocking other users' access to the database.
The dump can be written as a script or as an archive file. A dump script is plain text containing the SQL commands needed to rebuild the database and restore it to the state it was in when the script was saved; such a script is restored with psql. It can be used to rebuild the database on another machine, even on different hardware, and with some modifications even on other SQL database products.
The archive file formats must be used together with pg_restore to rebuild the database. They allow pg_restore to select what to restore, and even to reorder items before restoring them. Archive files are also portable across platforms.
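For example (a minimal sketch; the database names mydb and mydb_copy and the file names are placeholders, and the target database must already exist), a plain-text dump is restored with psql, while a custom-format archive is restored with pg_restore:
E:\>pg_dump -U postgres -Fp -f mydb.sql mydb
E:\>psql -U postgres -f mydb.sql mydb_copy
E:\>pg_dump -U postgres -Fc -f mydb.dump mydb
E:\>pg_restore -U postgres -d mydb_copy mydb.dump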
D:\Program Files\powercmd>pg_dump --help
pg_dump dumps a database as a text file or to other formats.
Usage: pg_dump [options]... [dbname]
General options:
  -f, --file=FILENAME          output file or directory name
  -F, --format=c|d|t|p         output file format (custom, directory, tar, plain text)
  -v, --verbose                verbose mode
  -Z, --compress=0-9           compression level for compressed formats
  --lock-wait-timeout=TIMEOUT  fail after waiting TIMEOUT for a table lock
  --help                       show this help, then exit
  --version                    output version information, then exit
Options controlling the output content:
  -a, --data-only              dump only the data, not the schema
  -b, --blobs                  include large objects in the dump
  -c, --clean                  clean (drop) database objects before recreating them
  -C, --create                 include commands to create the database in the dump
  -E, --encoding=ENCODING      dump the data in encoding ENCODING
  -n, --schema=SCHEMA          dump only the named schema(s)
  -N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)
  -o, --oids                   include OIDs in the dump
  -O, --no-owner               skip restoration of object ownership in plain-text format
  -s, --schema-only            dump only the schema, no data
  -S, --superuser=NAME         superuser user name to use in the plain-text format
  -t, --table=TABLE            dump only the named table(s)
  -T, --exclude-table=TABLE    do NOT dump the named table(s)
  -x, --no-privileges          do not dump privileges (grant/revoke)
  --binary-upgrade             for use by upgrade utilities only
  --column-inserts             dump data as INSERT commands with column names
  --disable-dollar-quoting     disable dollar quoting, use SQL standard quoting
  --disable-triggers           disable triggers during data-only restore
  --inserts                    dump data as INSERT commands, rather than COPY
  --no-security-labels         do not dump security label assignments
  --no-tablespaces             do not dump tablespace assignments
  --no-unlogged-table-data     do not dump unlogged table data
  --quote-all-identifiers      quote all identifiers, even if not key words
  --serializable-deferrable    wait until the dump can run without anomalies
  --use-set-session-authorization
                               use SET SESSION AUTHORIZATION commands instead of
                               ALTER OWNER commands to set ownership
Connection options:
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as the specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before dump
If no database name is supplied, the value of the PGDATABASE environment variable is used.
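For example (a sketch for the Windows command prompt; the file name testdb1_env.sql is a placeholder), setting PGDATABASE lets you omit the database name:
E:\>set PGDATABASE=TestDb1
E:\>pg_dump -U postgres -s -f testdb1_env.sql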

II. pg_dump usage examples
1. Create two databases
CREATE DATABASE "TestDb1"
  WITH OWNER = "TestRole1"
       ENCODING = 'UTF8'
       TABLESPACE = "TestTbs1";
CREATE DATABASE "TestDb2"
  WITH OWNER = "TestRole1"
       ENCODING = 'UTF8'
       TABLESPACE = "TestTbs1";
In TestDb1, create the tables csm_bill and cfg_public_int_transport, insert a few records, and create an index that uses the index tablespace testtbsindex, as sketched below.
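The original table definitions are not shown; as a hedged sketch with made-up columns, the test objects might be created roughly like this:
CREATE TABLE csm_bill (id integer PRIMARY KEY, bill_no varchar(32), amount numeric);
CREATE TABLE cfg_public_int_transport (id integer PRIMARY KEY, name varchar(64));
INSERT INTO csm_bill VALUES (1, 'B001', 100.00), (2, 'B002', 250.50);
CREATE INDEX idx_csm_bill_no ON csm_bill (bill_no) TABLESPACE testtbsindex;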

2. Migrating only the database structure:
E:\>pg_dump -U TestRole1 -s -f testdb1.sql TestDb1
Password:
Dumping with -U TestRole1 gives exactly the same result as dumping with the superuser -U postgres:
E:\>pg_dump -U postgres -s -f testdb11.sql TestDb1
E:\>psql -U TestRole2 -f testdb1.sql TestDb2 >a.txt 2>&1
Password for user TestRole2:
When importing as -U TestRole2, there are usually many permission errors because that role cannot set the owners of the restored database objects, so it is best to import as the superuser -U postgres:
E:\>psql -U postgres -f testdb1.sql TestDb2 >a.txt 2>&1
The option not to dump privileges is -x:
E:\>pg_dump -U postgres -x -s -f testdb12.sql TestDb1
testdb12.sql has a few lines fewer than testdb1.sql:
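The missing lines are the privilege statements that -x suppresses; in a typical plain-text dump they look roughly like this (a sketch, the exact objects depend on the database):
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;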


So that testdb1.sql can be run multiple times, you can add two lines at the beginning of the file:
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
or use the -c option:
E:\>pg_dump -U postgres -c -x -s -f testdb13.sql TestDb1
testdb13.sql has the following lines more than testdb1.sql:
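These extra lines are the DROP statements that -c emits before the corresponding CREATE statements, in reverse dependency order; based on the example objects sketched earlier they would look roughly like this (a sketch, not actual output):
DROP INDEX idx_csm_bill_no;
DROP TABLE cfg_public_int_transport;
DROP TABLE csm_bill;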

At this point, the script can be run multiple times:
E:\>psql -U postgres -f testdb13.sql TestDb2 >a.txt 2>&1
However, if the two databases contain different tables or indexes, the first method should be used, because the second method reports errors when objects it tries to drop do not exist.

3. Migrating both the database structure and the data (this achieves database backup and recovery)
Copying the data with COPY commands:
E:\>pg_dump -U postgres TestDb1 >testdb14.sql

Copying the data with INSERT statements:
E:\>pg_dump -U postgres --column-inserts TestDb1 >testdb15.sql
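The difference between the two dumps is in the data section. Assuming the csm_bill columns sketched earlier, the two files would contain roughly the following (a sketch, not actual output):
-- in testdb14.sql (COPY format):
COPY csm_bill (id, bill_no, amount) FROM stdin;
1	B001	100.00
2	B002	250.50
\.
-- in testdb15.sql (--column-inserts):
INSERT INTO csm_bill (id, bill_no, amount) VALUES (1, 'B001', 100.00);
INSERT INTO csm_bill (id, bill_no, amount) VALUES (2, 'B002', 250.50);
COPY restores much faster; --column-inserts (or --inserts) is slower but produces statements that are easier to load into other database products.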

4. Migrating the cpost database structure from PostgreSQL on a remote Linux server to the local PostgreSQL
(1) Build an identical environment locally
CREATE USER cpost INHERIT CREATEDB;
CREATE TABLESPACE pis_data OWNER cpost LOCATION 'E:\postgresql/data/pis_data';
CREATE TABLESPACE pis_index OWNER cpost LOCATION 'E:\postgresql/data/pis_index';
The remote database cpost still uses the default tablespace:
CREATE DATABASE cpost
  WITH OWNER = cpost
       -- ENCODING = 'LATIN9'
       TABLESPACE = pg_default
       -- LC_COLLATE = 'C'
       -- LC_CTYPE = 'C'
       CONNECTION LIMIT = -1;
Creating the database with the three commented-out parameters above produced an error locally, so the database was actually created as follows:
CREATE DATABASE cpost
  WITH OWNER = cpost
       ENCODING = 'UTF8'
       TABLESPACE = pg_default
       LC_COLLATE = 'Chinese (Simplified)_People''s Republic of China.936'
       LC_CTYPE = 'Chinese (Simplified)_People''s Republic of China.936'
       CONNECTION LIMIT = -1;
(2) Using pg_dump to migrate the table structure
With the -h option, the exported SQL file is saved directly on the local machine:
E:\>pg_dump -h 132.10.10.11 -p 1234 -U cpost -x -s -f cpost.sql cpost
E:\>psql -U postgres -f cpost.sql
The import completed, but errors were reported:
psql:cpost.sql:22: ERROR: character 0xe99499 of encoding "UTF8" has no equivalent in "LATIN9"
These are character set errors; for character set issues see my other blog post on PostgreSQL locales and character sets.
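One hedged workaround for such mismatches (which option applies depends on the actual encodings of the remote and local databases) is to dump with an explicit encoding via -E and make the client encoding match during the import, for example:
E:\>pg_dump -h 132.10.10.11 -p 1234 -U cpost -E UTF8 -x -s -f cpost.sql cpost
E:\>set PGCLIENTENCODING=UTF8
E:\>psql -U postgres -f cpost.sql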

III. Using pg_dump and pg_restore for database backup and recovery
E:\>pg_restore --help
pg_restore restores a PostgreSQL database from an archive created by pg_dump.
Usage:
  pg_restore [options]... [filename]
General options:
  -d, --dbname=NAME        connect to the named database
  -f, --file=FILENAME      output file name
  -F, --format=c|d|t       backup file format (should be automatic)
  -l, --list               print summarized TOC of the archive
  -v, --verbose            verbose mode
  --help                   show this help, then exit
  --version                output version information, then exit
Options controlling the restore:
  -a, --data-only          restore only the data, no schema
  -c, --clean              clean (drop) database objects before recreating them
  -C, --create             create the target database
  -e, --exit-on-error      exit on error; the default is to continue
  -I, --index=NAME         restore the named index
  -j, --jobs=NUM           use this many parallel jobs to restore
  -L, --use-list=FILENAME  use the table of contents from this file for selecting/ordering output
  -n, --schema=NAME        restore only objects in this schema
  -O, --no-owner           skip restoration of object ownership
  -P, --function=NAME(args)  restore the named function
  -s, --schema-only        restore only the schema, no data
  -S, --superuser=NAME     superuser user name to use for disabling triggers
  -t, --table=NAME         restore the named table
  -T, --trigger=NAME       restore the named trigger
  -x, --no-privileges      skip restoration of access privileges (grant/revoke)
  -1, --single-transaction restore as a single transaction
  --disable-triggers       disable triggers during data-only restore
  --no-data-for-failed-tables
                           do not restore data of tables that could not be created
  --no-security-labels     do not restore security labels
  --no-tablespaces         do not restore tablespace assignments
  --use-set-session-authorization
                           use SET SESSION AUTHORIZATION commands instead of
                           ALTER OWNER commands to set ownership
Connection options:
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as the specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before restore
If no input file name is supplied, standard input is used.
1. Backup and restore using the custom format:
E:\>pg_dump -U postgres -Fc TestDb1 >testdb1.dump
postgres=# DROP DATABASE "TestDb2";
DROP DATABASE
postgres=# CREATE DATABASE "TestDb2"
postgres-#   WITH OWNER = "TestRole2"
postgres-#   TABLESPACE = "TestTbs2";
CREATE DATABASE
E:\>pg_restore -U postgres -d TestDb2 testdb1.dump >a.txt 2>&1
2. Backup and restore using the tar format:
E:\>pg_dump -U postgres -Ft TestDb1 >testdb1.tar
postgres=# DROP DATABASE "TestDb2";
DROP DATABASE
postgres=# CREATE DATABASE "TestDb2"
postgres-#   WITH OWNER = "TestRole2"
postgres-#   TABLESPACE = "TestTbs2";
CREATE DATABASE
E:\>pg_restore -U postgres -d TestDb2 testdb1.tar >a.txt 2>&1
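With a custom or directory format archive, pg_restore can also run the restore with several parallel jobs via -j (a sketch; the job count 4 is arbitrary, and -j cannot be combined with --single-transaction):
E:\>pg_restore -U postgres -d TestDb2 -j 4 testdb1.dump >a.txt 2>&1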
