Migrate a MySQL database to PostgreSQL

Tags: psql

I read through a lot of material and tried several approaches. The method I finally settled on is as follows:

1. Export the MySQL table definitions (no data):
mysqldump --no-data [dbname] > dbdef.sql

2. Use mysql2postgres to convert the script to pgsql
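A minimal sketch of running the conversion, assuming the mysql2postgres Ruby gem; the executable name and configuration format vary by version, so treat this invocation as hypothetical and check the tool's README:

# hypothetical invocation of the mysql2postgres gem
gem install mysql2postgres
mysql2postgres config.yml   # reads MySQL connection settings from the YAML
                            # config and writes out a pgsql-compatible script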

3. The generated script is not necessarily perfect. Try importing it into pgsql, debug the errors, and fix them by hand. I ran into only a few problems: the zerofill in MySQL column definitions has to be removed manually, and some unsigned definitions generate constraints, which can be removed if not needed. In addition, all triggers are broken and can only be rebuilt manually later (step 7 below).
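For illustration, here is how a MySQL column using both features maps to pgsql (the table and column names are hypothetical): zerofill has no pgsql equivalent and is simply dropped, while unsigned can be approximated with a CHECK constraint.

-- MySQL: `id` int(10) unsigned zerofill NOT NULL
-- PostgreSQL equivalent (zerofill dropped, unsigned approximated):
CREATE TABLE sample (
    id integer NOT NULL CHECK (id >= 0)
);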

4. Export the MySQL data:
mysqldump -v -nt --complete-insert=TRUE --compact --no-create-info --skip-quote-names [dbname] > dbdata.sql
Earlier versions of pgsql do not support multi-row inserts; for those you must add --extended-insert=FALSE, which costs a great deal of import performance.
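With these options the dump contains plain multi-row INSERT statements with explicit column names and no backtick quoting, roughly like this (hypothetical table):

INSERT INTO sample (id, name) VALUES (1, 'foo'), (2, 'bar');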

5. Escape characters
By default MySQL treats '\' in strings as an escape character, while pgsql does not. Modify postgresql.conf:
backslash_quote = on
escape_string_warning = off
standard_conforming_strings = off
You can change these back to the defaults after the data is imported.
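All three settings can also be changed per session, so instead of editing postgresql.conf you can prepend them to the data dump, for example:

-- prepend to dbdata.sql to affect only the import session
SET backslash_quote = on;
SET escape_string_warning = off;
SET standard_conforming_strings = off;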

6. Import the table definitions and data into pgsql:
psql -d [dbname] < dbdef.sql
psql -d [dbname] < dbdata.sql

7. Rebuild the triggers
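MySQL trigger bodies cannot be carried over directly: in pgsql a trigger must call a separate trigger function written in PL/pgSQL. A minimal sketch, with a hypothetical table and column:

-- hypothetical example: a timestamp column maintained by a trigger
CREATE OR REPLACE FUNCTION sample_before_insert() RETURNS trigger AS $$
BEGIN
    NEW.created_at := now();  -- created_at is an assumed column
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sample_bi BEFORE INSERT ON sample
    FOR EACH ROW EXECUTE PROCEDURE sample_before_insert();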

8. Handle auto-incrementing primary key columns
Because the imported rows already carry values for these columns, the corresponding sequences in pgsql are never advanced. Set the current value of each auto-increment column's sequence with a statement like:
SELECT setval('sample_id_seq', max(id)) FROM sample;
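If you would rather not hard-code the sequence name, pg_get_serial_sequence can look it up from the table and column (this variant assumes the table is not empty, since setval rejects NULL):

SELECT setval(pg_get_serial_sequence('sample', 'id'), max(id)) FROM sample;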

Finally, if there is a large amount of data and import performance matters, you can drop the primary keys, indexes, and constraints before the import and add them back once it completes; a sketch follows the parameter list below. Also, the psql client does not seem to read data from a pipe quickly enough; connecting in TCP mode with psql -h localhost works better. There are also some parameters worth tuning for bulk imports:

autovacuum = off
wal_level = minimal
archive_mode = off
full_page_writes = off
fsync = off
checkpoint_segments = 50
checkpoint_timeout = 1h

Set maintenance_work_mem as large as available memory allows.
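As a sketch of the drop-and-recreate approach (the table, constraint, and index names are hypothetical):

-- before the import
ALTER TABLE sample DROP CONSTRAINT sample_pkey;
DROP INDEX IF EXISTS sample_name_idx;
-- ... run the import ...
-- after the import
ALTER TABLE sample ADD PRIMARY KEY (id);
CREATE INDEX sample_name_idx ON sample (name);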


Author: RuralHunter
