The product was about to go live when the project manager announced a database change: everything in the MySQL database had to be moved to PostgreSQL.
There are Python tools that implement the migration, but they cannot migrate stored procedures, cannot map data types flexibly, and the procedures need rewriting anyway, so to save time I migrated by hand. Without further ado, here is the process.
Use the Navicat for Mysql tool (others such as mysqldump also work) to export the database schema (table structure only, no data); the method is omitted here. The generated file is named mysql.sql.
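For reference, a mysqldump sketch that produces the same structure-only file (the database name DEVELOP and the root account are assumptions here):
# export the table structure only; --no-data skips all row data
mysqldump -u root -p --no-data DEVELOP > mysql.sql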
Use the Linux sed tool to strip all COMMENT clauses and MySQL-specific elements from the exported script and generate a new file, table_new.sql. See the following command:
cat mysql.sql | sed 's/^.*ENGINE=.*$/);/g' | sed 's/COMMENT.*$/,/g' | sed 's/`//g' > table_new.sql
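To illustrate what the pipeline does, here is a hypothetical table definition before and after (not from the real schema):
-- before (mysql.sql)
CREATE TABLE `users` (
  `id` int(12) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `name` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- after (table_new.sql)
CREATE TABLE users (
  id int(12) NOT NULL AUTO_INCREMENT ,
  name varchar(50) DEFAULT NULL,
  PRIMARY KEY (id)
);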
Extract all INDEX statements into a separate file (this step can be skipped; there is another method later):
cat table_new.sql | egrep '\-|^\s*KEY|^\s*UNIQUE' > key.sql
Extract all foreign key statements into a separate file (this step can be skipped; there is another method later):
cat table_new.sql | egrep '\-|^\s*CONSTRAINT' > constraint.sql
Remove the KEY and CONSTRAINT lines and generate a new file, table_last.sql:
cat table_new.sql | sed 's/^\s*KEY.*$//' | sed 's/^\s*CONSTRAINT.*$//' | sed 's/^\s*UNIQUE.*$//' | sed '/^$/d' > table_last.sql
Convert the line endings:
unix2dos table_last.sql
Copy the file to WINDOWS and use Notepad (or another editor) to find and replace certain values; the point of this step is to map MySQL types onto PostgreSQL types. For example:
Find ID int(12) NOT NULL AUTO_INCREMENT, replace with ID bigserial NOT NULL (the auto-increment field)
Find datetime, replace with timestamp
...
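If you would rather stay on Linux, the same replacements can be scripted with sed before the unix2dos step; a minimal sketch assuming these two mappings are the only ones needed:
# map auto-increment int columns to bigserial (the int display width may vary)
sed -i 's/int([0-9]*) NOT NULL AUTO_INCREMENT/bigserial NOT NULL/g' table_last.sql
# map datetime to timestamp; beware: this also hits identifiers that contain "datetime"
sed -i 's/datetime/timestamp/g' table_last.sql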
Hand-fix some special tables: because step 5 removed every line beginning with KEY or CONSTRAINT, any table whose column names begin with KEY or CONSTRAINT (for example KEYCD, KEYTTL) will have lost those columns. Rewrite the CREATE TABLE statements for those tables by hand and replace the original statements in table_last.sql.
Execute table_last.sql in Postgresql (the method is omitted), fine-tuning wherever there are errors until all tables are created successfully.
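For example, from the command line (the database name DEVELOP and the postgres user are assumptions):
psql -U postgres -d DEVELOP -f table_last.sql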
Import the data (export from mysql first, replace the newline characters in the exported files or the import will report an error, then import into postgresql).
Export statement (execute the following statement in MYSQL, copy the results, and execute them again; a data file for each table is then generated in the server directory. The tool I use is Navicat for Mysql. Note: replace table_schema with your own. The data files here are generated under /usr/local/mysql/outfile/; you can change this to another directory.)
SELECT CONCAT('select * from ', table_name,
       ' into outfile ''/usr/local/mysql/outfile/', table_name,
       '.dat'' fields terminated by ''|'';')
FROM information_schema.tables
WHERE table_schema = 'DEVELOP';
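For a hypothetical table named USERS, the generated statement would look like this:
select * from USERS into outfile '/usr/local/mysql/outfile/USERS.dat' fields terminated by '|';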
Copy the generated files to the PostgreSQL server's directory after replacing the newline characters:
sed -i 's/\r//g' outfile/*
Import statement (execute in postgresql, copy the query results, and execute them again. Note that it must be run by a user with superuser privileges. The pgAdmin tool is used here, and the data directory is /tmp/data. Remember to replace the values; my database name is DEVELOP.)
select 'copy necsl.' || table_name || ' from ' || chr(39) || '/tmp/data/'
       || upper(table_name) ||
       '.dat' || chr(39) ||
       ' with delimiter ' || chr(39) || '|' || chr(39) || ';'
from information_schema.tables
where table_schema = 'necsl' and
      table_catalog = 'DEVELOP';
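Again for a hypothetical table users (exported as USERS.dat), the generated statement would look like this:
copy necsl.users from '/tmp/data/USERS.dat' with delimiter '|';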
After the data is imported, you can use the following method to verify that the import is correct:
--Create the following table in both mysql and PG:
create table table_count (tblname varchar(100) primary key not null, tblrecorder integer);
/* Execute the following statement in mysql, copy the results, and execute them again; it will record the row count of each table into table_count */
SELECT CONCAT('insert into table_count select ', '''', table_name, '''', ', count(*) from ',
       table_name, ';')
FROM information_schema.tables
WHERE table_schema = 'DEVELOP';
--Execute the following statement in postgresql, then copy the results and execute them again; it will record the row count of each table into table_count. Then compare against the table_count table in mysql.
select 'insert into necsl.table_count select ' || quote_literal(table_name) || ', count(*) from necsl.' || table_name || ';'
from information_schema.tables
where table_schema = 'necsl' and
      table_catalog = 'DEVELOP';
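To compare the two sides, one simple approach (assuming you can query both servers) is to dump each table_count ordered by name and diff the two outputs:
-- in mysql
select tblname, tblrecorder from table_count order by tblname;
-- in postgresql
select tblname, tblrecorder from necsl.table_count order by tblname;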
--If there is a problem with the import, you can run the following statement in PG to generate statements that truncate all tables; copy the results and execute them to clear everything:
select 'truncate table necsl.' || table_name || ';'
from information_schema.tables
where table_schema = 'necsl' and
      table_catalog = 'DEVELOP';
/* Generate index information and foreign keys: execute the following statements in mysql and copy the results into POSTGRESQL to execute */
/* Generate unique indexes */
select
  CONCAT(
    'CREATE UNIQUE INDEX ',
    table_name, '_IDX_', index_name,
    ' ON ',
    table_name,
    ' (',
    GROUP_CONCAT(column_name order by seq_in_index),
    ');'
  )
from information_schema.statistics
where table_schema = 'DEVELOP'
  AND INDEX_NAME <> 'PRIMARY'
  AND INDEX_SCHEMA = 'DEVELOP'
  AND NON_UNIQUE = 0
GROUP BY index_name, table_name
ORDER BY TABLE_NAME, INDEX_NAME;
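For a hypothetical unique index UK_EMAIL on USERS (EMAIL), the query above emits:
CREATE UNIQUE INDEX USERS_IDX_UK_EMAIL ON USERS (EMAIL);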
/* Generate btree indexes */
select
  CONCAT(
    'CREATE INDEX ',
    table_name, '_IDX_', index_name,
    ' ON ',
    table_name,
    ' (',
    GROUP_CONCAT(column_name order by seq_in_index),
    ');'
  )
from information_schema.statistics
where table_schema = 'DEVELOP'
  AND INDEX_NAME <> 'PRIMARY'
  AND INDEX_SCHEMA = 'DEVELOP'
  AND NON_UNIQUE = 1
GROUP BY index_name, table_name
ORDER BY TABLE_NAME, INDEX_NAME;
/* Generate foreign keys */
select
  concat('alter table ',
         c.TABLE_NAME,
         ' add constraint ',
         c.CONSTRAINT_NAME,
         ' foreign key (',
         c.COLUMN_NAME,
         ') references ',
         c.REFERENCED_TABLE_NAME,
         ' (',
         c.REFERENCED_COLUMN_NAME,
         ');')
from information_schema.TABLE_CONSTRAINTS t, information_schema.KEY_COLUMN_USAGE c
where t.CONSTRAINT_SCHEMA = 'DEVELOP'
  AND t.CONSTRAINT_TYPE = 'FOREIGN KEY'
  AND t.TABLE_SCHEMA = 'DEVELOP'
  AND c.REFERENCED_TABLE_SCHEMA = 'DEVELOP'
  AND t.CONSTRAINT_NAME = c.CONSTRAINT_NAME
  and t.table_name = c.table_name;
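For a hypothetical foreign key FK_ORDER_USER from ORDERS.USER_ID to USERS.ID, the generated statement would look like this:
alter table ORDERS add constraint FK_ORDER_USER foreign key (USER_ID) references USERS (ID);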
After generating the above information, create the following function in PG to reset all the sequences (if there are no auto-increment columns you can skip this step), and put it in the public schema.
CREATE OR REPLACE FUNCTION pro_resetallseq()
RETURNS void AS
$BODY$
DECLARE
    tmp VARCHAR(512);
    maxval bigint;
    stmt record;
BEGIN
    FOR stmt IN select sequence_name, REPLACE(sequence_name, '_id_seq', '') as tnm
                from information_schema.sequences
                where sequence_catalog = 'DEVELOP' and sequence_schema = 'necsl' LOOP
        -- the table name is derived from the sequence name; assumes the auto-increment column is named ID
        tmp := 'SELECT MAX(ID) FROM necsl.' || quote_ident(stmt.tnm) || ';';
        EXECUTE tmp into maxval;
        if not (maxval is NULL or maxval = 0) then
            EXECUTE 'SELECT SETVAL(' || quote_literal('necsl.' || stmt.sequence_name) || ',' || maxval || ');';
        end if;
    END LOOP;
    RAISE NOTICE 'finished .....';
END;
$BODY$
LANGUAGE plpgsql;
After the function is created successfully, you can reset all sequences with the following statement:
select pro_resetallseq();
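To spot-check one sequence afterwards (users_id_seq is a hypothetical name), you can read its current value directly:
-- selecting from a sequence relation shows its last_value
SELECT last_value FROM necsl.users_id_seq;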
Functions (stored procedures) in mysql can only be rewritten by hand in PG because the syntax differs substantially; I did not find a better way.
Since time was short I did not write a tool and the process is fairly rough, so take it as a reference only. If you have good suggestions please leave a message, and thank you for reading.
This article is from the "Mirror" blog; please keep this source: http://383133430.blog.51cto.com/454215/1686169