The product was about to go live when the project manager asked us to switch databases: move everything from MySQL to PostgreSQL.
There are Python tools that can do the migration, but stored procedures cannot be migrated, the data types do not map flexibly, and the application would still need changes, so to save time we migrated by hand. Without further ado, here is the migration process. No diagrams, sorry.
- Use Navicat for MySQL (other tools such as mysqldump also work) to export the database schema (table structure only, no data); the method is omitted here. The resulting file is Mysql.sql.
- Use the Linux sed tool to strip all comments and MySQL-specific syntax from the exported script and write a new file, Table_new.sql:

cat Mysql.sql | sed 's/^.*engine=.*$/);/g' | sed 's/comment.*$/,/g' | sed 's/`//g' > Table_new.sql
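As a quick sanity check (the sample lines here are invented), the substitutions collapse a table's ENGINE footer to a bare ");" and strip the comment and backticks from a column line:

```shell
# Invented sample lines, only to illustrate the substitutions.
# The 'I' flag makes GNU sed match case-insensitively, since mysqldump
# emits upper-case ENGINE/COMMENT:
printf '%s\n' ') ENGINE=InnoDB DEFAULT CHARSET=utf8;' \
  | sed 's/^.*engine=.*$/);/gI'
# prints: );
printf '%s\n' '`id` int(11) NOT NULL COMMENT '\''primary key'\'',' \
  | sed 's/comment.*$/,/gI' | sed 's/`//g'
# prints: id int(11) NOT NULL ,
```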
- Extract all index statements into a separate file (this step can be skipped; there are other ways to handle it later):

cat Table_new.sql | egrep '\-\-|^\s*key|^\s*unique' > Key.sql
- Extract all foreign key statements into a separate file (this step can also be skipped):

cat Table_new.sql | egrep '\-\-|^\s*constraint' > Constraint.sql
- Remove the key and constraint statements and write a new file, Table_last.sql:

cat Table_new.sql | sed 's/^\s*key.*$//' | sed 's/^\s*constraint.*$//' | sed 's/^\s*unique.*$//' | sed '/^$/d' > Table_last.sql
- Convert the line endings:

unix2dos Table_last.sql
- Copy the file to Windows and use Notepad (or any other editor) to find and replace certain values; the point of this step is to map each MySQL type to its PostgreSQL counterpart. For example:

Find id int(11) NOT NULL auto_increment, replace with id bigserial NOT NULL (the auto-increment/serial type)
Find datetime, replace with timestamp
...
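The Notepad find-and-replace can also be scripted, which is easier to repeat if the schema changes (a sketch, not from the original post; extend the -e rules with whatever other type mappings your schema needs, and note the int width may differ in your dump):

```shell
# Map common MySQL types to PostgreSQL equivalents in place
# ('I' = case-insensitive matching, a GNU sed extension):
sed -i \
  -e 's/int([0-9]*) NOT NULL auto_increment/bigserial NOT NULL/I' \
  -e 's/datetime/timestamp/gI' \
  Table_last.sql
```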
- Rework some special tables by hand: because step 5 removed every line beginning with key, constraint, or unique, any table with a column whose name starts with one of those words (e.g. fields such as keycd, keyttl) will have been mangled. Rewrite the statements for those tables manually and replace the originals in Table_last.sql.
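Much of that manual rework can be avoided by tightening the step-5 patterns to require whitespace after the keyword, so that a column whose name merely starts with key, such as keycd, is left alone (an untested variation, not from the original post):

```shell
# 'key idx_a (a),' is a real index line and is dropped;
# 'keycd varchar(10),' is a column definition and survives:
printf '%s\n' '  keycd varchar(10),' '  key idx_a (a),' \
  | sed 's/^\s*key\s.*$//' | sed '/^$/d'
# prints:   keycd varchar(10),
```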
- Execute Table_last.sql in PostgreSQL (method omitted); if there are errors, fine-tune until all tables are created successfully.
- Import the data: export it from MySQL first, replace the newline characters in the exported files (or the import will fail), and then import into PostgreSQL.
- Export statement (run the following in MySQL, then copy the results and run those; a data file per table is generated in the server directory. I used Navicat for MySQL. Remember to replace table_schema with your own database name; here the data files go to /usr/local/mysql/outfile/, which you can change to another directory):
SELECT CONCAT('select * from ', table_name,
       ' into outfile ''/usr/local/mysql/outfile/', table_name, '.dat''',
       ' fields terminated by ''|'';')
FROM information_schema.tables
WHERE table_schema='DEVELOP';
- Replace the line breaks in the generated files, then copy them to the PostgreSQL server:

sed -i 's/\r//g' outfile/*
- Import statement (run the following in PostgreSQL, copy the query results, and run those; it must be executed with superuser permission. I used the pgAdmin tool and the directory /tmp/data. Replace the table_catalog value with your own; my database is named DEVELOP):

select 'copy necsl.'||table_name||' from '||chr(39)||'/tmp/data/'
       ||upper(table_name)||'.dat'||chr(39)
       ||' with DELIMITER '||chr(39)||'|'||chr(39)||';'
from information_schema.tables
where table_schema='necsl'
  and table_catalog='DEVELOP';
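Alternatively, the COPY statements can be generated from the data files themselves with a small shell loop and piped straight into psql (a sketch, not from the original post; it assumes the .dat files carry upper-case table names as produced above, and that psql connects as a superuser, e.g. sh gen_copy.sh | psql -U postgres -d DEVELOP):

```shell
# Emit one COPY per data file; table names are the lower-cased file names.
for f in /tmp/data/*.dat; do
  tbl=$(basename "$f" .dat | tr 'A-Z' 'a-z')
  printf "copy necsl.%s from '%s' with delimiter '|';\n" "$tbl" "$f"
done
```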
- After the data import, you can verify that it is correct as follows:
- Create the following table in both MySQL and PG:

create table table_count (tblname varchar(100) primary key not null, tblrecorder integer);

/* Run the following in MySQL, then copy the results and run those; this fills table_count with each table's row count */

SELECT CONCAT('insert into table_count select ', '''', table_name, '''', ', count(*) from ', table_name, ';')
FROM information_schema.tables
WHERE table_schema='DEVELOP';
- Run the following in PostgreSQL, then copy the results and run those; this fills table_count with each table's row count, which you can then compare with the table_count table in MySQL:

select 'insert into necsl.table_count select '||quote_literal(table_name)||', count(*) from necsl.'||table_name||';'
from information_schema.tables
where table_schema='necsl'
  and table_catalog='DEVELOP';
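A cheap cross-check on the MySQL side is to count lines in the exported files, since each record is one line; the results should agree with table_count as long as no column values contain embedded newlines (a sketch; the path is the outfile directory used above):

```shell
# One record per line in each exported file, so line count == row count.
wc -l /usr/local/mysql/outfile/*.dat
```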
- If something went wrong with the import, run the following in PG to generate statements that empty every table; copy the results and run them to truncate all tables:

select 'truncate table necsl.'||table_name||';'
from information_schema.tables
where table_schema='necsl'
  and table_catalog='DEVELOP';
/* Generate the index information and foreign keys: run the following statements in MySQL and copy the results into PostgreSQL to execute */
/* Generate unique indexes */

select concat('CREATE UNIQUE INDEX ',
       table_name, '_IDX_', index_name,
       ' ON ', table_name, ' (',
       group_concat(column_name order by seq_in_index),
       ');')
from information_schema.statistics
where table_schema='DEVELOP'
  and index_name <> 'PRIMARY'
  and index_schema='DEVELOP'
  and non_unique=0
group by index_name, table_name
order by table_name, index_name;
/* Generate btree indexes */

select concat('CREATE INDEX ',
       table_name, '_IDX_', index_name,
       ' ON ', table_name, ' (',
       group_concat(column_name order by seq_in_index),
       ');')
from information_schema.statistics
where table_schema='DEVELOP'
  and index_name <> 'PRIMARY'
  and index_schema='DEVELOP'
  and non_unique=1
group by index_name, table_name
order by table_name, index_name;
/* Generate foreign keys */

select concat('alter table ', c.table_name,
       ' add constraint ', c.constraint_name,
       ' foreign key (', c.column_name,
       ') references ', c.referenced_table_name,
       ' (', c.referenced_column_name, ');')
from information_schema.table_constraints t, information_schema.key_column_usage c
where t.constraint_schema='DEVELOP'
  and t.constraint_type='FOREIGN KEY'
  and t.table_schema='DEVELOP'
  and c.referenced_table_schema='DEVELOP'
  and t.constraint_name=c.constraint_name
  and t.table_name=c.table_name;
- Once the statements above have been generated, create the following function in PG to reset all sequences (skip this step if there are no auto-increment columns); it is created in the public schema:
CREATE OR REPLACE FUNCTION pro_resetallseq()
RETURNS void AS
$BODY$
DECLARE
  tmp VARCHAR(512);
  maxval bigint;
  stmt record;
BEGIN
  FOR stmt IN select sequence_name, replace(sequence_name, '_id_seq', '') as tnm
              from information_schema.sequences
              where sequence_catalog = 'DEVELOP' and sequence_schema = 'necsl' LOOP
    tmp := 'SELECT MAX(id) FROM ' || quote_ident(stmt.tnm) || ';';
    EXECUTE tmp INTO maxval;
    IF NOT (maxval IS NULL OR maxval = 0) THEN
      EXECUTE 'SELECT SETVAL(' || quote_literal(stmt.sequence_name) || ',' || maxval || ');';
    END IF;
  END LOOP;
  RAISE NOTICE 'finished .....';
END;
$BODY$
LANGUAGE plpgsql;
- After the function is created successfully, you can reset all sequences with:

select pro_resetallseq();
- As for functions (stored procedures) in MySQL: the syntax differs too much, so they can only be rewritten by hand in PG. I haven't found a better way.
Time was tight, so I didn't write any tooling and the process is fairly rough. Take it as a reference only; if you have better suggestions, please leave a comment. Thanks for reading.
This article is from the "Mirror" blog; please be sure to keep this source: http://383133430.blog.51cto.com/454215/1686168