Summary of Datapump data migration practices


Although many data migration projects have been carried out in practice, small differences can make a big difference to the overall migration solution. Data is the lifeblood of a system, so ensuring data consistency and accuracy is the most basic requirement for a DBA. One of my basic principles is that, unless otherwise required, even a highly available system should take a maintenance window for the migration. This approach is more conservative, but safer.

Even so, we still encountered quite a few small problems during the Datapump migration. The following points are the resulting experiences and suggestions.

1) For cross-platform data migration, obtain a list of invalid objects before the migration and decide whether to recompile them. Otherwise, if objects show up as invalid afterwards, you will not be able to tell whether they were already invalid beforehand or were broken by the migration itself. A small query for taking that baseline is sketched below.
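
A minimal sketch for taking that baseline, assuming OS authentication as SYSDBA on the source host (the spool file name is just an example):

#!/bin/bash
# Record the invalid objects before migration so that post-migration
# differences can be attributed correctly.
sqlplus -s / as sysdba <<'EOF'
set pagesize 0 linesize 200 feedback off
spool invalid_objects_before.lst
select owner, object_type, object_name
  from dba_objects
 where status = 'INVALID'
 order by owner, object_type, object_name;
spool off
EOF

Run the same query again after the import and diff the two lists; anything new is a genuine migration issue.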

2) During the migration itself, we recommend stopping the listener outright to guarantee that there are no external connections. In a previous large-scale migration, although verbal restrictions had been placed on the system, full compliance could not be guaranteed. Sometimes colleagues on the application side want to check some data in advance and run a few queries, which is hard to control and can easily trigger minor problems, especially performance problems. A sketch of one way to enforce the lockout is shown below.
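
A minimal sketch, assuming the default listener name and SYSDBA access (restricted mode is an extra belt-and-braces step, not necessarily part of the original procedure):

#!/bin/bash
# Stop the listener so no new remote connections can arrive.
lsnrctl stop LISTENER

# Additionally put the instance in restricted mode, so that even local
# connections require the RESTRICTED SESSION privilege.
sqlplus -s / as sysdba <<'EOF'
alter system enable restricted session;
EOF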

3) If conditions allow, we recommend running the migration in noarchivelog mode. This mode has both advantages and disadvantages. The advantage is that the overall import speed roughly doubles; the disadvantage is that the primary/standby architecture has to be rebuilt afterwards. In addition, the post-migration phase, gathering statistics in particular, consumes a lot of time, and in noarchivelog mode all of it has to wait until the migration itself is complete. If the time window permits and the primary/standby architecture must be preserved, archivelog mode is the only option. Its advantage is that the standby is retained and nothing needs to be rebuilt; moreover, while stored procedures are compiled and statistics are gathered, basic verification and testing can already begin internally, since those internal steps take time anyway and the two periods can be combined. The disadvantages are just as obvious: lower efficiency, extra space for the archived logs, and high bandwidth demand for shipping the incremental redo to the standby. It is a double-edged sword. A sketch of the switch to noarchivelog follows.
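
A minimal sketch of switching to noarchivelog for the import window (this bounces the instance, so it only fits a migration that already has downtime):

#!/bin/bash
sqlplus -s / as sysdba <<'EOF'
-- The database must be mounted, not open, to change the log mode.
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;
-- Confirm the new mode.
archive log list
EOF

Switch back with alter database archivelog, again from the mount state, once the import and verification are done.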

4) When the dump files exported from the source database are transferred to the target, do not start too many transfer processes at once; doing so can seriously affect connections from other clients. There are some subtleties here worth pondering. For example, with 1,000 dump files we obviously cannot start 1,000 transfer processes simultaneously; we can only run a small portion at a time while keeping the transfer going continuously. There are two ways to sustain that continuous flow. One is to move a fixed batch at a time, say 30 dump files, and start the next 30 only after the batch completes. The other is to keep roughly 30 transfer processes running at all times, starting a new one whenever one finishes. The first method is easy to script but has slightly lower controllability and flexibility; the second is semi-automated and requires more manual intervention. A sketch of the rolling variant is given below.
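
A minimal sketch of the rolling variant using xargs, assuming the dump files sit in /u01/dumps and password-less scp to the target host (directory, host, and file pattern are all placeholders):

#!/bin/bash
DUMP_DIR=/u01/dumps
TARGET=oracle@target-host:/u01/dumps
POOL=30

# xargs -P keeps at most $POOL scp processes alive, launching a new
# transfer as soon as one finishes, so roughly 30 are always running.
ls "$DUMP_DIR"/*.dmp | xargs -P "$POOL" -I {} scp {} "$TARGET"

The fixed-batch variant is the same idea applied to successive 30-file slices of the list (for example, produced with split), run one slice at a time.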

5) For the dump transfer, use the server's fixed physical IP address directly rather than the bound floating (virtual) IP. The performance difference is far larger than I personally expected: in the rehearsal, on the same hardware, the same data volume took about 40 minutes over the fixed IP, while over the drifting virtual IP performance was much worse and the time doubled.
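
Concretely, this just means addressing scp at the physical interface; the addresses and paths below are placeholders:

#!/bin/bash
# Target the physical IP of the destination host, not the VIP that
# floats between cluster nodes; in our rehearsal the VIP path took
# roughly twice as long for the same data volume.
scp /u01/dumps/expdp_01.dmp oracle@192.168.1.10:/u01/dumps/    # physical IP
# scp /u01/dumps/expdp_01.dmp oracle@192.168.1.100:/u01/dumps/ # VIP: avoid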

6) If the target server's IP address needs to be changed to the source server's IP after the migration, database links become a real tangle, along with the stored procedures, packages, and views that reference them. We recommend changing the IP before the migration and making sure the firewall rules match those of the source database, so that the DB link pitfall is avoided entirely. Every minute of the migration and upgrade window has to be fought for, and the system-level network timeout is about one minute per unreachable link; if a large number of stored procedures depend on DB links, the compilation phase will suffer badly.

For example, here is a stored procedure compilation we encountered during the migration:

ALTER PROCEDURE "TEST"."P_TEST" COMPILE
  PLSQL_OPTIMIZE_LEVEL = 0
  PLSQL_CODE_TYPE = INTERPRETED
  PLSQL_DEBUG = TRUE
  PLSCOPE_SETTINGS = ''
  REUSE SETTINGS TIMESTAMP '2017-09-18 07:35:33';

In fact, almost all of the time spent in this compilation phase went to waiting out the network timeout while validating unreachable DB links.
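
A sketch of the advance check, assuming SYSDBA access; it lists the DB links and the code objects that reference them, so the exposure is known before the window:

#!/bin/bash
sqlplus -s / as sysdba <<'EOF'
set pagesize 100 linesize 200
-- All database links and the hosts they point at.
select owner, db_link, host from dba_db_links;

-- Stored procedures, packages, views, etc. that depend on a DB link.
select owner, name, type, referenced_link_name
  from dba_dependencies
 where referenced_link_name is not null
 order by owner, name;
EOF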

7) The migration rehearsal is very important. Try to simulate the entire migration process as fully as possible. If you skip some small steps because they are troublesome, or because you assume they will not affect the process, those problems are simply deferred to the actual migration window, where they will be very painful to solve.

8) The more thorough the preparation before the migration, the smoother the migration will be. Keep a checklist of the steps right through to the end of the migration; especially when you are not in the best state during the job, it serves as an outline and a set of guidelines.

9) Migration is a tough job. You need to pay constant attention to potential problems and stay calm in emergencies. Of course, saying this is easy; you only truly appreciate it in practice.
