Preparations before Datapump data migration (2)



I previously wrote an article analyzing the preparations for a Datapump data migration, and the response was good. In a recent scenario, Datapump turned out to be the better choice after evaluation, mainly for the following reasons:

1. The original environment runs on Solaris and the hardware is aging, so it needs to be migrated to Linux; for a cross-platform migration like this, a logical migration is the preferred approach.

2. The original environment is 10gR2 and needs to be migrated to 11gR2, which also gives us the chance to solve, once and for all, the problem that the standby database is not in use.

3. The volume of data to migrate is not large, within a few hundred GB, so we can make full use of the available bandwidth and I/O throughput to finish within the expected time window.

In addition to this plan, we adopted PCIe-SSD storage on the target side to accelerate I/O and improve performance, which of course means the target uses different partitions from the source database.

To minimize the impact on the application, we decided to switch the IP address after the migration so that the new database environment takes over the original IP. The application side then does not need to change any connection information, and the DB Link issue is solved at the same time, without having to chase down further details.

If the application has a reconnection mechanism, this approach is completely transparent to it, no different from an ordinary stop and start of the application.

Before the actual Datapump migration, the plan looks all set on paper, but there are still some hidden risks and issues that need to be resolved in advance. I wonder whether, given the background above, any of them occur to you.

1. To reduce the complexity and potential risk introduced by the IP switch, all host information in listener.ora and tnsnames.ora uses host names rather than IP addresses, with the name-to-IP mapping kept in /etc/hosts. After the IP address is switched, only this one file needs to be modified, as in the sketch below.
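For illustration only, a minimal sketch of this layout; the host name dbsrv, the service name testdb, and the 10.x.x.10 address are hypothetical placeholders, not the actual values:

In /etc/hosts:

10.x.x.10    dbsrv

In tnsnames.ora, reference only the host name:

TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbsrv)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = testdb))
  )

When the IP switch happens, the /etc/hosts entry is the only thing that has to be updated; listener.ora and tnsnames.ora stay untouched.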

2. The firewall setup on Solaris is quite different from that on Linux, and there is a lot of information to confirm.

Enabling a firewall rule in the Solaris environment looks like the following:

For example, to allow the 10.xxxx addresses to access port 1522, configure the rule both in memory and in the configuration file, as follows.

The in-memory setting takes effect immediately online; e1000g0 is the name of the NIC, the counterpart of eth0 or eth1 on Linux.

echo 'pass in quick on e1000g0 proto tcp from 10.xxxxx to any port = 1522' | ipf -f -

Add the rule to the configuration file /etc/ipf.conf:

pass in quick on e1000g0 proto tcp from 10.xxxxx to any port = 1522

On Linux this is much simpler, in a format like the following.

iptables -I INPUT -s 10.xxxx -p tcp -m multiport --dports 1522 -i eth0 -j ACCEPT

To persist the rules to the configuration file, simply run service iptables save.
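After service iptables save, the rule is written to /etc/sysconfig/iptables in iptables-save syntax, roughly in this form (the 10.xxxx source is the same placeholder as above):

-A INPUT -s 10.xxxx -i eth0 -p tcp -m multiport --dports 1522 -j ACCEPT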

Converting the rules between the two formats took me some time, with whitespace quirks and syntax differences to work around; in the end I simply adjusted them by hand.

3. There is a major hidden pitfall in the target database setup: the file paths on the source and target are different. As mentioned above, the PCIe-SSD on the target uses different partitions. A straightforward full-database import therefore carries a hidden cost, not an error, but a waste of resources. For example, if the data file path in the source database is /U01/xxxx and the path on the target is /U02/xxx, a full import will create the tablespaces with all of their data files under /U01. If this is only noticed after the migration finishes, it is already too late: the data has to be moved again, either by re-creating the control file or by renaming the data files. Within a tight upgrade window, an incident like this cannot be sorted out in a minute or two; add the panic and confusion and it costs at least ten minutes. One way to head it off is sketched below.
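Offered only as a sketch rather than the exact commands used in this migration: either pre-create the tablespaces on the target with the correct /U02 paths before the import, or remap the data file paths during the full import. The directory object DUMP_DIR, the dump file names, and the data file name users01.dbf below are assumptions for illustration.

Contents of a parameter file, say full_imp.par:

directory=DUMP_DIR
dumpfile=full_%U.dmp
logfile=full_imp.log
full=y
remap_datafile="'/U01/xxxx/users01.dbf':'/U02/xxx/users01.dbf'"

Then run:

impdp system parfile=full_imp.par

Each data file that needs to land on the new partition gets its own remap_datafile entry.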

4. For unforeseen problems, I also have a fallback plan for the export on the source database (see the sketch below). Exporting with a high degree of parallelism has a hidden risk: the old server itself is fragile, and if it crashed mid-export everyone would be at a loss. The emergency plan is to perform a failover and continue the export on the standby database. If that also runs into trouble, there is a second, remote standby; we would fail over to it and reschedule the migration. These are admittedly very low-probability events, but if they are not thought through in advance they are hard to deal with when they happen.
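For reference, a parallel full export would look roughly like this; the directory object DUMP_DIR, the parallel degree, and the file names are assumptions, not the actual values used:

expdp system full=y directory=DUMP_DIR dumpfile=full_%U.dmp logfile=full_exp.log parallel=8

The %U wildcard lets each parallel worker write its own dump file. The higher the parallelism, the heavier the load on the already fragile source server, which is exactly why the failover fallback above matters.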

5. Monitoring is another point. One advantage is that full monitoring can be set up on the Linux side; on Solaris there are still some concerns, so only Orabbix monitoring is enabled there.

The last step is to think through every problem that could come up, plan for it ahead of time, and keep everything under control.
