Oracle 12c has been out for a long time. Although I have deployed several single-instance Oracle 12c systems recently, there has been no opportunity to implement 12c RAC in a production environment: the existing 11g RAC clusters obviously cannot just be ripped out and replaced with 12c RAC, and no new projects have come along. But rather than wait for work to supply the opportunity, it is better to study and test ahead of time. I happened to scrounge a well-configured Gigabyte mini-PC from Tsing Lung, so I put it to use for testing: I installed Proxmox, spun up a group of virtual machines, and with that had the conditions to test-deploy Oracle 12c RAC.
Oracle does its load balancing itself, completely independent of any third-party tools. Impressive! Deploying Oracle load balancing with high availability means, in effect, deploying Oracle RAC. Before you start deploying, you need to plan ahead. Planning mainly involves the following aspects:
1. Shared storage: this is the most critical facility in Oracle RAC. Data files, archive logs, voting (quorum) files and other important files all live here, so availability, capacity, performance and cost all need to be weighed. In past projects I mostly used external dual-controller arrays with 10,000 RPM 2.5" SAS or 15,000 RPM 3.5" SAS disks, with every slot filled so that no short-term expansion would be needed.
2. Servers: computing resources depend on the servers, where availability, performance and cost again need to be considered. In past projects I generally used 1U rack servers with 64 GB of memory, multi-core multi-threaded CPUs, dual SSD disks (in RAID for fault tolerance), and four network interface cards.
3. Network planning: at least two network segments with independent switching (at least two switches), all at full gigabit, and factory-made Cat 6 cables throughout. Speaking of cables, I once stepped into a pit here that I remember vividly. In the XXX project, the servers filled two cabinets, and all the equipment was considered fairly high-end at the time. We made a special point during procurement of insisting on factory-made network cables. After a crowd of people finished the build and debugging, the system went online and ran normally. Before long, though, the Oracle RAC cluster started flapping: fine for a while, then broken. Checking the logs and the application turned up nothing, so in the end I had to go to the machine room in person. Pacing back and forth and watching the indicator lights, I really did find the problem: one port on the heartbeat switch alternated between a green light and a yellow one, which had to be a rate mismatch. And indeed, that cable looked different from the other factory-made ones! It turned out that during procurement the supplier had delivered a few fewer cables than planned, so the machine-room staff hand-crimped one to make up the number. Replacing it with a new factory-made Cat 6 cable solved the problem.
I once wrote an article, "Oracle 11g RAC Production Environment Deployment Details", published on my 51cto blog at http://blog.51cto.com/sery/1546346; you are welcome to refer to it. For this article no real environment was available (I can't experiment on the existing production systems, for fear the boss would have my head), so everything is done in a virtual environment. This does not affect its value for learning and reference: the basic ideas and methods are the same, and a virtual environment is also convenient for experiments and testing.
Prepare the base environment
Here in Beijing I scrounged a mini-host configured with an 8-thread CPU, a 1 TB hard disk and 12 GB of memory. It is very well suited to virtualization: power-saving and quiet. Look, isn't it small?
On this virtualized mini-host, create two virtual machines to install Oracle on, and create one more virtual machine to install Openfiler as shared storage for Oracle.
Host Virtualization Processing
I highly recommend Proxmox; naturally, it is what I use myself. The current version is Proxmox 5.2, which supports Ceph hyper-convergence, works great, and installs with one click from ISO. Download the image from the official website (www.proxmox.com) and write it to a USB disk with UltraISO so it can boot. If the USB disk fails to boot, do the UltraISO write once more, selecting "Raw" as the write format, as shown:
The Proxmox installation process is simple and easy to complete, so there is nothing more to say about it. Proxmox is based on Debian underneath, and during operation the system executes apt-get update to refresh packages. To avoid the error "TASK ERROR: command 'apt-get update' failed: exit code 100", SSH into the system (Debian), edit the file /etc/apt/sources.list.d/pve-enterprise.list, and comment out the only line inside. Of course, you can also simply ignore the error.
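A minimal sketch of that fix, demonstrated here on a scratch copy of the file so it can be run anywhere (on the real host you would edit /etc/apt/sources.list.d/pve-enterprise.list directly and then re-run apt-get update):

```shell
# Work on a scratch copy; the file normally contains a single "deb ..." line
# pointing at the subscription-only Proxmox enterprise repository.
list=$(mktemp)
printf 'deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise\n' > "$list"
# Comment out the repository line so "apt-get update" stops failing on it.
sed -i 's/^deb/#deb/' "$list"
cat "$list"   # the line now starts with "#deb", so apt skips it
```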
Multi-NIC handling
Perhaps your lab environment is like mine: only one physical network card, while Oracle RAC requires at least two. What to do? Add a virtual one. The method is as follows:
1. In the Proxmox management interface select "Create", then "Linux Bridge", and fill in the IP address and netmask (leave the gateway and other fields blank).
2. Make the network settings take effect. SSH into Debian and restart the system, then log in again and run the command "ip addr"; you will see the newly created virtual network interface, as shown below:
The same effect can also be seen in the Proxmox web management interface.
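Behind the scenes, the bridge created in the web interface ends up as a stanza in Debian's /etc/network/interfaces. A sketch of what it looks like (the bridge name vmbr1 and the address are assumptions, not values from this article):

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none    # no physical port: an internal-only bridge
        bridge_stp off
        bridge_fd 0
```

Because bridge_ports is none, this bridge carries only traffic between virtual machines on the host, which is exactly what a RAC heartbeat network in a lab needs.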
Preparing the operating system image file
As far as I know, there are two ways to upload an operating-system ISO image: through the Proxmox web management interface, or by logging in to the Debian system, changing to the directory where image files are kept, and fetching the image directly with a tool such as wget.
1. Upload ISO files through the web interface (the files must first be downloaded to the local computer):
After several tries I always find this method troublesome and slow, so I generally no longer use it.
2. Download directly from within the system. You only download once, and for a server in a machine room this saves a lot of time compared to downloading locally and then uploading.
# cd /var/lib/vz/template/iso
# wget http://mirrors.163.com/centos/7.5.1804/isos/x86_64/CentOS-7-x86_64-DVD-1804.iso
# wget http://mirrors.cn99.com/centos/6.10/isos/x86_64/CentOS-6.10-x86_64-bin-DVD1.iso
After the downloads finish, check in the web management interface whether the images appear in the list.
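It is also worth verifying a downloaded image before use. A sketch of the checksum pattern, demonstrated on a scratch file so it runs anywhere (CentOS mirrors publish an official sha256sum.txt to compare real images against):

```shell
# Demonstrated in a scratch directory; on the Proxmox host you would run
# this inside /var/lib/vz/template/iso against the mirror's checksum list.
cd "$(mktemp -d)"
echo demo-image > CentOS-demo.iso            # stand-in for a real ISO
sha256sum CentOS-demo.iso > sha256sum.txt    # the published checksum list
sha256sum -c sha256sum.txt                   # prints "CentOS-demo.iso: OK"
```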
Create a virtual machine
Because the host resource configuration required by the Oracle RAC nodes is identical, you can create one virtual machine and install the operating system on it (do not install Oracle yet!), then clone it to get the second virtual machine and simply change the clone's network settings before use.
Creating the first virtual machine
In the Proxmox web management interface, click "Create VM", give the virtual machine an easy-to-recognize name such as db107, and proceed to the next step.
In the "Operating System" section, select "Use CD/DVD disc image (ISO)" and pick the previously uploaded operating-system ISO from the drop-down list, as shown below:
The following steps allocate the disk (32 GB), CPU (4 cores) and memory (8 GB). After creation, this still does not meet the requirements: an additional hard disk is needed for the Oracle installation directory and the swap partition, and an additional network interface is needed for heartbeat detection between the Oracle nodes.
1. Add a hard disk to the virtual machine:
In the management interface select the virtual machine just created, choose "Hardware" from the menu, and click the "Add" button;
Set the size to 50 GB: 16 GB is planned for swap, and the rest goes to the software installation directory.
2. Add the network interface:
The steps are basically the same as for adding a hard disk; just select "Network Device" in the "Add" drop-down list here. The specific choices are as follows:
Installing the virtual machine operating system
After creating the virtual machine, start it from the web management interface, then click the ">_ Console" button on the page to enter the operating-system installation interface:
The remaining steps are no different from a conventional system installation and are not covered here.
Installing shared storage Openfiler
Like Proxmox, Openfiler is provided as an ISO. Openfiler also requires at least two disks: one for the system installation and one for data sharing. After planning the capacity allocation you can start the installation; the process is simple, so I won't belabor it.
My Openfiler disk usage after installation is shown below; the large-capacity disk is used for iSCSI sharing.
Next, start configuring the storage. Click the "Services" item and enable the iSCSI target service.
On the free large disk, create a partition (Linux physical volume), then create the volume group vg-data (name it as you like) and the logical volumes. When creating a logical volume, select "block (iSCSI, FS, etc.)" from the "Filesystem / Volume type" drop-down list. Once this is done, click the "iSCSI Targets" menu on the right and add a new iSCSI target ("Add new iSCSI target"). Note that if the iSCSI service has not been started, the "Add" button is grayed out and you cannot proceed to the next step.
Complete the logical unit (LUN) mapping, as follows:
Because this is an internal network, access does not need to be restricted. At this point the storage-side configuration is complete.
Mounting the iSCSI disks on the servers (must be done on both hosts)
With just a few simple steps you can attach the iSCSI shared disks on the hosts and have them load at boot.
1. Start the iSCSI service. CentOS may not have the familiar and handy ntsysv installed by default; install it with yum. Run ntsysv and select the iscsi item, and the iSCSI service will come up automatically at the next boot.
2. Scan the iSCSI target and record the output, as follows:
# iscsiadm -m discovery -t sendtargets -p 172.16.35.107
172.16.35.107:3260,1 iqn.2006-01.com.openfiler:tsn.3ceca0a95110
What we need is the bold string after the number "1", i.e. the target IQN.
3. Mount the target disk with the following command:
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.3ceca0a95110 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.3ceca0a95110, portal: 172.16.35.107,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.3ceca0a95110, portal: 172.16.35.107,3260] successful.
4. Verify the attached disks; execute the following on both hosts:
# fdisk -l
.................................... omitted ....................................
Disk /dev/sdc: 51.2 GB, 51170508800 bytes, 99942400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 122.9 GB, 122876329984 bytes, 239992832 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 10.2 GB, 10234101760 bytes, 19988480 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
In this way, the three shared volumes are attached on each node.
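Since the three volumes differ mainly in size, a quick way to tell them apart on each node is to list the block devices by size (the device names sdc/sdd/sde above are from this lab and may differ on your system):

```shell
# List every disk-type block device with its size; the iSCSI LUNs show up
# alongside the local disks and can be matched to the capacities planned above.
lsblk -d -o NAME,SIZE,TYPE
```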
Deploy Oracle 12c RAC
There are three phases: pre-installation preparation, software installation, and database creation.
Pre-Installation Preparation
The main steps are: preparing the swap and data partitions, setting host names and IP address mappings, modifying system configuration and installing dependency packages, and preparing the desktop environment.
Prepare the swap partition; this needs to be done on each node.
# fdisk /dev/sdb
# mkswap /dev/sdb1
# swapon /dev/sdb1
During the fdisk operation, select partition type code "82" (Linux swap) with a size of 18 GB. When done, use the command free -m to check whether the swap has taken effect. To have the swap partition load at system boot, you need to modify the file /etc/fstab; the lines to add will be posted later together with the data-partition entries.
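As a sketch, the swap line added to /etc/fstab looks like the following (the device name /dev/sdb1 is from this lab and may differ on your system):

```
/dev/sdb1    swap    swap    defaults    0 0
```

The two trailing zeros tell dump and fsck to skip the entry, which is the normal setting for swap.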
For more information, see my column "Load Balancer Master Practice".