OEMCC 13.2 Cluster Version installation deployment


A stand-alone deployment of OEMCC 13.2 was tested earlier; for reference, see the previous article:

    • OEMCC 13.2 Installation Deployment

Environment: two hosts running RHEL 6.5, one hosting the OMS and one hosting the OMR:
OMS (the OEMCC server): ip 192.168.1.88, memory 12 GB+, disk 100 GB+
OMR (the OEM back-end repository): ip 192.168.1.89, memory 8 GB+, disk 100 GB+

That setup is the equivalent of stand-alone versions of OMS and OMR. Some customers have high requirements for their monitoring system, which calls for clustering to improve availability.
For the OMR, a single point of failure can be eliminated by building a RAC of the corresponding version; but how do you build a highly available cluster for the OMS?
I recently encountered a customer with exactly this high-availability requirement, so this article summarizes the complete installation process for a clustered OEMCC.

    • 1. Requirements Description
    • 2. Environmental planning
    • 3. OMR cluster installation
    • 3.1 Environment Preparation
    • 3.2 GI Installation
    • 3.3 Creating ASM Disk Groups, ACFS cluster file systems
    • 3.4 DB Software Installation
    • 3.5 Creating the database with DBCA templates
    • 4. OMS cluster installation
    • 4.1 Environment Preparation
    • 4.2 Installing the Master node
    • 4.3 Adding the OMS node
    • 4.4 Testing OMS High availability
    • 5. SLB Configuration
1. Requirements Description

The customer requires a clustered deployment of OEMCC 13.2, covering both the OMR and the OMS. The OMR cluster is an Oracle 12.1.0.2 RAC; the OMS cluster must run in active-active mode with SLB load balancing.

2. Environmental planning

Use two virtual machines for deployment. The configuration information is as follows:

You need to download the following installation media in advance:

-- OEMCC 13.2 installation media:
em13200p1_linux64.bin
em13200p1_linux64-2.zip
em13200p1_linux64-3.zip
em13200p1_linux64-4.zip
em13200p1_linux64-5.zip
em13200p1_linux64-6.zip
em13200p1_linux64-7.zip
-- Oracle 12.1.0.2 RAC installation media:
p21419221_121020_Linux-x86-64_1of10.zip
p21419221_121020_Linux-x86-64_2of10.zip
p21419221_121020_Linux-x86-64_5of10.zip
p21419221_121020_Linux-x86-64_6of10.zip
-- DBCA template for building the OEMCC 13.2 repository:
12.1.0.2.0_Database_Template_for_EM13_2_0_0_0_Linux_x64.zip
3. OMR cluster installation

The OMR cluster is implemented with Oracle RAC; the template provided by OEMCC 13.2 requires database (OMR) version 12.1.0.2.

3.1 Environment Preparation
1) Configure a yum repository and install the dependent RPM packages:

yum install binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst make net-tools nfs-utils smartmontools sysstat

2) Disable the firewall and SELinux on each node:

-- Disable the firewall on each node:
service iptables stop
chkconfig iptables off
-- Disable SELinux on each node:
getenforce
# edit /etc/selinux/config and set SELINUX=disabled
-- Disable SELinux temporarily:
setenforce 0

3) Configure the /etc/hosts file:

#public ip
10.1.43.211 oemapp1
10.1.43.212 oemapp2
#virtual ip
10.1.43.208 oemapp1-vip
10.1.43.209 oemapp2-vip
#scan ip
10.1.43.210 oemapp-scan
#private ip
172.16.43.211 oemapp1-priv
172.16.43.212 oemapp2-priv

4) Create users and groups:

-- Create groups and users:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
-- Then set passwords for oracle and grid:
passwd oracle
passwd grid

5) Create the installation directory for each node (root user):

mkdir -p /app/12.1.0.2/grid
mkdir -p /app/grid
mkdir -p /app/oracle
chown -R grid:oinstall /app
chown oracle:oinstall /app/oracle
chmod -R 775 /app

6) Configure udev rules for the shared LUNs:

vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29ad39372db383c7903d31788d0", NAME="asm-data1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c298c085f4e57c1f9fcd7b3d1dbf", NAME="asm-data2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c290b495ab0b6c1b57536f4b3cf8", NAME="asm-ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29e7743dca47419aca041b88221", NAME="asm-ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29608a9ddb8b3168936d01a4f7b", NAME="asm-ocr3", OWNER="grid", GROUP="asmadmin", MODE="0660"

After reloading the rules, confirm the shared LUN names and their owner and group:

[root@oemapp1 media]# udevadm control --reload-rules
[root@oemapp1 media]# udevadm trigger
[root@oemapp1 media]# ls -l /dev/asm*
brw-rw----. 1 grid asmadmin 8, 16 Oct  9 12:27 /dev/asm-data1
brw-rw----. 1 grid asmadmin 8, 32 Oct  9 12:27 /dev/asm-data2
brw-rw----. 1 grid asmadmin 8, 48 Oct  9 12:27 /dev/asm-ocr1
brw-rw----. 1 grid asmadmin 8, 64 Oct  9 12:27 /dev/asm-ocr2
brw-rw----. 1 grid asmadmin 8, 80 Oct  9 12:27 /dev/asm-ocr3

7) Kernel parameter modification:

vi /etc/sysctl.conf
# add the following to /etc/sysctl.conf:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 6597069766656
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.eth1.rp_filter = 2
net.ipv4.conf.eth0.rp_filter = 1
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Apply the changes:

# /sbin/sysctl -p
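As a sanity check on the values above: kernel.shmmax is expressed in bytes and kernel.shmall in pages. A minimal sketch of how these could be sized from a node's actual RAM (the 8 GB figure and the half-of-RAM rule of thumb are illustrative assumptions; the sysctl.conf above keeps the values actually used in this install):

```shell
# Illustrative sizing only -- the 8 GB RAM figure is an assumption.
mem_bytes=$((8 * 1024 * 1024 * 1024))   # physical memory of the node
page_size=4096                          # getconf PAGE_SIZE on most x86_64
shmmax=$((mem_bytes / 2))               # common rule of thumb: at least half of RAM
shmall=$((mem_bytes / page_size))       # shmall is counted in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```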

8) User shell limits:

vi /etc/security/limits.conf
# add the following to /etc/security/limits.conf:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240

9) Pluggable authentication module (PAM) configuration:

vi /etc/pam.d/login
# load the pam_limits.so module: as root, add the following to /etc/pam.d/login:
session required pam_limits.so

Note: limits.conf is the configuration file for pam_limits.so in Linux PAM (Pluggable Authentication Modules); its limits apply per session only.

10) Set the user environment variables on each node:

-- Node 1, grid user:
export ORACLE_SID=+ASM1
export ORACLE_BASE=/app/grid
export ORACLE_HOME=/app/12.1.0.2/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
-- Node 2, grid user:
export ORACLE_SID=+ASM2
export ORACLE_BASE=/app/grid
export ORACLE_HOME=/app/12.1.0.2/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
-- Node 1, oracle user:
export ORACLE_SID=OMR1
export ORACLE_BASE=/app/oracle
export ORACLE_HOME=/app/oracle/product/12.1.0.2/db_1
export ORACLE_HOSTNAME=
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
-- Node 2, oracle user:
export ORACLE_SID=OMR2
export ORACLE_BASE=/app/oracle
export ORACLE_HOME=/app/oracle/product/12.1.0.2/db_1
export ORACLE_HOSTNAME=
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
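The only values that differ between the nodes are the SID suffixes. A hedged sketch of deriving them from a node number (the hard-coded node_number and the oemapp1/oemapp2 mapping are assumptions for illustration; on a live cluster the number would come from something like `olsnodes -n`):

```shell
# Hypothetical helper: derive the node-specific SIDs from a base name.
node_number=1                       # 1 on oemapp1, 2 on oemapp2 (assumed mapping)
export ORACLE_SID=OMR${node_number}
asm_sid=+ASM${node_number}          # illustrative variable, not read by Oracle tools
echo "$ORACLE_SID $asm_sid"
```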
3.2 GI Installation

Unzip the installation media:

unzip p21419221_121020_Linux-x86-64_5of10.zip
unzip p21419221_121020_Linux-x86-64_6of10.zip

Set the DISPLAY variable and launch the graphical installer for GI:

[grid@oemapp1 grid]$ export DISPLAY=10.1.52.76:0.0
[grid@oemapp1 grid]$ ./runInstaller
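Before launching the installer, it is worth running the cluster verification utility shipped with the GI media. A sketch that only assembles and prints the command (the node names come from the /etc/hosts plan above; in practice it would be run from the unzipped grid directory):

```shell
# Build the runcluvfy pre-install check command; printed here rather than
# executed, since it must run from the unzipped GI media directory.
NODES="oemapp1,oemapp2"
CLUVFY_CMD="./runcluvfy.sh stage -pre crsinst -n ${NODES} -verbose"
echo "$CLUVFY_CMD"
```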


3.3 Creating ASM Disk Groups, ACFS cluster file systems

Set the DISPLAY variable and launch asmca to create the ASM disk groups and the ACFS cluster file system:

[grid@oemapp1 grid]$ export DISPLAY=10.1.52.76:0.0
[grid@oemapp1 grid]$ asmca

To create an ASM disk group:



To create an ACFS cluster file system:





3.4 DB Software Installation

Unzip the installation media:

unzip p21419221_121020_Linux-x86-64_1of10.zip
unzip p21419221_121020_Linux-x86-64_2of10.zip

Configure the display variable to invoke the graphical interface to install the DB software:

[oracle@oemapp1 database]$ export DISPLAY=10.1.52.76:0.0
[oracle@oemapp1 database]$ ./runInstaller

Install the DB software:


3.5 Creating the database with DBCA templates

Unzip the template file into the DBCA templates directory; DBCA can then select it from its template list:

[oracle@oemapp1 media]$ unzip 12.1.0.2.0_Database_Template_for_EM13_2_0_0_0_Linux_x64.zip -d /app/oracle/product/12.1.0.2/db_1/assistants/dbca/templates

DBCA database creation steps:
Note: AL32UTF8 is strongly recommended as the database character set; this choice matters later when configuring OMS.
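For reference, the same template can also be used non-interactively. This is a sketch only: the command is assembled and printed rather than run, and the .dbc template file name is an assumption; confirm the actual name in the templates directory after unzipping.

```shell
# Assemble a dbca silent-mode command using the EM repository template.
# The .dbc file name below is assumed; check the templates directory first.
TEMPLATE="12.1.0.2.0_Database_Template_for_EM13_2_0_0_0_Linux_x64.dbc"
DBCA_CMD="dbca -silent -createDatabase -templateName ${TEMPLATE} \
 -gdbName OMR -sid OMR -characterSet AL32UTF8"
echo "$DBCA_CMD"
```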


4. OMS cluster installation

The OMS cluster requires active-active mode and load balancing with SLB.

4.1 Environment Preparation
The following environment preparation is performed on both OMS nodes.
Add the following to the oracle user's environment variables:

#OMS
export OMS_HOME=$ORACLE_BASE/oms_local/middleware
export AGENT_HOME=$ORACLE_BASE/oms_local/agent/agent_13.2.0.0.0

Create the directories:

su - oracle
mkdir -p /app/oracle/oms_local/agent
mkdir -p /app/oracle/oms_local/middleware

Revise /etc/hosts so the host names meet OEMCC requirements (optional):

#public ip
10.1.43.211 oemapp1 oemapp1.oracle.com
10.1.43.212 oemapp2 oemapp2.oracle.com
4.2 Installing the Master node

To start the installation:

su - oracle
export DISPLAY=10.1.52.76:0.0
./em13200p1_linux64.bin

Installation steps:


4.3 Adding the OMS node

This section adds the second OMS node through OEMCC: add the agent first, then add the OMS node:

Notes:
1. /app/oracle/OMS is a shared file system;
2. /app/oracle/oms_local is a local file system on each node;
3. The processes parameter of the OMR database needs to be raised from the default 300 to 600.

1) Add Agent


2) Add OMS node
From the Enterprise menu, choose Provisioning and Patching, then Procedure Library.
Locate Add Oracle Management Service and click Launch.
Note: keep the OMS ports as consistent as possible across the OMS nodes, to avoid complicating subsequent configuration and maintenance.


4.4 Testing OMS High availability

The OEMCC web interface can be accessed normally through the IP address of either node 1 or node 2:

If either node is shut down, access through the surviving node is unaffected.
Appendix: commands to start, stop, and check the status of OMS:
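The per-node check can also be scripted. A minimal sketch: port 7802 is the HTTPS console port reported by emctl status in this install, and the curl probe is left commented out because it only makes sense inside the deployment network.

```shell
# Probe each OMS node's console URL directly (curl line commented out;
# uncomment when running inside the deployment network).
for host in 10.1.43.211 10.1.43.212; do
    url="https://${host}:7802/em"
    echo "checking ${url}"
    # curl -k --max-time 5 -s -o /dev/null -w '%{http_code}\n' "$url"
done
```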

-- Check OMS status:
$OMS_HOME/bin/emctl status oms
$OMS_HOME/bin/emctl status oms -details
-- Stop OMS:
$OMS_HOME/bin/emctl stop oms
$OMS_HOME/bin/emctl stop oms -all
-- Start OMS:
$OMS_HOME/bin/emctl start oms
5. SLB Configuration

The load balancer used here is Radware, which must be configured by a load-balancer engineer. The following configuration requirements, compiled from Oracle's official documentation, are provided for reference:

Other specific configuration items, such as monitors, pools, and the required virtual servers, should be planned and designed against the official documentation using these requirements as a baseline; they are not repeated here.

Add name resolution for the load balancer address in /etc/hosts:

10.1.44.207 myslb.oracle.com

After the SLB is configured, the OMS configuration must be updated to match.

Configure OMS:

$OMS_HOME/bin/emctl secure oms -host myslb.oracle.com -secure_port 4903 -slb_port 4903 -slb_console_port 443 -slb_bip_https_port 5443 -slb_jvmd_https_port 7301 -lock_console -lock_upload

To configure the agent:

$AGENT_HOME/bin/emctl secure agent -emdWalletSrcUrl https://myslb.oracle.com:4903/em

To view OMS status:

[oracle@oemapp1 backup]$ $OMS_HOME/bin/emctl status oms -details
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (sysman) password:
Console Server Host        : oemapp1.oracle.com
HTTP Console Port          : 7788
HTTPS Console Port         : 7802
HTTP Upload Port           : 4889
HTTPS Upload Port          : 4903
EM Instance Home           : /app/oracle/oms_local/gc_inst/em/EMGC_OMS1
OMS Log Directory Location : /app/oracle/oms_local/gc_inst/em/EMGC_OMS1/sysman/log
SLB or virtual hostname    : myslb.oracle.com
HTTPS SLB Upload Port      : 4903
HTTPS SLB Console Port     : 443
HTTPS SLB JVMD Port        : 7301
Agent Upload is locked.
OMS Console is locked.
Active CA ID: 1
Console URL: https://myslb.oracle.com:443/em
Upload URL: https://myslb.oracle.com:4903/empbs/upload

WLS Domain Information
Domain Name             : GCDomain
Admin Server Host       : oemapp1.oracle.com
Admin Server HTTPS Port : 7102
Admin Server is RUNNING

Oracle Management Server Information
Managed Server Instance Name: EMGC_OMS1
Oracle Management Server Instance Host: oemapp1.oracle.com
WebTier is Up
Oracle Management Server is Up
JVMD Engine is Up

BI Publisher Server Information
BI Publisher Managed Server Name: BIP
BI Publisher Server is Up
BI Publisher HTTP Managed Server Port  : 9701
BI Publisher HTTPS Managed Server Port : 9803
BI Publisher HTTP OHS Port             : 9788
BI Publisher HTTPS OHS Port            : 9851
BI Publisher HTTPS SLB Port            : 5443
BI Publisher is locked.
BI Publisher Server named 'BIP' running at URL: https://myslb.oracle.com:5443/xmlpserver
BI Publisher Server Logs: /app/oracle/oms_local/gc_inst/user_projects/domains/GCDomain/servers/BIP/logs/
BI Publisher Log        : /app/oracle/oms_local/gc_inst/user_projects/domains/GCDomain/servers/BIP/logs/bipublisher/bipublisher.log

View Agent Status:

[oracle@oemapp1 backup]$ $AGENT_HOME/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version          : 13.2.0.0.0
OMS Version            : 13.2.0.0.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /app/oracle/oms_local/agent/agent_inst
Agent Log Directory    : /app/oracle/oms_local/agent/agent_inst/sysman/log
Agent Binaries         : /app/oracle/oms_local/agent/agent_13.2.0.0.0
Core JAR Location      : /app/oracle/oms_local/agent/agent_13.2.0.0.0/jlib
Agent Process ID       : 17263
Parent Process ID      : 17060
Agent URL              : https://oemapp1.oracle.com:3872/emd/main/
Local Agent URL in NAT : https://oemapp1.oracle.com:3872/emd/main/
Repository URL         : https://myslb.oracle.com:4903/empbs/upload
Started at             : 2018-10-12 15:49:58
Started by user        : oracle
Operating System       : Linux version 2.6.32-696.el6.x86_64 (amd64)
Number of Targets      : 34
Last Reload            : (none)
Last successful upload : 2018-10-12 15:50:53
Last attempted upload  : 2018-10-12 15:50:53
Total Megabytes of XML files uploaded so far : 0.17
Number of XML files pending upload           : 19
Size of XML files pending upload(MB)         : 0.07
Available disk space on upload filesystem    : 63.80%
Collection Status      : Collections enabled
Heartbeat Status       : Ok
Last attempted heartbeat to OMS  : 2018-10-12 15:50:33
Last successful heartbeat to OMS : 2018-10-12 15:50:33
Next scheduled heartbeat to OMS  : 2018-10-12 15:51:35
---------------------------------------------------------------
Agent is Running and Ready

As a final test, OEMCC can be accessed and operated normally through the load balancer address 10.1.44.207:

At this point, the OEMCC 13.2 cluster installation is complete.
