Platform environment: Red Hat Enterprise Linux Server release 6.0 (Santiago)
DB version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
After the Oracle database is installed, starting the database reports the following error.
An ORA-27125 error can occur when you start or create a database. I installed Oracle 10.2.0.1 on Oracle Linux 6 and hit this error when creating the database.
The solution is to modify the /proc/sys/vm/hugetlb_shm_group file.
The following is a problem that Lao Yang has described, and the solution is the same:
He helped a customer solve a problem where an Oracle database on Linux could not start.
In the customer's Linux 5.6 x86-64 environment, starting the database after installation raised the error ORA-27125.
The description of ORA-27125 in the Oracle documentation is:
ORA-27125: unable to create shared memory segment
Cause: shmget() call failed
Action: contact Oracle Support
Some research showed that the problem is related to the hugetlb settings on Linux.
The workaround is simple. First, examine the group information for the oracle user:
[oracle@yans1 ~]$ id oracle
uid=500(oracle) gid=502(oinstall) groups=502(oinstall),501(dba)
[oracle@yans1 ~]$ more /proc/sys/vm/hugetlb_shm_group
0
Next, as root, register the dba group (GID 501) with the kernel by running:
# echo 501 > /proc/sys/vm/hugetlb_shm_group
Then start the database and the problem disappears.
So what is hugetlb_shm_group? The kernel documentation explains it as follows:
hugetlb_shm_group contains group id that is allowed to create SysV shared memory segment using hugetlb page.
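For reference, the same parameter can also be read through the sysctl interface; before the fix it shows the default value 0 (no group allowed), matching the /proc output above:
# sysctl vm.hugetlb_shm_group
vm.hugetlb_shm_group = 0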
Another error, an operating system validation failure during installation, can be resolved as follows:
When installing Oracle on a Linux distribution that is not one of the versions Oracle recommends, the following error may be reported and runInstaller cannot complete:
Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Failed <<<<
This problem can be solved in any of the following three ways.
1. Modify the Linux distribution identifier
If Oracle is being installed on redhat-5, the contents of the file /etc/redhat-release are
Red Hat Enterprise Linux Server release 5 (Tikanga)
Change this to a version supported by Oracle, for example:
Red Hat Enterprise Linux Server release 4 (Tikanga)
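A minimal sketch of how to make that change (back up the original file first; the release string shown is only an example):
# cp /etc/redhat-release /etc/redhat-release.orig
# echo "Red Hat Enterprise Linux Server release 4 (Tikanga)" > /etc/redhat-release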
2. Add the -ignoreSysPrereqs parameter when running runInstaller, for example:
./runInstaller -ignoreSysPrereqs
3. Modify the parameters in oraparam.ini
Add your system's version number to the certified versions list, as sketched below.
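As a sketch, the certified versions entry in oraparam.ini (typically install/oraparam.ini on the installation media) looks roughly like the following for the 10.2.0.1 installer; the added redhat-6 entry is illustrative and the exact list may differ by installer version:
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-6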
4. Creating the database
[root@db-server ~]# id oracle
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),501(dba)
[root@db-server ~]# echo 501 > /proc/sys/vm/hugetlb_shm_group
Then restart the database and the problem is resolved. However, I found that after the database server reboots, the problem reappears, and the command above has to be run again before the database will start. This treats the symptom, not the root cause.
Referring to http://wiki.debian.org/Hugepages, it turns out that simply setting vm.hugetlb_shm_group in /etc/sysctl.conf solves this problem once and for all.
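Applied to the case above, a minimal sketch (assuming the dba group's GID is 501, as in the id output; use your own GID):
# echo "vm.hugetlb_shm_group = 501" >> /etc/sysctl.conf
# sysctl -p
The general procedure from the Debian wiki page is reproduced below: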
Create a group for users of hugepages, retrieve its GID (in this example, 2021), then add yourself to the group.
Note: this should not be needed for libvirt (see /etc/libvirt/qemu.conf).
% groupadd my-hugetlbfs
% getent group my-hugetlbfs
my-hugetlbfs:x:2021:
% adduser franklin my-hugetlbfs
Adding user `franklin' to group `my-hugetlbfs' ...
Adding user franklin to group my-hugetlbfs
Done.
Edit /etc/sysctl.conf and add this text to specify the number of pages you want to reserve (see page size below):
# Allocate 256*2MiB for HugePageTables (YMMV)
vm.nr_hugepages = 256
# Members of group my-hugetlbfs (2021) can allocate "huge" shared memory segments
vm.hugetlb_shm_group = 2021
Create a mount point for the file system:
% mkdir /hugepages
Add this line in /etc/fstab (the mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files):
hugetlbfs /hugepages hugetlbfs mode=1770,gid=2021 0 0
Reboot. (This is the most reliable method of allocating huge pages before the memory gets fragmented. You don't necessarily have to reboot: you can try running sysctl -p to apply the changes. If grep "Huge" /proc/meminfo does not show all of the pages, you can try to free the cache with sync; echo 3 > /proc/sys/vm/drop_caches (where "3" stands for "purge pagecache, dentries and inodes"), then try sysctl -p again.)
limits.conf
You should configure the amount of memory a user can lock, so an application can't crash your operating system by locking all the memory. Note that any page can be locked in RAM, not just huge pages. You should allow the process to lock a little bit more memory than just the amount needed for hugepages.
## Get huge page size:
% grep "Hugepagesize:" /proc/meminfo
Hugepagesize:       4096 kB
## What's the current limit?
% ulimit -H -l
64
## Just add them up (how many pages do you want to allocate?) and set the limits accordingly (ulimit -l, and memlock in /etc/security/limits.conf), as in the sketch below.
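As a sketch, the corresponding /etc/security/limits.conf entries could look like this; the numbers are illustrative, assuming 256 pages of 4096 kB each (1048576 kB) plus a small margin, and the group name follows the wiki example above:
@my-hugetlbfs soft memlock 1060000
@my-hugetlbfs hard memlock 1060000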