Solution to MongoDB connection failure exceeding 1000 on CentOS6

Source: Internet
Author: User
Tags: install, mongodb


Problem description:

In the production environment, the CPU was running at full capacity and MongoDB could not accept more than 1000 connections.


1. Check the mongodb log, which reports the following error:

Wed Nov 21 15:26:09 [initandlisten] pthread_create failed: errno: 11 Resource temporarily unavailable

Wed Nov 21 15:26:09 [initandlisten] can't create new thread, closing connection

2. Testing on a comparable CentOS 5 machine showed no problem opening 2000 connections.
3. Searched Google for the problem with the keywords "mongod.conf can't create new thread, closing connection".
4. Located the problem: CentOS 6 differs from CentOS 5 in that it ships a default nproc configuration file, /etc/security/limits.d/90-nproc.conf. By default it sets nproc for ordinary users to 1024, and because mongodb runs as the non-root user mongod, the connection count could not grow past that limit.
5. Edited /etc/security/limits.d/90-nproc.conf and changed 1024 to 20480, which solved the problem.

[root@test ~]# cat /etc/security/limits.d/90-nproc.conf

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     20480
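After editing the file, it is worth confirming which limits are actually in effect. The sketch below is illustrative rather than from the article: the grep paths are the standard CentOS 6 locations, and the commented su line assumes a service account named mongod exists on your system.

```shell
# List every nproc setting PAM will consider (limits.conf plus limits.d).
grep -rs nproc /etc/security/limits.conf /etc/security/limits.d/ || true

# Effective process limit for the current shell:
ulimit -u

# Effective limit for the mongod service account (uncomment if the user exists):
# su - mongod -s /bin/bash -c 'ulimit -u'
```

Checking as the mongod user matters because PAM applies the limits per user at login, so root's own ulimit output says nothing about what mongod sees.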

Maximum number of open file handles and user processes:

When deploying applications on Linux, you sometimes hit the error "Socket/File: Can't open so many files". The open-file limit also caps the maximum number of concurrent connections a server can handle. Linux limits the number of file handles a process may open, and the default is not high, usually 1024; a production server can easily reach that number. The following sections describe how to check and raise the system defaults.

View Method

We can use ulimit -a to view all current limit values.

[root@test ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 256469
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 64000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

"open files (-n)" is the Linux limit on the number of file handles a single process may open; the default value is 1024.
(This count includes open sockets, so it can affect the number of concurrent database connections.)
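Beyond ulimit -a, the individual values can be queried directly. The commands below use standard shell builtins; the /proc path is the Linux kernel's system-wide counter and will not exist on other systems.

```shell
ulimit -Sn   # soft limit on open files for this shell
ulimit -Hn   # hard limit on open files for this shell
# System-wide ceiling on open file handles (Linux only):
cat /proc/sys/fs/file-max 2>/dev/null || true
```

A per-process limit of 64000, as in the output above, is still bounded by the system-wide fs.file-max value.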

The correct method is to modify /etc/security/limits.conf.
The file contains detailed comments, for example:
hadoop soft nofile 32768
hadoop hard nofile 65536

hadoop soft nproc 32768
hadoop hard nproc 65536

This changes the file handle limit to a soft limit of 32768 and a hard limit of 65536. Setting the domain field at the start of a line to an asterisk applies the limit globally; you can also set different limits for different users.

Note: the hard limit is the actual enforced limit, while the soft limit is a warning threshold that only produces a warning. The ulimit command itself distinguishes the two: -H sets the hard limit and -S sets the soft limit.

The soft limit is what is displayed by default. If neither flag is given when changing a value, both limits are modified together.
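A quick way to see the soft/hard distinction in action: a shell may lower its soft limit and later raise it again, but never above the hard limit. The value 512 below is an arbitrary illustration, and the whole thing runs in a subshell so the current shell's limits are untouched.

```shell
(
  ulimit -Sn 512                 # lower the soft open-files limit
  echo "soft limit now: $(ulimit -Sn)"
  ulimit -Sn "$(ulimit -Hn)"     # raise it back up to the hard limit
  echo "soft limit restored to: $(ulimit -Sn)"
)
```

Only root may raise a hard limit once it has been lowered, which is why the hard value is the real ceiling.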

Modify nproc in /etc/security/limits.d/90-nproc.conf.

How to modify the connection limit:

Temporary modification (changes the number of processes the user may open in the current shell only):

# ulimit -u xxx

Permanent modification. The safe approach is to modify /etc/security/limits.d/90-nproc.conf and /etc/security/limits.conf at the same time, as follows:

limits_conf = /etc/security/limits.conf:
* soft nproc s1
* hard nproc h1

nproc_conf = /etc/security/limits.d/90-nproc.conf:
* soft nproc s2
* hard nproc h2

s1, h1, s2, and h2 must all be meaningful numbers. The value then displayed by ulimit -u is min(h1, h2).

Therefore, s1 = s2 = h1 = h2 is usually set. For example, add the following to both limits_conf and nproc_conf:
* soft nproc 65536
* hard nproc 65536
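To illustrate the article's min(h1, h2) rule without touching system files, the sketch below writes two sample fragments to /tmp (hypothetical stand-ins for limits.conf and 90-nproc.conf, with deliberately different values) and picks the smaller hard nproc entry:

```shell
# Sample stand-ins for /etc/security/limits.conf and limits.d/90-nproc.conf.
cat > /tmp/limits.conf.sample <<'EOF'
* soft nproc 32768
* hard nproc 32768
EOF
cat > /tmp/90-nproc.conf.sample <<'EOF'
* soft nproc 65536
* hard nproc 65536
EOF

# The effective hard limit is the smaller of the two hard nproc entries.
awk '$2 == "hard" && $3 == "nproc" { if (min == "" || $4 + 0 < min + 0) min = $4 }
     END { print "effective hard nproc:", min }' \
    /tmp/limits.conf.sample /tmp/90-nproc.conf.sample
```

Here the smaller value 32768 from the first file wins, so raising only one of the two files would have no effect; that is why the article recommends setting all four values equal.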

