Nginx + uWSGI Performance Tuning on CentOS 7



In the previous chapter I set up Nginx + uWSGI with Keystone running behind it. But I ran into a problem: once HTTP requests reached a certain volume, Nginx returned 502 directly. That was a real headache for deploying a large-scale OpenStack cluster, because the Keystone API on its own can withstand fairly heavy concurrent load. It made me wonder why Nginx, acting as the web front end, was performing worse than the Keystone API's own capacity. So I ran stress tests and looked for the cause.






Server configuration: CPU + 200 GB memory + 1 TB disk



System: CentOS 7.1



Deployment: Keystone + Nginx + uWSGI






First, deploy Keystone behind Nginx + uWSGI.



Then launch a stress test from another server in the same data center:


ab -r -n 100000 -c 100 -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 65e194" http://keystonehost:35357/v2.0/


All HTTP requests were handled normally. Then I raised the concurrency to 200 and found that about 50% of the requests returned 502.



Next I replaced Nginx + uWSGI with Keystone's own service (openstack-keystone) and ran the same test at 200 concurrent connections: every request was processed normally, with no 502s.


ab -r -n 100000 -c 200 -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 65e194" http://keystonehost:35357/v2.0/





So the bottleneck was evidently in Nginx and uWSGI, which were limiting the number of concurrent requests.






After checking the official Nginx and uWSGI documentation, it turned out there are several parameters to tune, and the system itself also needs some tuning.



1. First, take a look at the settings in nginx.conf that affect request handling.





user nginx;
worker_processes xx;  # set to the number of CPU cores for better performance

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

worker_rlimit_nofile 65535;  # max open file descriptors per worker; worker_connections must not exceed this

events {
    worker_connections 65535;  # max connections per worker; depends on system limits
}
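Before picking values for worker_processes and worker_rlimit_nofile, it helps to see what the host actually allows. A minimal read-only sketch using standard coreutils (no root needed):

```shell
# Check CPU core count and the current per-process fd limit, to pick
# sane values for worker_processes and worker_rlimit_nofile.
cpus=$(nproc)
fd_limit=$(ulimit -n)
echo "suggested worker_processes: ${cpus}"
echo "current per-process fd limit: ${fd_limit}"
```

If the fd limit printed here is below the worker_rlimit_nofile you intend to set, raise it via /etc/security/limits.conf (or the systemd unit's LimitNOFILE) as well.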





2. Next, the system-level configuration in /etc/sysctl.conf.


net.core.somaxconn = 2048  # maximum listen-queue length per port; a global parameter. The default is 128; raise it according to your system configuration.
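A quick way to inspect the current value before changing it (read-only, no root needed, assuming a Linux /proc filesystem):

```shell
# Read the current listen-backlog ceiling.
# To apply the tuned value at runtime you would run, as root:
#   sysctl -w net.core.somaxconn=2048
# and persist it in /etc/sysctl.conf so it survives a reboot.
somaxconn=$(cat /proc/sys/net/core/somaxconn)
echo "current net.core.somaxconn: ${somaxconn}"
```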





3. uWSGI configuration optimization in /etc/uwsgi.d/admin.ini.


workers = 24  # number of worker processes handling requests
listen = 65535  # listen-queue (socket backlog) size. The default is 100; raise it according to your system configuration.


This explains why, before tuning, concurrency topped out around 100: uWSGI's default listen queue is only 100 connections.
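One caveat worth checking before restarting the service: uWSGI will refuse to start if its listen value exceeds the kernel's net.core.somaxconn. A small sketch to compare the two (listen_size here mirrors the value set in /etc/uwsgi.d/admin.ini):

```shell
# Compare the uWSGI listen queue against the kernel backlog ceiling;
# if somaxconn is smaller, uWSGI will complain and exit at startup.
listen_size=65535   # value from /etc/uwsgi.d/admin.ini
somaxconn=$(cat /proc/sys/net/core/somaxconn)
if [ "$somaxconn" -lt "$listen_size" ]; then
    echo "net.core.somaxconn (${somaxconn}) is below listen (${listen_size}); raise the sysctl first"
fi
```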



With tuning done, I measured performance again, this time at 10,000 concurrent connections:


ab -r -n 100000 -c 10000 -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 65e194" http://keystonehost:35357/v2.0/


Stress test report:

Server Software:        nginx/1.8.1
Server Hostname:        keystonehost
Server Port:            35357
Document Path:          /v2/
Document Length:        450 bytes
Concurrency Level:      15000
Time taken for tests:   30.136 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      72900000 bytes
HTML transferred:       45000000 bytes
Time per request:       4520.417 [ms] (mean)
Transfer rate:          2362.33 [Kbytes/sec] received





Then I switched back to Keystone's own service (openstack-keystone) and ran the test at 10,000 concurrent connections.



Stress test report:

Server Software:
Server Hostname:        keystonehost
Server Port:            35357
Document Path:          /v2/
Document Length:        450 bytes
Concurrency Level:      10000
Time taken for tests:   100.005 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      70800000 bytes
HTML transferred:       45000000 bytes
Time per request:       10000.507 [ms] (mean)
Transfer rate:          691.37 [Kbytes/sec] received





As you can see, at 10,000 concurrent connections the openstack-keystone service has reached its limit (roughly 1,000 requests per second).






So after tuning, the high performance of Nginx + uWSGI showed itself immediately.



If you have better tuning methods, you are welcome to share them so we can learn together.



This article is from the "Nginx Build Keystone" blog, please be sure to keep this source http://evawalle.blog.51cto.com/9555145/1750801

