High-performance web serving with gevent and gunicorn: usage and performance tests


What is a WSGI server?


Frameworks such as Flask, web.py, Django, and CherryPy all ship with a built-in WSGI server. The performance of these built-in servers is poor, though; they are mostly meant for testing. For production deployments, use a high-performance WSGI server, or run uWSGI behind nginx.


As the WSGI specification says, the protocol defines a set of interfaces that standardize communication between the server and the application. What does this interface look like? It is very simple, especially on the application side.
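To make the interface concrete, here is a minimal, self-contained sketch of the contract: the server calls the application with an `environ` dict and a `start_response` callable, and the application returns an iterable of bytes. The `fake_server` helper below is purely my own illustration of the server side, not part of any framework:

```python
# A minimal WSGI application: a callable taking environ and start_response.
def application(environ, start_response):
    body = b"Hello, WSGI!"
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]  # an iterable of bytes

# A toy stand-in for the server side of the contract (illustrative only).
def fake_server(app):
    captured = {}
    def start_response(status, headers):
        # the server records the status line and headers the app hands back
        captured['status'] = status
        captured['headers'] = headers
    environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': '/'}
    body = b''.join(app(environ, start_response))
    return captured['status'], body

print(fake_server(application))  # ('200 OK', b'Hello, WSGI!')
```

Any server that drives this callable the same way — gunicorn included — can serve the application unchanged.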

Source http://rfyiamcool.blog.51cto.com/1030776/1276364


Gunicorn is a WSGI HTTP server for Python on Unix. It uses a pre-fork worker model and was ported from Ruby's Unicorn project. Gunicorn is compatible with a wide range of web frameworks, requires only simple configuration to run, is light on resources, and is very fast. Its strengths are tight integration with various web frameworks and easy deployment. It also has drawbacks: HTTP/1.1 is not fully supported (the default sync worker has no keep-alive), and its out-of-the-box concurrent performance is not high.


Install gunicorn:

pip install gunicorn




Here we will talk about the usage of gunicorn.

The simplest operation method is:


gunicorn code:application


Here, code is code.py, and application is the name of the WSGI callable.


By default, gunicorn listens on 127.0.0.1:8000, so it can be accessed locally at http://127.0.0.1:8000.



To make the server reachable over the network, bind it to a different address (you can also set the listening port):


gunicorn -b 10.2.20.66:8080 code:application


To support more concurrent access and make full use of a multi-core server, run more gunicorn worker processes:


gunicorn -w 8 code:application


This starts eight worker processes to handle HTTP requests concurrently, improving the system's throughput.
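Rather than picking a fixed worker count, the gunicorn documentation suggests sizing it from the number of CPU cores; the usual rule of thumb is (2 × cores) + 1. A small sketch of that heuristic (the helper name is my own):

```python
import multiprocessing

def suggested_workers():
    # common gunicorn rule of thumb: (2 x CPU cores) + 1
    return multiprocessing.cpu_count() * 2 + 1

print(suggested_workers())
```

You could then start the server with, e.g., gunicorn -w $(python -c "import multiprocessing; print(multiprocessing.cpu_count() * 2 + 1)") code:application. The right number still depends on how I/O-bound the workload is, so treat this as a starting point.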


In addition, gunicorn uses a synchronous blocking network model by default (-k sync), which may not be good enough for highly concurrent traffic. It also supports worker types better suited to that case, such as gevent or meinheld.


# gevent
gunicorn -k gevent code:application

# meinheld
gunicorn -k egg:meinheld#gunicorn_worker code:application

Of course, to use either of these you need to install them separately; see their respective documentation for details.


You can also pass a configuration file with the -c parameter.


Gunicorn configuration file:

[root@66 tmp]# cat gun.conf
import os

bind = '127.0.0.1:5000'
workers = 4
backlog = 2048
worker_class = "sync"
debug = True
proc_name = 'gunicorn.proc'
pidfile = '/tmp/gunicorn.pid'
logfile = '/var/log/gunicorn/debug.log'
loglevel = 'debug'



Python web example:

[root@66 tmp]# cat xiaorui.py
from flask import Flask
from flask import render_template_string
import os
from werkzeug.contrib.fixers import ProxyFix

app = Flask(__name__)

@app.route('/')
def index():
    return "worked!"

app.wsgi_app = ProxyFix(app.wsgi_app)

if __name__ == '__main__':
    app.run()


First, run the demo with Flask's built-in server.





The result is fine, and running the instance is straightforward too.




CPU consumption was high. The second problem is that the Flask built-in server starts returning errors under load. I have previously stress-tested tornado, web.py, flask, django, and bottle, and asked a friend to write a CC stress tool for the tests.

The result: tornado was clearly the best, followed by flask, then web.py, with django the worst.

Stress-testing django on its own is a real headache; fortunately everyone load-balances behind nginx anyway.


After testing a single instance, we moved on to gunicorn, the high-performance WSGI server.



After startup, the log output looks like this:

2013-08-12 21:59:34 [2097] [INFO] Starting gunicorn 17.5
2013-08-12 21:59:34 [2097] [DEBUG] Arbiter booted
2013-08-12 21:59:34 [2097] [INFO] Listening at: http://127.0.0.1:5000 (2097)
2013-08-12 21:59:34 [2097] [INFO] Using worker: sync
2013-08-12 21:59:34 [2102] [INFO] Booting worker with pid: 2102
2013-08-12 21:59:34 [2103] [INFO] Booting worker with pid: 2103
2013-08-12 21:59:34 [2104] [INFO] Booting worker with pid: 2104
2013-08-12 21:59:34 [2105] [INFO] Booting worker with pid: 2105


Let's test the performance ~




The previous run took about 6 seconds; with gunicorn it took about 2.4 seconds. The speed difference is clear.

To increase the speed further, adjust the number of workers in the gun.conf configuration file.


CPU load is now spread evenly across the worker processes, instead of falling on a single Flask server process.





Now let's test the strength of gevent's own WSGI server.


The Flask demo is the same as before.


For the gevent WSGI setup, I'll start with a simple configuration.

For a fuller example, see the WSGI demo in the official gevent documentation, which also covers the programming interface.


from gevent.wsgi import WSGIServer
from a import app

http_server = WSGIServer(('', 11111), app)
http_server.serve_forever()



We started to test the concurrency capability of gevent.

Server:

(screenshot of the server-side output)

Client:

(screenshot of the client-side output)


The timing speaks for itself; I won't belabor the point.



In fact, gunicorn's gevent worker operates on a principle similar to the code below (uwsgi + gevent works similarly).


#!/usr/bin/env python
# coding:utf-8

import sys
import os

import gevent
import gevent.monkey
import gevent.wsgi
import gevent.server
gevent.monkey.patch_all()

import socket
import multiprocessing

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    yield str(socket.getaddrinfo('xiaorui.cc', 80))

def handle_signals():
    # keep the parent process alive while the workers serve
    gevent.sleep(sys.maxint)

if __name__ == '__main__':
    # create one shared listening socket, then fork a WSGI server per process
    listener = gevent.server._tcp_listener(('', 8002), backlog=500, reuse_addr=True)
    for i in xrange(multiprocessing.cpu_count() * 2):
        server = gevent.wsgi.WSGIServer(listener, app, log=None)
        process = multiprocessing.Process(target=server.serve_forever)
        process.start()
    handle_signals()


uWSGI now supports gevent as well:


uwsgi --plugins python,gevent --gevent 100 --socket :3031 --module myapp



In short, the gunicorn + gevent combination is worth trying.




Below is my recommended deployment architecture. It resembles the uwsgi approach: nginx sits in front and proxy_passes requests to the app, with uwsgi or gunicorn doing the work behind it.


(Diagram: WSGI deployment architecture with nginx, gunicorn, and supervisor)



server {
    listen 80;
    server_name xiaorui.cc;

    root /www/xiaorui;

    access_log xiaorui/access.log;
    error_log xiaorui/error.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://127.0.0.1:8000;
            break;
        }
    }
}




nginx load-balances at the front, while gunicorn runs as many worker processes as there are cores, serving the app behind it.

Compared with uWSGI, Gunicorn has better coroutine support, namely via Gevent.

This leaves room for future growth and fine-grained tuning. In terms of performance, Gunicorn + Gevent is not much weaker than uWSGI; after all, it is no small feat for the latter's pure-C implementation to reach that level. Against Bjoern, the fastest of the WSGI servers, gunicorn has the comparable Meinheld worker, and Meinheld's HTTP support is better than Bjoern's. Gevent may not be the fastest of the asynchronous frameworks, but it is arguably the most complete, and its community is very active. Its convenient monkey patching also lets most applications switch over easily without code changes. Together, the two offer both stability and good performance.
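To illustrate the monkey patching mentioned above, here is a small sketch of my own (it assumes gevent is installed): after gevent.monkey.patch_all(), the normally blocking time.sleep becomes cooperative, so several greenlets can sleep concurrently instead of one after another.

```python
from gevent import monkey
monkey.patch_all()  # replace blocking stdlib calls (socket, time, ...) with cooperative ones

import time
import gevent

start = time.time()
# Five greenlets each "sleep" 0.2s; the patched sleep yields to the event
# loop, so the sleeps overlap instead of running back to back.
jobs = [gevent.spawn(time.sleep, 0.2) for _ in range(5)]
gevent.joinall(jobs)
elapsed = time.time() - start
print(round(elapsed, 1))  # roughly 0.2, not 1.0
```

This is exactly why unmodified code that uses sockets or sleeps can become concurrent under gevent without rewrites.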


For simple scaling, use Gunicorn + Gevent directly; for a fuller setup, put nginx in front of the uwsgi or gunicorn combination.


This article is from the "Fengyun, it's her" blog; reprinting is declined.
