Multiple solutions to improve the concurrent processing capability of Python web frameworks

Source: Internet
Author: User
Tags: virtualenv, virtual environment


Common Python deployment methods include:

fcgi: use spawn-fcgi or the tool provided by the framework to spawn a listening process for each project, which then talks to the HTTP server.

wsgi: use the HTTP server's mod_wsgi module to run each project (WSGI is the standard interface between Python web applications or frameworks and web servers).

uwsgi: a tool that, like php-cgi, listens on a single port for unified management and load balancing. uWSGI speaks neither the wsgi protocol nor the fcgi protocol; it defines its own uwsgi protocol, which is said to be about 10 times faster than fcgi.


 


In fact, WSGI has two sides: the server and the framework (application), plus middleware, of course. Strictly speaking, WSGI is only a protocol that standardizes the interface between server and framework.


The WSGI server exposes server functionality through the WSGI interface. For example, mod_wsgi is a module that exposes Apache's server functionality as a WSGI interface.


The framework side of WSGI is what we usually mean by Django and friends. Note, however, that pure WSGI frameworks are rare, and WSGI-based frameworks often ship with a WSGI server of their own: Django and CherryPy, for example, both bundle a WSGI server for testing and expect a production-grade WSGI server for release. Some WSGI frameworks, such as Pylons and BFG, do not implement a WSGI server themselves and use Paste instead. Paste is a popular WSGI server that also ships with a lot of middleware; Flup is another library that provides middleware. Once you are clear about WSGI servers and applications, middleware follows naturally. Besides the usual session and cache layers, there is, for instance, a middleware under BFG used to skin a website; middleware can be used in many ways.

Let me add how a framework like Django runs on Apache in fastcgi mode. This requires flup's fcgi, or fastcgi.py (Eurasia also provides a fastcgi module); these tools translate the fastcgi protocol into the WSGI interface (in effect turning fastcgi into a WSGI server) so the framework can plug in. The whole stack is: django -> fcgi2wsgiserver -> mod_fcgi -> apache.

Although I am not a fan of WSGI, it is undeniable that WSGI is significant for Python on the web. If you are interested in designing your own web framework and do not want to deal with the socket layer or HTTP parsing, WSGI is a good place to start. There is a consensus in the Python community that building a web framework this way is natural and convenient; perhaps every Python programmer goes through a phase of writing one.
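Since WSGI is only a calling convention, the server/framework split can be shown in a few lines. The sketch below is illustrative, not from the article: it defines an application callable (the framework side) and then drives it the way any WSGI server would, using the stdlib wsgiref helpers.

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """The framework side of WSGI: a callable that takes the request
    environ and a start_response function, and returns the body."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, WSGI!"]

# The server side of WSGI: build an environ, call the application,
# and collect status, headers, and body.
environ = {}
setup_testing_defaults(environ)
collected = {}

def start_response(status, headers):
    collected["status"] = status
    collected["headers"] = dict(headers)

body = b"".join(application(environ, start_response))
print(collected["status"], body.decode())
```

mod_wsgi, uwsgi, Paste, and the bundled development servers all differ only in how they build the environ and what they do with the returned body; the callable in the middle stays the same.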


uWSGI has the following features:

Ultra-fast performance.

Low memory usage (roughly half that of Apache's mod_wsgi).

Multi-app management.

Detailed logging (useful for analyzing app performance and bottlenecks).

Highly customizable (memory limits, respawning workers after a set number of requests, and so on).



uWSGI official documentation:

http://projects.unbit.it/uwsgi/wiki/Doc


nginx.conf:

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:9090;
}

Start the app:

uwsgi -s :9090 -w myapp
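The `-w myapp` flag tells uwsgi to import a module named myapp and look for a callable named `application` in it. A minimal sketch of such a module (the contents are illustrative):

```python
# myapp.py -- loaded by `uwsgi -s :9090 -w myapp`
def application(environ, start_response):
    """uwsgi calls this once per request, per the WSGI convention."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from uwsgi"]
```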


uWSGI tuning parameters ~

The command above is the simplest single-project deployment. uwsgi has many more commendable options. Four worker processes:

uwsgi -s :9090 -w myapp -p 4

Master process plus four workers:

uwsgi -s :9090 -w myapp -M -p 4

Drop requests that have been running for more than 30 seconds:

uwsgi -s :9090 -w myapp -M -p 4 -t 30

Limit memory (address space) to 128 MB:

uwsgi -s :9090 -w myapp -M -p 4 -t 30 --limit-as 128

Respawn a worker after it has served 10000 requests:

uwsgi -s :9090 -w myapp -M -p 4 -t 30 --limit-as 128 -R 10000

Run in the background, logging to uwsgi.log, and so on:

uwsgi -s :9090 -w myapp -M -p 4 -t 30 --limit-as 128 -R 10000 -d uwsgi.log
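These flags can also be kept in a config file and loaded with `uwsgi --ini uwsgi.ini`. A hypothetical ini equivalent of the last command above (option names follow uwsgi's long flags: `harakiri` corresponds to -t, `max-requests` to -R):

```ini
[uwsgi]
socket = :9090
module = myapp
master = true
processes = 4
harakiri = 30
limit-as = 128
max-requests = 10000
daemonize = uwsgi.log
```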



To allow multiple sites to share one uwsgi service, run uwsgi in virtual-site mode: remove "-w myapp" and add "--vhost":

uwsgi -s :9090 -M -p 4 -t 30 --limit-as 128 -R 10000 -d uwsgi.log --vhost

virtualenv must also be configured; virtualenv is a handy virtual-environment tool for Python:
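The virtualenv example appears to have been lost from the original. As a stand-in, here is the general shape using the stdlib venv module (the /tmp path is illustrative; the nginx config below assumes the environment lives at /var/www/myenv):

```shell
# Create an isolated environment and confirm its interpreter prefix.
# (virtualenv /var/www/myenv would be the classic equivalent.)
python3 -m venv /tmp/myenv
/tmp/myenv/bin/python -c "import sys; print(sys.prefix)"
```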



Finally, configure nginx. Note that each site must occupy a separate server block; pointing different locations of the same server block at different applications can fail for unclear reasons. This is a bug.

server {
    listen       80;
    server_name  app1.mydomain.com;
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9090;
        uwsgi_param UWSGI_PYHOME /var/www/myenv;
        uwsgi_param UWSGI_SCRIPT myapp1;
        uwsgi_param UWSGI_CHDIR /var/www/myappdir1;
    }
}
server {
    listen       80;
    server_name  app2.mydomain.com;
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9090;
        uwsgi_param UWSGI_PYHOME /var/www/myenv;
        uwsgi_param UWSGI_SCRIPT myapp2;
        uwsgi_param UWSGI_CHDIR /var/www/myappdir2;
    }
}


Restart the nginx service, and the two sites will share one uwsgi service.


Next, let's try fastcgi.


location / {
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9002;
}


location /static/ {
    root /path/to/www;
    if (-f $request_filename) {
        rewrite ^/static/(.*)$  /static/$1 break;
    }
}


Start a fastcgi process:


spawn-fcgi -d /path/to/www -f /path/to/www/index.py -a 127.0.0.1 -p 9002


Test with a small demo written in web.py:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import web

urls = ("/.*", "hello")
app = web.application(urls, globals())

class hello:
    def GET(self):
        return 'Hello, world!'

if __name__ == "__main__":
    web.wsgi.runwsgi = lambda func, addr=None: web.wsgi.runfcgi(func, addr)
    app.run()


Start nginx

nginx

And that's it ~



Finally, the following describes my own usual approach:

[Figure: nginxpython.jpg (http://img1.51cto.com/attachment/201306/012055220.jpg)]


Frontend nginx is responsible for load distribution:


During deployment, a single IP address with multiple ports is used. The server has four cores, so I decided to open four ports, 8885 through 8888, and modify nginx.conf accordingly:



upstream backend {
    server 127.0.0.1:8888;
    server 127.0.0.1:8887;
    server 127.0.0.1:8886;
    server 127.0.0.1:8885;
}
server {
    listen  80;
    server_name message.test.com;
    keepalive_timeout 65;    #
    proxy_read_timeout 2000; #
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass  http://backend;
    }
}


Then run four Python programs, one listening on each configured port.
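A hypothetical launcher for those four backends (app.py and its --port flag are assumptions; substitute whatever entry point your program actually has):

```shell
# Print one launch command per core/port; pipe through `sh` (or use
# eval) to actually start the processes in the background.
for port in 8885 8886 8887 8888; do
    echo "nohup python app.py --port=${port} >/dev/null 2>&1 &"
done
```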

Here I use tornado to write an example of executing a system program:

import subprocess
import tornado.ioloop
import time
import fcntl
import functools
import os


class GenericSubprocess(object):
    def __init__(self, timeout=-1, **popen_args):
        self.args = dict()
        self.args["stdout"] = subprocess.PIPE
        self.args["stderr"] = subprocess.PIPE
        self.args["close_fds"] = True
        self.args.update(popen_args)
        self.ioloop = None
        self.expiration = None
        self.pipe = None
        self.timeout = timeout
        self.streams = []
        self.has_timed_out = False

    def start(self):
        """Spawn the task.
        Throws RuntimeError if the task was already started."""
        if self.pipe is not None:
            raise RuntimeError("Cannot start task twice")
        self.ioloop = tornado.ioloop.IOLoop.instance()
        if self.timeout > 0:
            self.expiration = self.ioloop.add_timeout(time.time() + self.timeout, self.on_timeout)
        self.pipe = subprocess.Popen(**self.args)
        self.streams = [(self.pipe.stdout.fileno(), []),
                        (self.pipe.stderr.fileno(), [])]
        for fd, d in self.streams:
            # put both pipes into non-blocking mode
            flags = fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NDELAY
            fcntl.fcntl(fd, fcntl.F_SETFL, flags)
            self.ioloop.add_handler(fd,
                                    self.stat,
                                    self.ioloop.READ | self.ioloop.ERROR)

    def on_timeout(self):
        self.has_timed_out = True
        self.cancel()

    def cancel(self):
        """Cancel task execution.
        Sends SIGKILL to the child process."""
        try:
            self.pipe.kill()
        except OSError:
            pass

    def stat(self, *args):
        """Check process completion and consume pending I/O data."""
        self.pipe.poll()
        if self.pipe.returncode is not None:
            # cleanup handlers and timeouts
            if self.expiration is not None:
                self.ioloop.remove_timeout(self.expiration)
            for fd, dest in self.streams:
                self.ioloop.remove_handler(fd)
            # schedule callback (first try to read all pending data)
            self.ioloop.add_callback(self.on_finish)
        for fd, dest in self.streams:
            while True:
                try:
                    data = os.read(fd, 4096)
                    if len(data) == 0:
                        break
                    dest.extend([data])
                except OSError:
                    break

    @property
    def stdout(self):
        return self.get_output(0)

    @property
    def stderr(self):
        return self.get_output(1)

    @property
    def status(self):
        return self.pipe.returncode

    def get_output(self, index):
        return "".join(self.streams[index][1])

    def on_finish(self):
        raise NotImplementedError()


class Subprocess(GenericSubprocess):
    """Create new instance.

    Arguments:
        callback: method to be called after completion. It should take four
            arguments: statuscode (int), stdout (str), stderr (str),
            has_timed_out (boolean).
        timeout: wall time allocated for the process to complete. After it
            expires, cancel() is called. A negative timeout means no limit.

    The task is not started until start() is called. The process is then
    spawned with subprocess.Popen(**popen_args); stdout and stderr are
    always set to subprocess.PIPE.
    """

    def __init__(self, callback, *args, **kwargs):
        self.callback = callback
        self.done_callback = False
        GenericSubprocess.__init__(self, *args, **kwargs)

    def on_finish(self):
        if not self.done_callback:
            # prevent calling the callback twice
            self.done_callback = True
            self.ioloop.add_callback(functools.partial(
                self.callback, self.status, self.stdout, self.stderr, self.has_timed_out))


if __name__ == "__main__":
    ioloop = tornado.ioloop.IOLoop.instance()

    def print_timeout(status, stdout, stderr, has_timed_out):
        assert status != 0
        assert has_timed_out
        print "OK status:", repr(status), "stdout:", repr(stdout), "stderr:", repr(stderr), "timeout:", repr(has_timed_out)

    def print_ok(status, stdout, stderr, has_timed_out):
        assert status == 0
        assert not has_timed_out
        print "OK status:", repr(status), "stdout:", repr(stdout), "stderr:", repr(stderr), "timeout:", repr(has_timed_out)

    def print_error(status, stdout, stderr, has_timed_out):
        assert status != 0
        assert not has_timed_out
        print "OK status:", repr(status), "stdout:", repr(stdout), "stderr:", repr(stderr), "timeout:", repr(has_timed_out)

    def stop_test():
        ioloop.stop()

    t1 = Subprocess(print_timeout, timeout=3, args=["sleep", "5"])
    t2 = Subprocess(print_ok, timeout=3, args=["sleep", "1"])
    t3 = Subprocess(print_ok, timeout=3, args=["sleepdsdasdas", "1"])
    t4 = Subprocess(print_error, timeout=3, args=["cat", "/etc/sdfsdfsdfsdfsdfsdfsdf"])
    t1.start()
    t2.start()
    try:
        t3.start()  # nonexistent binary: Popen raises OSError
        assert False
    except OSError:
        print "OK"
    t4.start()
    ioloop.add_timeout(time.time() + 10, stop_test)
    ioloop.start()




You can start with uwsgi alone; if there is still pressure or congestion, add nginx in front for load balancing.

In my own experience, this is still reliable ~


This article is from the blog "Fengyun, it's her." Please keep the source: http://rfyiamcool.blog.51cto.com/1030776/1227629
