Basic distinction between the web.py/flup and Tornado web process handling models (TBC)


Tornado is known for its ability to handle large numbers of concurrent connections with the help of OS event notification mechanisms such as epoll and kqueue.

Web.py is a web framework for Python. It relies on other server packages to act as a complete web server.

When setting things up, Tornado can work on its own, but the common arrangement is to put it behind an nginx server (via proxy_pass) that handles static resources and other matters, leaving Tornado to deal with the dynamic requests coming through the reverse proxy.

In contrast, web.py usually requires flup to run as a FastCGI service, which is then connected to nginx via fastcgi_pass directives.
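For reference, a minimal web.py app wired up for FastCGI looks roughly like this. This is a sketch of the classic web.py/flup recipe, not the exact test gist [1]; the socket details depend on how spawn-fcgi launches it.

    # Minimal web.py app served over FastCGI via flup (sketch, not gist [1]).
    # When launched under spawn-fcgi, flup picks up the listening socket and
    # nginx connects to it through fastcgi_pass.
    import web

    urls = ("/", "index")

    class index:
        def GET(self):
            return "hello"

    app = web.application(urls, globals())

    if __name__ == "__main__":
        # Classic web.py FastCGI recipe: route app.run() through flup's
        # FastCGI server instead of the built-in HTTP server.
        web.wsgi.runwsgi = lambda func, addr=None: web.wsgi.runfcgi(func, addr)
        app.run()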

To a new user they appear similar to some extent. I wrote a few very simple scripts [1]/[2] and tested them on the same server behind the same nginx configuration, each running two processes (web.py via spawn-fcgi, Tornado via tornado.process.fork_processes) and returning a simple string from the GET handler. On average, nginx + Tornado gives 75-125 ms serve time per request, while nginx + web.py sits at roughly 3 s per request, both at 50 concurrent clients (ab -c 50). With fewer concurrent clients the difference can be as much as 10 times.
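The Tornado side was along these lines (a sketch of the shape of [2], not the exact gist; the port and handler names are placeholders):

    import tornado.httpserver
    import tornado.ioloop
    import tornado.netutil
    import tornado.process
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("hello")

    def main():
        sockets = tornado.netutil.bind_sockets(8888)
        # 0 forks one worker per CPU core -- see issue 4 below for what this
        # means on a single-core VM.
        tornado.process.fork_processes(0)
        app = tornado.web.Application([(r"/", MainHandler)])
        server = tornado.httpserver.HTTPServer(app)
        server.add_sockets(sockets)
        tornado.ioloop.IOLoop.current().start()

    if __name__ == "__main__":
        main()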

Then I added a minor delay in the GET handler of both scripts (with time.sleep(0.1)) to simulate some processing time. Before looking at these two solutions my web service was dealing with relatively time-consuming filesystem requests, so this simulation is quite close to the kind of problem I am facing. Surprisingly, the nginx + Tornado script slowed to 5 s+ per request and became much, much slower than web.py.
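The change amounted to something like this in the Tornado handler (with the equivalent sleep in the web.py GET method):

    import time

    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            time.sleep(0.1)   # stand-in for a slow filesystem/DB call
            self.write("hello")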

I understood how Tornado works from my understanding of epoll/I/O multiplexing; web.py, however, was something of a mystery, so I had to look into the source code. I saw that the web.py snippet calls into flup through a "runwsgi" function, which in turn creates a ThreadedServer inside flup. ThreadedServer has a familiar-looking addJob method, and within a minute I could see that for each client socket returned from the select call (ThreadedServer.run), a new "job", and hence a new thread in the pool, is created: the legendary one-thread-per-client model (a rough sketch follows the list below). Even without looking at how web.py (and my code) is called back from flup, I knew that:

  1. Blocking calls (whether blocking I/O operations or other things like the time.sleep call here) are handled by threads and the OS scheduler.
  2. With a large number of simple, non-blocking, one-off requests, this must be slower than the epoll approach.
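Here is a rough sketch of that one-thread-per-client model. It is illustrative only, not flup's actual code; flup drives a select() loop feeding a thread pool rather than spawning a bare thread per accept:

    import socket
    import threading

    def handle_client(conn):
        # A blocking call here (I/O, sleep, a slow DB query) only stalls
        # this one thread; the accept loop keeps taking new clients.
        try:
            conn.recv(4096)  # read the request (greatly simplified)
            conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 5\r\n\r\nhello")
        finally:
            conn.close()

    def serve(port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(128)
        while True:
            conn, _ = srv.accept()  # flup does this behind a select() call
            threading.Thread(target=handle_client, args=(conn,)).start()

    if __name__ == "__main__":
        serve()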

However, when blocking operations appear (such as my sleep call, filesystem access, DB calls, etc.), epoll does not help: the process waits for the operation to finish before returning to the event loop. Since there are only two Tornado processes running, no more than two clients can be served at the same time, even if both are merely sleeping. With flup, threads are created and scheduled by the OS, so they can keep running as long as the CPU isn't completely hogged.
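A quick back-of-the-envelope calculation with the numbers from the test above shows why the Tornado setup degrades so badly here:

    # With P worker processes each blocked for S seconds per request, the
    # whole setup serves at most P/S requests per second, no matter how
    # many connections epoll can watch.
    P, S, CLIENTS = 2, 0.1, 50

    throughput = P / S            # 20 requests/second at best
    wave = CLIENTS / throughput   # ~2.5 s just to drain one wave of 50 clients
    print(throughput, wave)

With ab keeping 50 requests in flight continuously, multi-second serve times are exactly the order of magnitude this predicts.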

If we look at the packages available for Tornado, apart from the server package there are HTTP client packages, async MongoDB packages, and some authentication packages built around the HTTP client package. We can clearly see that, to make better use of Tornado, an application needs to treat the epoll-driven IOLoop as its core. The Tornado framework handles all the network waiting (using epoll), and a carefully crafted app then responds to all events in a timely manner. It's very different from the traditional CGI style of request handling, but it's definitely a step in the right direction.
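For example, a handler that waits on an external HTTP service hands the waiting back to the IOLoop instead of blocking the process. This sketch uses the coroutine style of current Tornado versions (the 2012-era API did the same thing with callbacks), and the backend URL is a placeholder:

    import tornado.httpclient
    import tornado.ioloop
    import tornado.web

    class ProxyHandler(tornado.web.RequestHandler):
        async def get(self):
            client = tornado.httpclient.AsyncHTTPClient()
            # While this request is in flight, the IOLoop is free to serve
            # other clients; nothing in this process blocks on the socket.
            response = await client.fetch("http://backend.internal:9000/data")
            self.write(response.body)

    def main():
        app = tornado.web.Application([(r"/", ProxyHandler)])
        app.listen(8888)
        tornado.ioloop.IOLoop.current().start()

    if __name__ == "__main__":
        main()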

Issues left over:

1.

Tornado didn't have an async MySQL package available, and FriendFeed (Tornado's original authors) mentioned [3] that:

    "We experimented with different async DB approaches, but settled on
    synchronous at FriendFeed because generally if our DB queries were
    backlogging our requests, our backends couldn't scale to the load
    anyway. Things that were slow enough were abstracted to separate
    backend services which we fetched asynchronously via the async HTTP
    module."
Question: how to best arrange resources so that separate services handle the blocking parts? On what principles should such design decisions be made?
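One option, which was not available to the setup described here (IOLoop.run_in_executor only appeared in later Tornado releases, around 5.0), is to keep the IOLoop as the core and push blocking work onto a thread pool instead of a separate backend service. A minimal sketch, with the pool size and handler names as assumptions:

    from concurrent.futures import ThreadPoolExecutor
    import time

    import tornado.ioloop
    import tornado.web

    pool = ThreadPoolExecutor(max_workers=4)  # size is an assumption; tune per workload

    def slow_query():
        time.sleep(0.1)   # stand-in for a blocking DB or filesystem call
        return "result"

    class ReportHandler(tornado.web.RequestHandler):
        async def get(self):
            # The blocking call runs on a pool thread; the IOLoop keeps
            # serving other clients until the result is ready.
            result = await tornado.ioloop.IOLoop.current().run_in_executor(pool, slow_query)
            self.write(result)

The FriendFeed answer quoted above, moving the slow parts behind separate services and fetching them with the async HTTP client, is the other end of the same trade-off.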

2.

When testing the response speed of raw Tornado (without nginx) using the ab shipped with OS X ML (Mountain Lion), requests failed from time to time. I saw mentions that these failures are caused by bugs in the version of ab shipped with OS X. Should re-test with palb (a Python implementation of ab) or other implementations.

BUG: http://simon.heimlicher.com/articles/2012/07/08/fix-apache-bench-ab-on-os-x-lion

Tested with palb: with or without set_header('Connection', 'keep-alive'), such connection reset errors do not appear.

3.

nginx speaks HTTP/1.0 when used as a (reverse) proxy server, which closes the connection after each request. How does this affect the performance of the Tornado server? I suppose epoll is designed for Comet-style usage (a large number of mostly idle, long-lived connections)?

Answer: nginx actually supports HTTP/1.1 and keepalive for upstream proxy settings. See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

[4] mentions using HAProxy instead of nginx. Might be worth looking into.

4.

With the Tornado code as written, even though there are two Python processes running after starting the server, only one of them accepts requests. Possible workaround: run on two ports and load balance with nginx, but that isn't ideal. The fork_processes model should have a way around this problem.

Solution:

fork_processes(0)/start(0) creates worker processes based on the number of CPUs in the system. Observing two Python processes means only one worker process was created, so only one process is running request handlers: the testing VM was a single-core system. Specifying start(2) results in 3 Python processes, with two of them sharing the load.
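In code, the fix is just the argument passed to start() (or fork_processes()); the handler and port here are placeholders:

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("hello")

    def main():
        app = tornado.web.Application([(r"/", MainHandler)])
        server = tornado.httpserver.HTTPServer(app)
        server.bind(8888)
        # start(0) forks one worker per CPU core: on a single-core VM that is
        # 1 supervisor + 1 worker = 2 Python processes, only one of which
        # handles requests.  start(2) forks two workers (3 processes total)
        # that share the listening socket.
        server.start(2)
        tornado.ioloop.IOLoop.current().start()

    if __name__ == "__main__":
        main()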

Links:

[1] web.py test script: https://gist.github.com/4371628

[2] Tornado test script: https://gist.github.com/4363542

[3] http://news.ycombinator.com/item?id=3025475

[4] "need help on putting tornado apps on production", great info packed-https://groups.google.com/forum? Fromgroups = #! Topic/Python-Tornado/62tlw_gmp94
