Analysis of first-time Python deployment problems

Before getting into this part, I want to revisit a topic we have discussed before and will keep coming back to: how to keep user state in a centralized place, especially when a Python application is deployed across a large cluster.

The same goes for Django. This golden rule is not tied to any particular language or framework; it is a higher-level design principle. So where can the state live? The popular options at the moment are the database (an in-memory table or a regular table), memcached, or cookies.
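To make the options concrete, here is a minimal sketch of how they map onto Django's session framework. The backend paths and the CACHES layout assume a reasonably recent Django release, so treat this as an illustration rather than a drop-in configuration.

```python
# settings.py (sketch) -- pick ONE of the session backends below.

# 1) Database-backed sessions: state lives in the django_session table.
SESSION_ENGINE = "django.contrib.sessions.backends.db"

# 2) Cache-backed sessions: state lives in memcached (fast, but evictable).
# SESSION_ENGINE = "django.contrib.sessions.backends.cache"
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",
    }
}

# 3) Cookie-backed sessions: state travels with the client and is subject
#    to the cookie size limit (signed_cookies backend, newer Django).
# SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
```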

However, these options are not interchangeable. For example, when the per-user business data is large, cookies are a poor fit because the data can exceed the cookie size limit (roughly 4 KB per cookie). Unfortunately, memcached, with its slab allocator, has restrictions of its own.

If the sizes of the state values vary widely, data can be evicted and lost. ahuaxuan ran into exactly this in a test environment a long time ago; it did not happen online only because the production memcached instances were much larger. The underlying cause is described in another of ahuaxuan's articles.
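The practical consequence is that anything kept in memcached must be treated as evictable. Below is a minimal sketch using the python-memcached client that falls back to the authoritative store when a key is missing; the load_session_from_db helper is hypothetical.

```python
import memcache  # python-memcached client

mc = memcache.Client(["127.0.0.1:11211"])

def get_session(session_id):
    """Treat memcached strictly as a cache: it may have evicted our key."""
    data = mc.get("session:" + session_id)
    if data is None:
        # Evicted (or never cached) -- fall back to the authoritative store.
        data = load_session_from_db(session_id)  # hypothetical DB helper
        mc.set("session:" + session_id, data, time=3600)
    return data
```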

Putting the state in the DB is also an option, but then the extra load on the database is obviously something we have to take into account. Besides these mainstream choices there are many others, such as memcachedb or TimesTen. Whichever we pick, when the state data is important we must thoroughly study and understand the storage technology behind it; otherwise we may run into unexpected situations.

For example, for a long time I believed memcached performed LRU per slab (a side note: when evicting, memcached does not traverse the whole chunk linked list of a slab; it only checks the first 50 entries, purely for speed).

Currently there are basically two deployment strategies for Django. The first is mod_python, which runs Django inside the Apache processes; the other is a web server plus FastCGI. Each approach has its own advantages and disadvantages.

In the mod_python setup, the web server has to be Apache. Apache has been number one in the web server field for many years, with a market share far ahead of everyone else, but in recent years several other web servers have emerged, notably lighttpd and nginx.

They challenge Apache with high performance and low memory consumption. Since mod_python is an Apache plug-in, choosing it ties the deployment to Apache; that said, Apache + mod_python is a very stable solution.

The second approach is web server + FastCGI. Here the web server can be chosen freely: most web servers support FastCGI, including the lighttpd and nginx we are familiar with. It is also said that in many cases FastCGI offers better security and performance than mod_python.

For small websites, FastCGI is more lightweight than Apache. It is said that QQ's personal space is implemented in C++ with FastCGI. Where is the advantage in that? C++ is very fast, which means each FastCGI process finishes a request very quickly.

For example, a request that takes 50 milliseconds in Python might take 20 milliseconds in C++. The numbers are not meant to be accurate, only to illustrate the point: a single FastCGI process that needs 50 ms per request can serve at most about 20 requests per second, while one that needs 20 ms can serve about 50. C++ is a bit more troublesome to develop in, but in raw performance it is hard to beat. What the example shows is that the throughput of a FastCGI deployment depends on how fast each process can handle a request.

Let's take a look at the general FastCGI request flow (a minimal sketch of the external process follows the list):
1. The web server receives a page request from the client.
2. The web server delegates the request to an external FastCGI process (the two communicate over a socket).
3. The FastCGI process handles the request and returns the resulting dynamic page content to the web server.
4. The web server forwards the result returned by FastCGI to the client's browser.
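As a concrete illustration of steps 2 and 3, here is a minimal sketch of such an external FastCGI process written with the flup package (the library Django's old runfcgi command used under the hood). The trivial WSGI application is a stand-in for a real Django app, and the address is just an example.

```python
# fastcgi_app.py -- a minimal FastCGI external process (sketch).
# The web server (nginx/lighttpd/Apache) connects to 127.0.0.1:8000 over a
# socket, hands the request to this process, and forwards whatever we
# return back to the browser.
from flup.server.fcgi import WSGIServer

def application(environ, start_response):
    # Stand-in for a real Django WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a FastCGI worker\n"]

if __name__ == "__main__":
    WSGIServer(application, bindAddress=("127.0.0.1", 8000)).run()
```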

For us, the critical part is steps 2 and 3: the speed at which the external FastCGI processes handle requests largely determines overall performance. Because FastCGI is process-based, we need to start an appropriate number of FastCGI processes for our application: starting too many wastes resources, while starting too few hurts performance. This is similar to sizing the request-handling threads in a Tomcat deployment, although Tomcat's request handler threads are clearly easier to configure, since there we only need to set the maximum size of the thread pool and the maximum number of idle threads.
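For reference, here is a hedged sketch of how the process pool is typically tuned with flup's preforking FastCGI server. The keyword arguments minSpare, maxSpare and maxChildren are the ones Django's old runfcgi passed through to flup, but verify them against the flup version you actually use.

```python
# Preforking FastCGI server: the size of the process pool is the knob
# that matters. Too many children waste memory; too few become a bottleneck.
from flup.server.fcgi_fork import WSGIServer

from fastcgi_app import application  # the WSGI app sketched above

if __name__ == "__main__":
    WSGIServer(
        application,
        bindAddress=("127.0.0.1", 8000),
        minSpare=2,      # keep at least this many idle workers ready
        maxSpare=5,      # reap idle workers beyond this count
        maxChildren=20,  # hard upper bound on concurrent worker processes
    ).run()
```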
