Recommended: An Excellent RoR Deployment Solution Performance Evaluation
Tags: Rails, nginx, lighttpd, Apache, Google

At the beginning of this year I wrote "An In-Depth Analysis of RoR Deployment Solutions", which examined the advantages and disadvantages of the various deployment schemes under the Rails process model and discussed which scheme is optimal. I did not provide concrete benchmark data at the time, because I felt the operating mechanisms were clear enough that measurement was unnecessary. Even so, detailed performance data is always more convincing, so I was very pleased to see this valuable evaluation report by ShiningRay.
An Analysis of Ruby on Rails Deployment Schemes
(ShiningRay's blog post)
In this report, ShiningRay covers the mainstream deployment solutions with detailed analysis and extensive test data; it can fairly be called a comprehensive performance benchmark of RoR deployment solutions. There is nothing more to say: it is strongly recommended reading! Before reading it, I suggest you first read my "In-Depth Analysis of RoR Deployment Solutions" from the beginning of the year, which will help you better understand ShiningRay's evaluation.
Quoting the conclusions of the evaluation:
ShiningRay writes that the three Lighttpd solutions occupy the top three spots. Lighttpd + FastCGI is the best deployment method, improving performance by up to 50% over the currently popular nginx + Mongrel approach. The benefits of FastCGI are:
- A binary protocol, so no HTTP parsing is required
- Persistent connections with the front-end server
- No overhead from locking and context switching
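To make the FastCGI setup concrete, here is a minimal sketch of a lighttpd configuration serving a Rails app over FastCGI, in the style of the `config.lighttpd` that Rails generated at the time. All paths, process counts, and socket names are illustrative assumptions, not taken from the report:

```
# Hypothetical lighttpd + FastCGI setup for a Rails app (paths are assumptions)
server.modules       += ( "mod_fastcgi" )
server.document-root  = "/var/www/myapp/public"

# lighttpd spawns Rails FastCGI processes and talks to them over
# Unix sockets with a persistent binary-protocol connection.
fastcgi.server = ( ".fcgi" => ( "localhost" => (
    "min-procs" => 2,
    "max-procs" => 2,
    "socket"    => "/tmp/myapp.fcgi.socket",
    "bin-path"  => "/var/www/myapp/public/dispatch.fcgi"
) ) )
```

Because lighttpd itself manages the backend processes and reuses the sockets, there is no per-request connection setup and no HTTP parsing on the backend side, which is exactly where the cited performance edge comes from.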
In addition, Lighttpd's advantage over nginx lies in its larger receive buffers for requests and responses, which saves the overhead of multiple receive and send operations.
Lighttpd + Thin taking third place may seem surprising, but the reason is that Lighttpd 1.5 supports HTTP keepalive connections to its HTTP backends. In separate backend tests, Thin's keepalive performance at low concurrency is not inferior to FastCGI; moreover, Thin implements non-blocking I/O, whereas FastCGI is blocking. By contrast, neither HAProxy nor nginx supports HTTP keepalive to the backend.
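For reference, a Thin backend cluster like the one benchmarked can be started with Thin's own cluster mode; the port and server count below are illustrative assumptions, not the report's actual test parameters:

```shell
# Start three Thin servers on ports 5000-5002 in production mode
# (the front-end proxy would then balance across these ports)
thin start --servers 3 --port 5000 --environment production
```

Each Thin instance is an event-driven, non-blocking HTTP server, which is why it can hold its own against FastCGI once the front end keeps connections alive.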
The Swiftiply method also shows strong performance, thanks to its special architecture of "letting the backends actively connect to Swiftiply".
The much-discussed Passenger deployment method does not show any particular performance advantage in this test. However, when the number of concurrent connections stays below 300, the average responses per second of the Apache 2.2/prefork + Passenger deployment rises to 204.03. So with some optimization of the Apache configuration, it is still an efficient deployment solution. At the same time, Passenger is the easiest of all to configure, and achieving this level of performance with it is quite satisfactory.
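Passenger's ease of configuration is easy to see from a minimal Apache virtual host; the sketch below assumes a Passenger 2.x-era install, and the module path, hostname, and application path are hypothetical:

```apache
# Hypothetical Apache 2.2/prefork + Passenger setup (paths are assumptions)
LoadModule passenger_module /usr/lib/passenger/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/passenger
PassengerRuby /usr/bin/ruby

<VirtualHost *:80>
    ServerName myapp.example.com
    # Pointing DocumentRoot at the Rails public/ directory is all
    # Passenger needs to detect and spawn the application.
    DocumentRoot /var/www/myapp/public
    RailsEnv production
</VirtualHost>
```

There are no backend processes to start or ports to wire up by hand, which is the configuration simplicity the conclusion refers to.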
HAProxy + Mongrel with the connection limit set to 1 is a stable, conservative deployment method. Although its performance here is not outstanding, its stability is excellent.
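The "connection limit of 1" refers to capping each Mongrel at a single in-flight request, so the single-threaded Rails dispatcher never queues requests internally. A minimal HAProxy sketch of that idea, with illustrative addresses and ports:

```
# Hypothetical HAProxy front end for two Mongrels (ports are assumptions)
listen rails_cluster 0.0.0.0:80
    balance roundrobin
    # maxconn 1 sends each Mongrel at most one request at a time;
    # excess requests queue in HAProxy instead of in Mongrel.
    server mongrel1 127.0.0.1:8000 maxconn 1 check
    server mongrel2 127.0.0.1:8001 maxconn 1 check
```

Queueing in the proxy rather than in the backend is what makes this configuration so predictable under load, at some cost in peak throughput.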
Finally, all three nginx-based solutions land at the bottom of the list. Lacking certain advanced features, nginx acting alone as a reverse-proxy load balancer is not well suited to deploying Rails applications:
- It cannot limit the number of connections to a backend server, so under a flood of requests Mongrel wastes time on context switching and lock contention.
- It cannot establish persistent connections to backend servers, incurring extra overhead for opening and closing connections on every request.
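For context, this is roughly what the nginx → Mongrel setup of that era looked like; server names and ports below are illustrative. Note that the upstream block offers no per-server connection cap, and nginx of that period proxied to backends over HTTP/1.0 without keepalive, which are exactly the two weaknesses listed above:

```nginx
# Typical nginx reverse proxy to a Mongrel cluster (ports are assumptions)
upstream mongrels {
    # No way to say "at most N connections" per backend here,
    # so every Mongrel sees the full burst of concurrent requests.
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    listen 80;
    location / {
        # Each proxied request opens and closes a fresh backend connection.
        proxy_pass http://mongrels;
    }
}
```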