Improve Web service response time by optimizing log printing


Original url: http://blog.csdn.net/u012859646/article/details/16840301

[2012.3] Server-side optimization (leaving front-end factors aside) is something most backend developers can say a few words about. There are many metrics to optimize: on the server side alone there are at least concurrency, response time, processing time, throughput, and machine load, and these factors interact, so improving one metric can make another worse. For example, if you cut the processing time of a request by introducing a more complex, CPU-intensive algorithm, the load on the machine may rise and throughput may drop. Optimizations can also be divided into user-experience ones and server-side ones, that is, whether a change makes the user feel faster (the page opens sooner) or lets the machine process requests faster. These goals are not equivalent, so optimization is always a matter of trade-offs.
This article is nothing more than a small tip, a few random thoughts, with no data or experiments behind it; at best it is a piece of "technical prose", and it probably rambles a bit, so please read it as such.
In server-side development, logging is unavoidable. The usual practice is to write logs to the local file system, which means writing files, and writing files means file I/O, which is a slow operation, so every log line costs some time. This is especially true for a service that has just gone online, where logging is often left at the debug level; ten log lines per request is not unusual, so logging alone can take several milliseconds or even tens of milliseconds per request. With little free memory and high load, things get worse. Here is the general workflow of a web server, in an extremely simplified form:

  ParseRequest();
  LogSomething0();
  RequestBackServer1();
  LogSomething1();
  RequestBackServer2();
  LogSomething2();
  RenderPage();
  LogSomething3();
  DisplayPage();
  LogSomething4();

This is the processing flow of a web service; for a pure business-logic module it would not look much different, except that ParseRequest handles the incoming request packet while DisplayPage sends the reply packet back to the client. Regardless of the actual business processing time, the client cannot get a response before at least the first four log calls (LogSomething0 through LogSomething3) have finished. If we call the total processing time of a request M and the time the client waits for a response N, then, ignoring network transmission time, M >= N. From the client's point of view we naturally want N to be as small as possible, so that the user "feels" the service is faster.
To shrink N, we can defer the log I/O so that it happens after the client has already received its response. In the example above, that means doing the actual file writes after DisplayPage, which removes the log I/O time from N entirely.
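To make that concrete, the modified flow might look roughly like the sketch below; FlushLogs is an invented name for the single deferred write, not something from the original text:

  ParseRequest();
  LogSomething0();      // now only buffers in memory (see below)
  RequestBackServer1();
  LogSomething1();
  RequestBackServer2();
  LogSomething2();
  RenderPage();
  LogSomething3();
  DisplayPage();        // the client has its response at this point
  LogSomething4();
  FlushLogs();          // the only real file I/O, after the response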
The earlier LogSomething0 through LogSomething3 only need a change of implementation: inside each call, "remember but don't print" (that is, don't write to the file yet). Concretely, append the content of each log line, as a string, to an in-memory queue; once the response has been sent, write all of the buffered lines to the file in one go (which in theory also improves I/O efficiency, since it is a single batched write). In a language that supports OOP, the final flush can even live in an object's destructor, which saves the explicit log-flush call at the end.
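Here is a minimal C++ sketch of that idea; the class name DeferredLogger and its interface are mine, not something specified in the article:

  // Buffer log lines in memory during the request and write them to the
  // file in one batch after the response has been sent.
  #include <cstdio>
  #include <string>
  #include <vector>

  class DeferredLogger {
  public:
      explicit DeferredLogger(const std::string& path) : path_(path) {}

      // Called where LogSomething0..4 used to write to the file:
      // only remember the line in memory.
      void Log(const std::string& line) { lines_.push_back(line); }

      // One write for the whole request, done after the response has gone
      // out. The destructor calls it, so an explicit call is optional.
      void Flush() {
          if (lines_.empty()) return;
          if (FILE* fp = std::fopen(path_.c_str(), "a")) {
              for (const std::string& line : lines_)
                  std::fprintf(fp, "%s\n", line.c_str());
              std::fclose(fp);
          }
          lines_.clear();
      }

      ~DeferredLogger() { Flush(); }

  private:
      std::string path_;
      std::vector<std::string> lines_;
  };

A request handler would create one DeferredLogger on the stack, call Log() wherever it used to print, and either call Flush() after the response is sent or simply let the destructor do it when the handler returns.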
That is about all there is to this little tip. What it optimizes is not the processing time but the response time, which gives the user a "faster" experience. Implementing such a logging library in a scripting language is very simple, and the effect is predictable.
One last question. Some of you develop services in C++ or similar languages, where a serious error may cause a coredump before the logs are printed. With the original approach the logs might (only might) still be readable from the log file; with deferred logging they may not be there at all. The fix is to keep the not-yet-written log lines in a global buffer: as long as that global variable has not been corrupted, you can still recover the pending logs from the core dump with that trusty tool, gdb.
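A sketch of that global-buffer variant, again with invented names (g_log_buffer, RememberLog); a plain fixed-size char array is used here because it stays easy to inspect from a core dump:

  #include <cstddef>
  #include <cstring>

  // Pending log text lives in a global, zero-initialized buffer.
  char g_log_buffer[64 * 1024];
  std::size_t g_log_used = 0;

  void RememberLog(const char* line) {
      std::size_t len = std::strlen(line);
      // Drop the line if the buffer is full (sketch-level handling);
      // the last byte is never written, so the buffer stays a C string.
      if (g_log_used + len + 1 >= sizeof(g_log_buffer)) return;
      std::memcpy(g_log_buffer + g_log_used, line, len);
      g_log_buffer[g_log_used + len] = '\n';
      g_log_used += len + 1;
  }

If the process crashes before the deferred write happens, loading the binary and core file into gdb and running print g_log_buffer will show whatever text had been buffered, provided the buffer itself was not overwritten.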

2012.3--end--
