Nginx Log real-time monitoring system based on Storm

Abstract: Storm, hailed as the most popular stream processing framework, makes up for many of Hadoop's shortcomings. It is commonly used for real-time analytics, online machine learning, continuous computation, distributed RPC, ETL, and similar workloads. This article introduces a real-time Nginx log monitoring system built on Storm.

[Editor's note] Hadoop's drawbacks are as stark as its virtues: high latency, slow response, and complex operation and maintenance. They are widely criticized, but demand drives creation. With Hadoop's dominance in big data essentially established, many open source projects were created precisely to make up for its lack of real-time capability, and Storm emerged at this moment. Storm is a free, open source, distributed, highly fault-tolerant real-time computation system. It makes continuous stream computation easy, meeting the real-time requirements that Hadoop's batch processing cannot.

The following is the original article.

Background

UAE (UC App Engine) is an internal PaaS platform at UC, with an overall architecture somewhat similar to Cloud Foundry, including:

- Rapid deployment: supports Node.js, Play!, PHP, and other frameworks
- Information transparency: operations processes, system state, business status
- Gray-scale trial and error: gray release by IP or by region
- Basic services: key-value storage, MySQL high availability, image platform, etc.

UAE itself is not the protagonist here, so we will not introduce it in detail.

Hundreds of web applications run on UAE, and all requests are routed through it; the Nginx access logs amount to terabytes per day. The problem is how to monitor, in real time, each business's access trends, ad data, page load time, access quality, custom reports, and exception alarms.

Hadoop can meet the statistics requirements, but not second-level real-time latency. Spark Streaming felt somewhat like overkill, and we had no engineering experience with Spark. Writing our own distributed program would make scheduling troublesome and would have to account for scaling and message flow.

In the end our technical choice was Storm: relatively lightweight, flexible, convenient for message passing, and easy to scale.

In addition, because UC has many clusters, transferring logs across clusters is also a fairly large problem.

Technical preparation

Cardinality counting

In distributed big data computation, PV (page views) can simply be added together, but UV (unique visitors) cannot.

With distributed computation, hundreds of businesses, and hundreds of thousands of URLs being counted for UV at the same time — and with the statistics sliced by time (per minute / merged every 5 minutes / merged hourly / merged daily) — the memory consumption of exact counting is unacceptable.

This is where the power of probability shows. As "Probabilistic Data Structures for Web Analytics and Data Mining" illustrates, the memory used by an exact hash-table UV count and by a cardinality estimate are not in the same order of magnitude. Cardinality counting lets you merge UVs with minimal memory consumption, and the error stays entirely within acceptable limits.

You can start with LogLog Counting to understand the premise of uniform hashing; the derivation of the rough estimate itself can be skipped.
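To make the idea concrete, here is a minimal, illustrative LogLog-style sketch in pure Java (the production system used the stream-lib library instead, so treat this as a teaching sketch, not the actual implementation). Each bucket records the maximum "rank" (leading-zero run) seen; per-period sketches can then be merged bucket-wise, which is exactly what makes per-minute UVs combinable into hourly and daily UVs.

```java
// Minimal LogLog cardinality sketch (illustrative only).
// Estimate = alpha * m * 2^(mean rank), relative error roughly 1.30/sqrt(m).
public class LogLogSketch {
    private static final double ALPHA = 0.39701; // LogLog bias-correction constant
    private final int p;          // log2 of the bucket count
    private final int[] buckets;  // max rank observed per bucket

    public LogLogSketch(int p) {
        this.p = p;
        this.buckets = new int[1 << p];
    }

    // Mix the raw value into a well-distributed 64-bit hash (splitmix64 finalizer).
    private static long mix(long x) {
        x = (x ^ (x >>> 30)) * 0xbf58476d1ce4e5b9L;
        x = (x ^ (x >>> 27)) * 0x94d049bb133111ebL;
        return x ^ (x >>> 31);
    }

    public void offer(long value) {
        long h = mix(value);
        int bucket = (int) (h >>> (64 - p));              // top p bits pick the bucket
        int rank = Long.numberOfLeadingZeros(h << p) + 1; // rank of the remaining bits
        if (rank > buckets[bucket]) buckets[bucket] = rank;
    }

    // Merging two sketches is a bucket-wise max: this is how per-period UVs combine.
    public void merge(LogLogSketch other) {
        for (int i = 0; i < buckets.length; i++)
            buckets[i] = Math.max(buckets[i], other.buckets[i]);
    }

    public long cardinality() {
        double sum = 0;
        for (int b : buckets) sum += b;
        int m = buckets.length;
        return (long) (ALPHA * m * Math.pow(2, sum / m));
    }
}
```

Note that merging is lossless with respect to the estimate: merging the per-minute sketches gives the same answer as one sketch fed all the data, which is why the error does not accumulate across merges.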

The specific algorithm used is Adaptive Counting, via the implementation in stream-2.7.0.jar (the stream-lib library).

Real-time log transfer

Real-time computation requires second-level real-time log transfer; an added benefit is avoiding the network congestion caused by transferring in batches.

Real-time log transfer is a lightweight log transfer tool that already existed in UAE; it is mature and stable, so we used it directly. It consists of a client (MCA) and a server (MCS).

The client listens for changes to the log files on each cluster and transmits them to the specified machines in the Storm cluster, where they are stored as ordinary log files.
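MCA/MCS is a UC-internal tool, so the following is only a hypothetical sketch of the core idea of the client side: remember a byte offset into each log file and, on every poll, ship only the lines appended since the last poll (resetting the offset if the file was rotated).

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a log tailer: the real MCA is an internal tool,
// this only illustrates the offset-tracking idea behind incremental transfer.
public class LogTailer {
    private final Path file;
    private long offset; // byte position of the last line we have already shipped

    public LogTailer(Path file) {
        this.file = file;
    }

    // Return any complete lines appended to the file since the last poll.
    public List<String> poll() throws IOException {
        List<String> lines = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            if (raf.length() < offset) offset = 0; // file was rotated or truncated
            raf.seek(offset);
            String line;
            while ((line = raf.readLine()) != null) {
                lines.add(line);
            }
            offset = raf.getFilePointer();
        }
        return lines;
    }
}
```

A real transfer agent would additionally batch, compress, and send the lines over the network, and handle partially written trailing lines; those parts are omitted here.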

We tuned the transfer strategy so that the log files on each Storm machine are roughly the same size, so each spout reads only local data.

Data source queues

We did not use the queues commonly paired with Storm, such as Kafka or MetaQ, mainly because they are too heavyweight.

fqueue is a lightweight queue speaking the memcached protocol: it turns ordinary log files into a memcached service, so a Storm spout can read them directly over the memcached protocol.
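To show what "reading over the memcached protocol" looks like at the wire level, here is a sketch of parsing a memcached text-protocol `get` response, as a spout polling fqueue might do. The key name and fqueue's exact conventions are internal details not given in the article; only the protocol shape below is standard memcached.

```java
// Sketch: parse a memcached text-protocol "get" response. Shape:
//   VALUE <key> <flags> <bytes>\r\n<data block>\r\nEND\r\n
// An empty queue answers with just "END\r\n".
public class MemcachedGetResponse {
    // Return the data block of the first VALUE line, or null if the response is empty.
    public static String parse(String raw) {
        if (!raw.startsWith("VALUE ")) return null; // "END\r\n" -> nothing queued
        int headerEnd = raw.indexOf("\r\n");
        String[] header = raw.substring(0, headerEnd).split(" ");
        int bytes = Integer.parseInt(header[3]);    // declared payload length
        return raw.substring(headerEnd + 2, headerEnd + 2 + bytes);
    }
}
```

In practice a spout would keep a socket open, send `get <key>\r\n`, parse the response like this, and emit the payload (one log line) as a tuple; any existing memcached client library could be used instead of hand-rolled parsing.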

This data source is simple but does not support replay: once a record is fetched it is removed, so if a tuple fails or times out, that data is lost.

It is relatively lightweight: it reads from local files with only a thin caching layer rather than being a pure in-memory queue, so its performance bottleneck is disk I/O and its per-second throughput matches the disk read speed. For this system that is sufficient; we plan to replace it with a pure in-memory queue later.

Architecture

With the technical groundwork above, we can get a user's log within a few seconds of the user's visit.


The overall structure is fairly simple. The reason there are two kinds of computation bolts is to keep the computation evenly distributed: business volumes vary enormously, and if we did fieldsGrouping only on the business ID, computational resources would be unbalanced.

- The spout normalizes each raw log line and distributes it by URL (fieldsGrouping, to keep the load on each server balanced) to the corresponding Stat_bolt.
- Stat_bolt is the main computation bolt. It organizes and computes per-URL statistics for each business: PV, UV, total response time, back-end response time, HTTP status code counts, URL ranking, traffic statistics, and so on.
- merge_bolt merges the data of each business, such as PV and UV counts. The UV merging, of course, uses the cardinality counting described above.
- A simple coordinator class (stream id "Coordinator") handles time coordination (batch segmentation), checking task completion, and timeout handling. The principle is similar to Storm's transactional topology.
- We implemented a scheduler that obtains parameters through an API to dynamically adjust the distribution of spouts and bolts across servers, in order to allocate server resources flexibly.
- Smooth topology upgrades are supported: during an upgrade, the new topology and the old topology run at the same time and coordinate the switchover time; once the new topology takes over fqueue, the old topology is killed.
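The load-balancing argument for grouping on URL rather than business ID can be illustrated with a small simulation. Storm's fieldsGrouping partitions tuples by hashing the grouping field; modeling it here as `hashCode() mod tasks` is an approximation, and the business/URL volumes below are made-up numbers, not data from the article.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: fields-grouping assigns a tuple to a task by hashing
// the grouping field. Grouping on a skewed field (business id) overloads one
// task; grouping on the URL spreads the same traffic nearly evenly.
public class GroupingBalance {
    static int task(String key, int tasks) {
        return Math.floorMod(key.hashCode(), tasks); // simplified partitioner
    }

    // Max tuples landing on any one of `tasks` tasks for the given key -> count map.
    public static int maxLoad(Map<String, Integer> tuplesPerKey, int tasks) {
        int[] load = new int[tasks];
        tuplesPerKey.forEach((key, count) -> load[task(key, tasks)] += count);
        int max = 0;
        for (int l : load) max = Math.max(max, l);
        return max;
    }

    public static void main(String[] args) {
        int tasks = 4;
        // Hypothetical skew: one hot business (90k requests over 100 URLs), two small ones.
        Map<String, Integer> byBiz = new HashMap<>();
        byBiz.put("bizA", 90000);
        byBiz.put("bizB", 5000);
        byBiz.put("bizC", 5000);
        Map<String, Integer> byUrl = new HashMap<>();
        for (int i = 0; i < 100; i++) byUrl.put("bizA/url" + i, 900);
        for (int i = 0; i < 50; i++)  byUrl.put("bizB/url" + i, 100);
        for (int i = 0; i < 50; i++)  byUrl.put("bizC/url" + i, 100);
        System.out.println("grouped by business id: max task load = " + maxLoad(byBiz, tasks));
        System.out.println("grouped by URL:         max task load = " + maxLoad(byUrl, tasks));
    }
}
```

Grouping by business ID pins all 90k hot-business tuples onto a single task, while grouping by URL spreads them across all tasks; the merge_bolt then re-aggregates per business afterwards.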

Points to note:

- Keep the Storm machines in the same rack as far as possible, so that cluster bandwidth is not affected.
- Our Nginx logs are split by the hour. If the split is not accurate in time, you can see obvious data fluctuations around minute 00. So try to use an Nginx module to cut the logs; sending a signal from a crontab introduces a delay. A roughly 10-second delay in cutting the log is no problem for large-scale aggregate statistics, but the fluctuation is very obvious in second-level statistics.
- Too small a heap causes workers to be killed forcibly, so configure the -Xmx parameter.
- Custom static resources: static resource filtering options; filter specific static resources by Content-Type or suffix.
- Resource merging: URL merging, e.g. for RESTful resources, which are easier to display once merged.
- Dimensions and metrics: ANTLR v3 does the syntactic and lexical analysis for custom dimensions and metrics; alarms will later support custom expressions as well.
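One way to cut the log on the hour inside Nginx itself, rather than via a crontab signal, is to embed the hour in the log file name so Nginx switches files exactly at minute 00. This is a hedged sketch using standard `map` and `access_log` variable support; paths and variable names are assumptions, not the article's actual configuration.

```nginx
# Sketch: derive the current hour from $time_iso8601 and log to an hourly file.
map $time_iso8601 $log_hour {
    default                                        "unknown";
    "~^(?<ymd>\d{4}-\d{2}-\d{2})T(?<h>\d{2})"      "$ymd-$h";
}
server {
    # A variable in the path makes Nginx switch files as $log_hour changes;
    # open_log_file_cache keeps this from reopening the file on every request.
    open_log_file_cache max=8 inactive=70s valid=10s;
    access_log /data/logs/access-$log_hour.log main;
}
```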

Other

We also implemented monitoring in other areas: business process-level monitoring (CPU/memory/port); monitoring of services the business depends on, such as MySQL and memcached; and machine monitoring covering disk/memory/IO/kernel parameters/language environment/environment variables/compilation environment.

Original link: Nginx Log real-time monitoring system based on Storm (Editor: Wei Wei)
