3. Offline Data Analysis Process Introduction
Note: This section describes the macro concepts and processing flow of a data analysis system, giving a preliminary sense of how Hadoop and related frameworks are applied. There is no need to pay much attention to code details yet.
A widely used data analysis system: "Web Log Data Mining"
3.1 Requirement Analysis
3.1.1 Case Name
"Website or APP click stream Log Data Mining System ".
3.1.2 Case Requirement Description
"Web click stream log" contains important information about website operations. Through log analysis, we can know the website's access traffic, which webpage has the most visitors and which webpage has the most valuable value, AD conversion rate, visitor source information, and visitor terminal information.
3.1.3 Data Source
The data in this case consists mainly of user click events.
Collection method: embed a JS program in each page that binds events to the page's tags. Whenever a user clicks a tag or moves the mouse over it, an Ajax request is fired to a backend servlet, which records the event information with log4j. This produces an ever-growing log file on the web server (such as Nginx or Tomcat); a sketch of such a servlet appears after the sample log line below.
A sample log line:
58.215.204.118 - - [18/Sep/2013:06:51:35 +0000] "GET /wp-includes/js/jquery.js?ver=1.10.2 HTTP/1.1" 304 0 "http://blog.fens.me/nodejs-socketio-chat/" "Mozilla/5.0 (Windows NT 5.1; rv:23.0) Gecko/20100101 Firefox/23.0"
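As a rough illustration of the backend half of this collection mechanism, here is a minimal sketch of a logging servlet. The class name, request parameters, and log-record layout are assumptions for illustration, not the original course code; it presumes the javax.servlet API and log4j 1.x are on the classpath.

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.log4j.Logger;

// Receives the Ajax request fired by the embedded JS and appends one
// event record to the log file via log4j.
public class ClickLogServlet extends HttpServlet {
    private static final Logger LOG = Logger.getLogger(ClickLogServlet.class);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Event details are passed as request parameters by the JS tracker
        // (parameter names here are hypothetical).
        String event = req.getParameter("event"); // e.g. "click" or "mouseover"
        String page  = req.getParameter("page");  // page URL
        String tag   = req.getParameter("tag");   // which element was touched

        // One line per event; the file appender and layout are configured
        // in log4j.properties.
        LOG.info(req.getRemoteAddr() + " " + event + " " + page + " " + tag);

        resp.setStatus(HttpServletResponse.SC_NO_CONTENT); // nothing to return
    }
}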
3.2 Data Processing Flow
3.2.1 Flowchart Analysis
This case is very similar to a typical BI system in its overall flow. However, because the premise of this case is processing massive data, the technology used at each step is completely different from that of traditional BI. The later courses will explain them one by one:
1) Data collection: custom-developed collection programs, or the open-source framework Flume
2) Data preprocessing: custom-developed MapReduce programs running on a Hadoop cluster
3) Data warehouse technology: Hive, built on Hadoop
4) Data export: Sqoop, the Hadoop-based data import/export tool
5) Data visualization: custom-developed web programs, or products such as Kettle
6) Workflow scheduling for the whole process: Oozie from the Hadoop ecosystem, or other similar open-source products
3.2.2 Project Technical Architecture
3.2.3 Project-Related Demonstrations (a perceptual first look)
A) Running a MapReduce program
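Below is a minimal sketch of such a preprocessing/statistics job, assuming the access-log format shown in section 3.1.3. The class names and the page-view metric are illustrative, not the original course code: the mapper extracts the requested URL from each log line, and the reducer totals the hits per URL.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PageViewCount {

    // Mapper: pull the requested URL out of each log line, emit (url, 1).
    public static class PvMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text url = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // The request section is quoted: "GET /path HTTP/1.1"
            String[] parts = value.toString().split("\"");
            if (parts.length < 2) return;            // skip malformed lines
            String[] request = parts[1].split(" ");
            if (request.length < 2) return;
            url.set(request[1]);
            context.write(url, ONE);
        }
    }

    // Reducer: sum the counts per URL.
    public static class PvReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) sum += v.get();
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "pv-count");
        job.setJarByClass(PageViewCount.class);
        job.setMapperClass(PvMapper.class);
        job.setReducerClass(PvReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar, a job like this would be submitted to the cluster with the hadoop jar command.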
B) Querying data in Hive
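One common way to query Hive programmatically is through the standard HiveServer2 JDBC driver, sketched below. The host, port, database, and table/column names (a uv table partitioned by dt, matching the warehouse path in the Sqoop command under C) are assumptions for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryDemo {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer2 JDBC driver; URL details are assumptions.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/weblogdb", "hive", "");
             Statement stmt = conn.createStatement();
             // Hypothetical query: daily unique visitors from a uv table
             // partitioned by date.
             ResultSet rs = stmt.executeQuery(
                     "SELECT dt, COUNT(DISTINCT ip) FROM uv GROUP BY dt")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}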
C) Exporting the statistical results to MySQL
./sqoop export --connect jdbc:mysql://localhost:3306/weblogdb --username root --password root --table t_display_xx --export-dir /user/hive/warehouse/uv/dt=
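In this command, --connect, --username, and --password identify the target MySQL database; --table names the MySQL table that receives the rows; and --export-dir points to the HDFS directory (here a date-partitioned Hive warehouse path) whose files Sqoop reads and exports.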
3.3 Final Project Effect
After the complete data processing pipeline has run, reports for the various statistical indicators are produced periodically. In production practice, the report data must be displayed visually; this case uses a web program for data visualization, and the resulting report pages are the final visible output.