With a large volume of logs, writing them straight into Hadoop as many small files puts pressure on the NameNode, so merge before storage: combine the logs from each node into a single file and write that to HDFS. The merge runs on a regular schedule, and only the merged file is written to HDFS.
Now for the size of the data: the DNS logs come to about 200 GB, which compress down to roughly 18 GB. You could certainly process them with awk or Perl, but a single machine will never match the throughput of a distributed job.
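To make the merge-and-upload step concrete, here is a minimal sketch of what each node could run periodically (from cron, for example). The local log pattern, the merged file name and the HDFS target directory are made-up examples, not paths from the original setup; it only assumes the standard hdfs dfs -put command is available.

#!/usr/bin/env python
# Merge one node's log files into a single file and push it to HDFS,
# so the NameNode tracks one large file instead of many small ones.
# A minimal sketch: all paths below are illustrative assumptions.
import glob
import subprocess

def merge_and_upload(local_pattern, merged_path, hdfs_dir):
    # concatenate all matching local log files into one merged file
    with open(merged_path, 'wb') as out:
        for name in sorted(glob.glob(local_pattern)):
            with open(name, 'rb') as f:
                out.write(f.read())
    # upload the merged file with the standard HDFS shell
    subprocess.check_call(['hdfs', 'dfs', '-put', '-f', merged_path, hdfs_dir])

if __name__ == '__main__':
    merge_and_upload('/data/logs/dns.*.log', '/tmp/dns_merged.log', '/logs/dns/')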
The principle of Hadoop Streaming
The mapper and reducer read input from standard input, process it line by line, and write their results to standard output. The streaming utility wraps them into a MapReduce job, submits it, hands the tasks to the TaskTrackers, and monitors the job until it finishes.
So any language that can read standard input and write standard output can be used to write MapReduce jobs.
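To make that contract concrete, here is a minimal sketch of a streaming mapper for DNS logs. It assumes, purely for illustration, that the queried domain is the second whitespace-separated field of each line; nothing about the real log format is implied.

#!/usr/bin/env python
# Minimal streaming mapper: read lines from stdin, emit "domain<TAB>1" on stdout.
# The reducer then sums the counts per domain.
import sys

for line in sys.stdin:
    fields = line.split()
    if len(fields) >= 2:
        # assumed log layout: timestamp, queried domain, ... (illustrative only)
        print('%s\t%d' % (fields[1], 1))

A script like this is handed to Hadoop with the streaming jar, using its -input, -output, -mapper and -reducer options; Hadoop takes care of splitting the input, sorting and shuffling by key, and collecting the output.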
Before going any further, let's run a quick benchmark of a shell pipeline that simulates MapReduce locally, essentially cat log | mapper | sort | reducer.
The result: a 350 MB file takes about 35 seconds.
A 2 GB log file takes about 3 minutes. Part of that is down to my script: we are following the map/sort/reduce pattern rather than handing the whole file directly to awk or gawk in the shell.
awk really is impressively fast. I like using awk for log processing too, but its learning curve is steeper than that of the other, simpler shell tools.
Here are two demo scripts taken from the official examples.
map.py
#!/usr/bin/env python
"""A more advanced Mapper, using Python iterators and generators."""

import sys

def read_input(file):
    for line in file:
        # split the line into words
        yield line.split()

def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_input(sys.stdin)
    for words in data:
        # write the results to STDOUT (standard output);
        # what we emit here becomes the input of the reduce step,
        # tab-delimited; the trivial word count is 1
        for word in words:
            print('%s%s%d' % (word, separator, 1))

if __name__ == "__main__":
    main()
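That demo is only the map side. For completeness, a companion reducer in the same style would look roughly like the sketch below; it is the usual word-count reducer, not necessarily identical to the second official demo.

#!/usr/bin/env python
"""A reducer that sums the counts for each key, in the same style as the mapper."""

from itertools import groupby
from operator import itemgetter
import sys

def read_mapper_output(file, separator='\t'):
    for line in file:
        yield line.rstrip().split(separator, 1)

def main(separator='\t'):
    # input comes from STDIN, already sorted by key by the Hadoop framework
    data = read_mapper_output(sys.stdin, separator=separator)
    for current_word, group in groupby(data, itemgetter(0)):
        try:
            total_count = sum(int(count) for current_word, count in group)
            print('%s%s%d' % (current_word, separator, total_count))
        except ValueError:
            # count was not a number, skip this key
            pass

if __name__ == "__main__":
    main()

Locally you can test the pair without Hadoop at all: cat log | ./map.py | sort | ./reduce.py.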