Log Monitoring System (the "ICC raid")

Preface: back in university, in the good old days, I played WoW, though never at the level of the hardcore players (I only started playing at level 80). Of all the dungeons I ran, the Icecrown Citadel (ICC) raid was the most fun to play; my undead mage was nothing special.

Initial issues to solve:
1. For the achievement~ (The current project's log monitoring uses the Linux grep command, which scans the logs once every 3 minutes. Cons: not real-time, and it pegs one CPU at 100%.)
2. I really want Frostmourne.
Apache Flume 1.6 supports Kafka as a sink out of the box:
[FLUME-2242] Flume Sink and Source for Apache Kafka
The official example is very convenient; you can run it directly. The detailed configuration can be studied later.
a1.channels = channel1
a1.sources = src-1
a1.sinks = k1

a1.sources.src-1.type = spooldir
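The excerpt breaks off after the source type. A fuller sketch of such an agent, assuming a spooling-directory source feeding Flume 1.6's built-in Kafka sink (the spool directory, topic, and broker address below are placeholder values, not from the original):

```properties
# sketch: watch a spooling directory and forward each file's lines to Kafka
a1.channels = channel1
a1.sources = src-1
a1.sinks = k1

a1.sources.src-1.type = spooldir
a1.sources.src-1.spoolDir = /var/log/app/spool
a1.sources.src-1.channels = channel1

a1.channels.channel1.type = memory
a1.channels.channel1.capacity = 10000

# Flume 1.6 Kafka sink (property names changed in later releases)
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = app-logs
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.batchSize = 100
a1.sinks.k1.channel = channel1
```

Start the agent with `flume-ng agent --conf conf --conf-file <this file> --name a1` and drop files into the spool directory to test.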
Last time, Flume + Kafka + HBase + ELK was implemented: http://www.cnblogs.com/super-d2/p/5486739.html. This time we add Storm (storm-0.9.5); a simple setup is as follows.

Install dependencies:

wget http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz
tar zxvf jdk-8u45-linux-x64.tar.gz

Then edit /etc/profile and add the following:

export JAVA_HOME=/home/dir/jdk1.8.0_45
final Logger log = LoggerFactory.getLogger(CmccKafkaSink.class);
public static final String KEY_HDR = "KEY";
public static final String TOPIC_HDR = "TOPIC";
private static final String CHARSET = "UTF-8";
private Properties kafkaProps;
private Producer producer;

Then run mvn clean install to compile and package the jar, and drop the jar into the lib directory of the Flume installation. The conf file to edit is shown below; the excerpt is cut off at this point.
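Since the conf excerpt itself is truncated, here is a sketch of what the sink section might look like for this custom sink (the package name, broker list, topic, and channel name are placeholders introduced for illustration, not from the original):

```properties
# custom Kafka sink built above; the package prefix is a placeholder
agent.sinks.k1.type = com.example.CmccKafkaSink
# broker list and topic are placeholder values
agent.sinks.k1.brokerList = localhost:9092
agent.sinks.k1.topic = cmcc-logs
agent.sinks.k1.channel = ch-1
```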
Background: the system's data volume keeps growing, and the logs can no longer be kept as simple files, since they grow too large to search and analyze. After weighing the options, Flume is used to collect the logs and deliver them to Kafka as messages. The specific configuration is given below.

# The configuration file needs to define the sources, the channels, and the sinks.
# the name of the source
agent.sources = kafkaSource
# the name of the channel; it is suggested to name it after its type
agent.channels = memoryChannel
# the name of the sink; it is suggested to name it after its target
agent.sinks = hdfsSink
# specify the channel used by the source
agent.sources.kafkaSource.channels = memoryChannel
# specify the channel used by the sink; note the property here is "channel" (singular)
agent.sinks.hdfsSink.channel = memoryChannel
# -------- kafkaSource related configuration -------------
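The excerpt stops at the kafkaSource section header. A sketch of how the remaining sections might continue under Flume 1.6 (the ZooKeeper address, topic, group id, and HDFS path are placeholders, not from the original):

```properties
# -------- kafkaSource related configuration -------------
agent.sources.kafkaSource.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSource.zookeeperConnect = localhost:2181
agent.sources.kafkaSource.topic = app-logs
agent.sources.kafkaSource.groupId = flume

# -------- memoryChannel related configuration -----------
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 10000

# -------- hdfsSink related configuration ----------------
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/flume/%Y%m%d
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
```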
Simple test project: 1. Create a new Java project; the structure is as follows:
The test class FlumeTest code is as follows:

package com.demo.flume;

import org.apache.log4j.Logger;

public class FlumeTest {

    private static final Logger LOGGER = Logger.getLogger(FlumeTest.class);

    public static void main(String[] args) throws InterruptedException {
        // The loop body is truncated in the excerpt; reconstructed here to
        // emit periodic test log lines.
        for (int i = 0; i < 100; i++) {
            LOGGER.info("flume test message " + i);
            Thread.sleep(1000);
        }
    }
}
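For these log lines to reach Flume, the project also needs a log4j.properties that wires log4j to Flume's Avro source, using the stock Log4jAppender from the flume-ng-log4jappender module. A sketch, assuming the agent's Avro source listens on localhost:41414 (matching the Avro test config elsewhere in this document):

```properties
log4j.rootLogger=INFO, flume

log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 41414
# do not crash the app if the Flume agent is down
log4j.appender.flume.UnsafeMode = true
```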
The consumer code that listens for messages from Kafka is as follows:

package com.demo.flume;
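The rest of the consumer is truncated in the excerpt. As a minimal, self-contained sketch of the configuration such a Kafka 0.8-era high-level consumer needs (all values are placeholders for illustration; the real consumer would pass these Properties to the Kafka consumer API):

```java
import java.util.Properties;

public class ConsumerConfigDemo {

    // Build the properties a Kafka 0.8-era high-level consumer expects.
    // Every value below is a placeholder, not from the original post.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // ZooKeeper quorum
        props.put("group.id", "flume-demo");              // consumer group
        props.put("auto.offset.reset", "smallest");       // start from earliest
        props.put("auto.commit.interval.ms", "1000");     // offset commit period
        return props;
    }

    public static void main(String[] args) {
        Properties p = consumerProps();
        System.out.println(p.getProperty("group.id")); // prints "flume-demo"
    }
}
```

The same Properties object would then be handed to the consumer connector, which subscribes to the topic the Flume sink writes to.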
# Flume test file
# listens via Avro RPC on port 41414 and dumps data received to the log
agent.channels = ch-1
agent.sources = src-1
agent.sinks = sink-1

agent.channels.ch-1.type = memory
agent.channels.ch-1.capacity = 10000000
agent.channels.ch-1.transactionCapacity = 1000

agent.sources.src-1.type = avro
agent.sources.src-1.channels = ch-1
agent.sources.src-1.bind = 0.0.0.0
agent.sources.src-1.port = 41414

agent.sinks.sink-1.type = logger
agent.sinks.sink-1.channel = ch-1
1. All hosts need the JDK installed, with the JDK environment variables configured.
2. All hosts need SSH installed, with passwordless access between one another.
3. Modify /etc/hosts on each host so that the machines can reach each other by hostname.
4. Install Python 2.6 or above (required by Storm).
5. ZeroMQ:
wget http://download.zeromq.org/zeromq-2.1.7.tar.gz
tar -xzf zeromq-2.1.7.tar.gz
cd zeromq-2.1.7
./configure
make
sudo make install