Flume is an excellent, if somewhat heavyweight, data acquisition component. In essence, the SQL source assembles the results of SQL queries into OpenCSV-format data; the default separator is a comma (,), and you can override some of the OpenCSV classes to change it.
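As a hedged illustration only, a flume-ng-sql-source configuration might look roughly like the sketch below. The property names follow the keedio flume-ng-sql-source README as best I recall (in particular delimiter.entry), so treat the exact keys, the JDBC URL, the credentials, and the table name as assumptions to verify against your plugin version:

# Illustrative sketch of a SQL source feeding Flume; all values are assumptions
agent.sources = sqlSource
agent.sources.sqlSource.type = org.keedio.flume.source.SQLSource
agent.sources.sqlSource.hibernate.connection.url = jdbc:mysql://127.0.0.1:3306/testdb
agent.sources.sqlSource.hibernate.connection.user = tester
agent.sources.sqlSource.hibernate.connection.password = secret
agent.sources.sqlSource.table = web_logs
agent.sources.sqlSource.run.query.delay = 10000
agent.sources.sqlSource.status.file.path = /var/lib/flume
agent.sources.sqlSource.status.file.name = sqlSource.status
# Assumed property for replacing the default comma separator; verify for your version
agent.sources.sqlSource.delimiter.entry = |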
1. Download
[root@hadoop0 bigdata]# wget http://apache.fayea.com/flume/1.6.0/apache-
Flume and Kafka example (Kafka as the Flume sink, output to a Kafka topic). To prepare:
$ sudo mkdir -p /flume/web_spooldir
$ sudo chmod a+w -R /flume
Then edit a Flume configuration file:
$ cat /home/tester/flafka/spooldir_kafka.conf
# Name the components in this agent
agent1.sources = we
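The configuration file is cut off above. As a hedged reconstruction, a spooling-directory-to-Kafka agent typically looks something like the sketch below; the component names (weblogs, memchannel, kafkasink) and the broker/topic values are assumptions, and the sink keys shown (brokerList/topic) are the Flume 1.6-era KafkaSink style, which changed in later releases:

# spooldir_kafka.conf -- illustrative sketch, not the original file
agent1.sources = weblogs
agent1.channels = memchannel
agent1.sinks = kafkasink

# Watch /flume/web_spooldir for completed log files
agent1.sources.weblogs.type = spooldir
agent1.sources.weblogs.spoolDir = /flume/web_spooldir
agent1.sources.weblogs.channels = memchannel

# Buffer events in memory between source and sink
agent1.channels.memchannel.type = memory
agent1.channels.memchannel.capacity = 10000

# Deliver each event to a Kafka topic
agent1.sinks.kafkasink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafkasink.topic = weblogs
agent1.sinks.kafkasink.brokerList = localhost:9092
agent1.sinks.kafkasink.channel = memchannel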
support), and exec (command execution); our system currently uses exec for log capture. Flume data recipients can be the console, text (file), DFS (HDFS file), RPC (Thrift-RPC), syslogtcp (TCP syslog log system), and so on; in our system the data is received by Kafka. Flume version: 1.4.0
Original link: Kafka in Action: Flume to Kafka. 1. Overview. The previous posts introduced the overall Kafka project development process; today we share how Kafka gets its data, that is, how data is produced into Kafka. Here are the topics to share today:
Data sources
Flume: Flume supports customizing the various data senders in a logging system to collect data, and it also provides simple in-flight processing of the data and the ability to write it to various (customizable) data recipients. Typical Flume architecture, with its data sources and output modes: Flume can collect data from sources such as the console, RPC (Thrift-RPC), text (file), tail (UNIX tail), syslog (the syslog log system, supporting both TCP and UDP modes), and exec (command execution); our system currently uses the exec method for log capture. Flume data recipients can be the console, text (file), DFS (HDFS file), RPC (Thrift-RPC), syslogtcp (TCP syslog log system), and so on; in our system the data is received by Kafka. Flume download and documentation: h
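Since the text repeatedly mentions using the exec source for log capture, here is a minimal hedged sketch of that pattern; the agent/component names and the log path are assumptions for illustration:

# Exec source: run `tail -F` and turn each output line into a Flume event
a1.sources = r1
a1.channels = c1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app/access.log
a1.sources.r1.channels = c1
a1.channels.c1.type = memory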
1. Background information
Many of the company's platforms generate a large number of logs (typically streaming data, such as search-engine PV, queries, and so on). Processing these logs requires a specific logging system, which in general needs the following characteristics:
(1) construct a bridge between the application system and the analysis system, and decouple them from each other;
(2) support both near-real-time online analysis systems and offline analysis systems such as Hadoop;
(3) with high scalability.
Use Flume + Kafka + Storm to build a real-time log analysis system. This article covers only the combination of Flume and Kafka; for the combination of
Flume can handle changes to the topology without losing data and can tolerate periodic reconfiguration due to failover. But if you change the topology every day, Flume will not work well, because reconfiguration incurs overhead. In short, two points: (1) the data is generated on a regular basis; (2) the network topology is relatively stable. Kafka,
localhost:2181
__consumer_offsets
test
7. Test sending data: # bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test, then type: My test
8. Test consuming the message: # bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Installing and deploying Flume. Flume download: download. Apache Flume is distributed under the Apache License, Version 2.0.
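For context, the topic listing at the start of this fragment would have come from commands along these lines (the replication and partition values are assumptions; adjust for your cluster):

# Create a test topic (ZooKeeper-based syntax used by Kafka 0.x)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# List topics; this produces output like the __consumer_offsets / test lines above
bin/kafka-topics.sh --list --zookeeper localhost:2181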
Implementation Architecture
One implementation architecture for this scenario is shown in the following illustration:
3.1 Analysis of the producer layer
Services within the PaaS platform are assumed to be deployed inside Docker containers, so to meet the non-functional requirements, a separate process is responsible for collecting logs, thereby avoiding any intrusion into the service frameworks and processes. Flume NG is used for log collection; this open source component is very powerful
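To make the producer-layer idea concrete, here is a hedged sketch of a separate Flume agent tailing a container's log file from outside the service process. The TAILDIR source shown requires Flume 1.7+, and the Docker JSON-log path pattern, the topic name, and the component names are all assumptions:

# Standalone log-collector agent: tail container logs, forward to Kafka
producer.sources = dockerlogs
producer.channels = c1
producer.sinks = k1

producer.sources.dockerlogs.type = TAILDIR
producer.sources.dockerlogs.positionFile = /var/lib/flume/taildir_position.json
producer.sources.dockerlogs.filegroups = f1
producer.sources.dockerlogs.filegroups.f1 = /var/lib/docker/containers/.*/.*-json.log
producer.sources.dockerlogs.channels = c1

producer.channels.c1.type = memory

producer.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
producer.sinks.k1.kafka.topic = service_logs
producer.sinks.k1.kafka.bootstrap.servers = localhost:9092
producer.sinks.k1.channel = c1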
The previous post described how to produce data with a Thrift source; today we describe how to consume data with a Kafka sink. In fact, the Kafka sink is already set up in the Flume configuration file:
agent1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafkaSink.topic = TRAFFIC_LOG
agent1.sinks.kafkaSink.brokerList = 10.208.129
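Those sink properties only define the sink itself; a working agent also needs the source and channel wired up. A hedged sketch of the surrounding configuration, where the Thrift source port and the channel name are assumptions, might be:

agent1.sources = thriftSource
agent1.channels = memChannel
agent1.sinks = kafkaSink

# Thrift source receiving events from the upstream Thrift clients
agent1.sources.thriftSource.type = thrift
agent1.sources.thriftSource.bind = 0.0.0.0
agent1.sources.thriftSource.port = 4141
agent1.sources.thriftSource.channels = memChannel

agent1.channels.memChannel.type = memory

# Bind the Kafka sink shown above to the same channel
agent1.sinks.kafkaSink.channel = memChannel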
1. Development environment
1.1. Package downloads
1.1.1. JDK: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html, installed to the D:\GreenSoftware\Java\Java8X64\jdk1.8.0_91 directory
1.1.2. Maven: https://maven.apache.org/download.cgi, unzipped to the D:\GreenSoftware\apache-maven-3.3.9 directory
1.1.3. Scala: https://www.scala-lang.org/download/, unzipped to the D:\GreenSoftware\Java\scala-2.12.6 directory
1.1.4. Thrift: http://thrift.apac
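After installing, it is worth confirming each tool is on the PATH. These standard version checks are a sketch; the expected version strings are assumptions based on the directories above:

java -version    # expect 1.8.0_91
mvn -v           # expect Apache Maven 3.3.9
scala -version   # expect Scala 2.12.6
thrift -version  # prints the installed Thrift compiler version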
Flume: used to collect logs and transfer them to Kafka
Kafka: acts as a cache, storing the logs from Flume
ES: the storage medium that stores the logs
Logstash: does the actual filtering of the logs
Flume deployment
Get the installation package and unzip it:
wget http://10.80.7.177/install_package/apache-flume-1.7.0-bin.tar.gz
tar zxf apache-
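One way to realize the "Kafka as a cache" role in Flume 1.7 is the Kafka channel, which buffers events durably in a Kafka topic between source and sink. This is a hedged sketch; the broker address and topic name are assumptions:

# Kafka channel: events are persisted in a Kafka topic instead of memory
a1.channels = kc
a1.channels.kc.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.kc.kafka.bootstrap.servers = localhost:9092
a1.channels.kc.kafka.topic = flume-buffer
# Write raw bytes rather than Flume-event framing, so downstream
# consumers such as Logstash can read the messages directly
a1.channels.kc.parseAsFlumeEvent = false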
Comparison of the Scribe, Chukwa, Kafka, and Flume logging systems
1. Background information
Many of a company's platforms generate a large number of logs per day (typically streaming data, such as search-engine PV and queries). Processing these logs requires a specific logging system; in general, these systems need to have the following characteristics: (1) build a bridge between the application system and the analysis system, and decouple them from each other;
Deployment Readiness
Configure the log collection system (Flume + Kafka); the versions are listed below, with sample download commands after the list:
apache-flume-1.8.0-bin.tar.gz
kafka_2.11-0.10.2.0.tgz
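A hedged way to fetch exactly these versions is from the Apache archive; the paths below follow the standard archive layout but should be verified before use:

wget https://archive.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz
wget https://archive.apache.org/dist/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz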
Suppose the Ubuntu system environment is deployed on three worker nodes:
192.168.0.2
192.168.0.3
192.168.0.4
Flume Configuration Instructions
Suppose Flume's working directory is in /us
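Since this section sets up three nodes, a common layout (a hedged assumption, not stated in the source) is a fan-in topology: agents on 192.168.0.3 and 192.168.0.4 forward events over Avro to a collector on 192.168.0.2, which publishes to Kafka. A minimal sketch of the collector side, with all component names, the port, and the topic assumed:

# Collector agent on 192.168.0.2: receive Avro events, publish to Kafka
collector.sources = avroIn
collector.channels = c1
collector.sinks = toKafka

collector.sources.avroIn.type = avro
collector.sources.avroIn.bind = 192.168.0.2
collector.sources.avroIn.port = 4545
collector.sources.avroIn.channels = c1

collector.channels.c1.type = memory

collector.sinks.toKafka.type = org.apache.flume.sink.kafka.KafkaSink
collector.sinks.toKafka.kafka.bootstrap.servers = 192.168.0.2:9092
collector.sinks.toKafka.kafka.topic = collected_logs
collector.sinks.toKafka.channel = c1

The leaf agents on the other two nodes would then each configure an avro sink pointing at 192.168.0.2:4545.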