Big Data Architecture: Flume


1. Flume is a distributed, reliable, and highly available system for collecting and aggregating large volumes of log data. It supports customizable data senders for collection, can perform simple processing on the data in flight, and writes it to a variety of (customizable) data recipients.

2. An independent Flume process is called an agent. Each agent contains three components: Source, Channel, and Sink.

Flume infrastructure: a single Flume agent can capture data directly from one node.
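As a sketch of such a single-agent setup, a minimal Flume properties file (the agent name `a1` and component names `r1`, `c1`, `k1` here are hypothetical) wiring a netcat source through a memory channel to a logger sink might look like:

```properties
# Name the components of agent "a1"
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Netcat source: listens for lines of text on a TCP port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Memory channel: in-memory event buffer
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Logger sink: logs events at INFO level (useful for testing)
a1.sinks.k1.type = logger

# Wire source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

Such a file is typically launched with `flume-ng agent --conf conf --conf-file example.conf --name a1`.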

Internal implementation of Flume

Event: the Event is the basic unit of data transfer in Flume. Flume moves data from a source to its final destination in the form of events.

Source: the Source is responsible for receiving events (or generating them through special mechanisms) and placing them, in batches, into one or more channels. Flume supports data sources such as files and message streams, converting the received data into events. For example, Flume can watch a file directory (the spooling directory source): when a new file appears in the watched directory, the source converts its contents into events and transmits them in near real time.
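The spooling directory source described above can be configured as a sketch like the following (agent/component names and the watched path are hypothetical):

```properties
# Spooling directory source: ingests files dropped into spoolDir
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /var/log/incoming
# Attach the originating file name as an event header
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1
```

Files placed in the directory must be immutable once dropped; Flume renames them (by default with a `.COMPLETED` suffix) after ingesting them.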

Channel: the Channel sits between the Source and the Sink and buffers incoming events; an event is removed from the channel once a sink has successfully delivered it to the next-hop channel or the final destination. Flume currently supports three channel types:

- Memory channel: keeps events in memory; provides high throughput but no durability, so data may be lost on failure.
- File channel: persists events to disk but is more cumbersome to configure; each file channel needs its own data directory and checkpoint directory.
- JDBC channel: uses an embedded Derby database to persist events, providing high reliability; it was intended to eventually replace the file channel for persistent use cases.
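The memory and file channel variants above might be configured as follows (names and paths are hypothetical; note that each file channel gets its own checkpoint and data directories):

```properties
# Memory channel: fast but volatile
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

# File channel: durable; checkpoint and data dirs must not be
# shared with any other file channel
a1.channels.c2.type = file
a1.channels.c2.checkpointDir = /flume/c2/checkpoint
a1.channels.c2.dataDirs = /flume/c2/data
```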

Sink: the Sink is responsible for transferring events to the next hop or the final destination. Sinks support writing data to offline storage such as HDFS, message systems such as Kafka, and so on.
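An HDFS sink of the kind mentioned above could be sketched like this (the namenode address and path are hypothetical):

```properties
# HDFS sink: writes events into time-bucketed directories
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
# Write plain text rather than SequenceFiles
a1.sinks.k1.hdfs.fileType = DataStream
# Roll to a new file every 5 minutes
a1.sinks.k1.hdfs.rollInterval = 300
# Use local time for the %Y-%m-%d escapes (otherwise a
# "timestamp" header must be present on each event)
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.channel = c1
```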

Interceptor: a chain of interceptors attached to a source that filters events and applies custom processing logic, in a predetermined order, wherever needed.
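For instance, an interceptor chain can stamp each event with a timestamp and a static header; the chain runs in the order listed (names and the header key/value are hypothetical):

```properties
# Interceptors run in the declared order: i1 then i2
a1.sources.r1.interceptors = i1 i2
# i1: adds a "timestamp" header with the current time
a1.sources.r1.interceptors.i1.type = timestamp
# i2: adds a fixed header to every event
a1.sources.r1.interceptors.i2.type = static
a1.sources.r1.interceptors.i2.key = datacenter
a1.sources.r1.interceptors.i2.value = dc1
```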

Channel Selector: the channel selector allows a source to choose one or more channels from all configured channels according to preset rules. For example, based on the value of a field in the event header, events can be placed on different channels so that sinks can deliver the data to different target systems.

Channel Selector supports two selector types:

- Replicating: each event is copied to every configured channel.
- Multiplexing: each event is routed to a specific channel based on a header value, i.e. non-replicating mode.
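The two selector modes above can be sketched in configuration like this (channel names, the header name `state`, and its values are hypothetical):

```properties
# Replicating selector (the default): every event goes to
# both c1 and c2
a1.sources.r1.selector.type = replicating
a1.sources.r1.channels = c1 c2

# Multiplexing selector: route by the value of the "state"
# header (shown commented out as the alternative)
# a1.sources.r1.selector.type = multiplexing
# a1.sources.r1.selector.header = state
# a1.sources.r1.selector.mapping.CZ = c1
# a1.sources.r1.selector.mapping.US = c2
# a1.sources.r1.selector.default = c1
```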
