First, environment operating system and software versions
1. Operating system: CentOS Linux release 7.2.1511 (Core); you can query it with cat /etc/redhat-release.
2. Software version: the Kafka version is 0.10.0.0.
Second, basic software preparation
Because the Kafka cluster relies on the ZooKeeper cluster for coordinated management, the ZK cluster
Kafka is only a small link here; it is often used for sending and transferring data. In fact, the official Kafka project has no PHP client implementation. The PHP Kafka libraries now circulating online are class libraries written by programming enthusiasts themselves, so there will ce
1. Install ZooKeeper
Reference: http://www.cnblogs.com/hunttown/p/5452138.html
2. Download: https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz
kafka_2.10-0.9.0.1.tgz # 2.10 refers to the Scala version; 0.9.0.1 is the Kafka version.
3. Installation and configuration
Unzip: tar xzf kafka_2
Kafka-Storm Integrated Deployment
Preface
The main component of distributed real-time computation is Apache Storm, which is based on stream computing. The data source for real-time computation comes from Kafka among the basic data-input components; how to pass Kafka's message data to Storm is discussed in this article.
0. Prepare materials
Normal and stable
In the previous section (click here to jump), we completed the Kafka cluster. In this section we introduce the new API in version 0.9 and test the Kafka cluster's high availability.
1. Use Kafka's producer API to push messages
1) Kafka 0.9.0.1 Java client dependency
2) Write a KafkaUtil tool class
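The KafkaUtil tool class itself is not shown in this excerpt; the real class would wrap Java's org.apache.kafka.clients.producer.KafkaProducer. As a minimal, language-neutral sketch of the push path only, here is an in-memory stand-in in Python — all names here are illustrative, not Kafka's API:

```python
class FakeProducer:
    """Stand-in for a Kafka producer: appends records to per-topic in-memory lists."""
    def __init__(self):
        self.topics = {}

    def send(self, topic, value):
        log = self.topics.setdefault(topic, [])
        log.append(value)
        return len(log) - 1  # offset of the appended record

class KafkaUtil:
    """Hypothetical helper mirroring the tool class the article describes."""
    def __init__(self, producer):
        self.producer = producer

    def push(self, topic, messages):
        """Push each message and return the offsets assigned to them."""
        return [self.producer.send(topic, m) for m in messages]

producer = FakeProducer()
util = KafkaUtil(producer)
offsets = util.push("test", ["m1", "m2", "m3"])
print(offsets)  # → [0, 1, 2]
```

The real producer's send() likewise returns metadata that includes the offset assigned to the record on its partition.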
to server
2. Single-node Kafka. Start the ZooKeeper cluster first; then executing bin/kafka-server-start.sh config/server.properties reports: Unrecognized VM option 'UseCompressedOops'. Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. The reason is that the JDK version does not match. You need to modif
Kafka is developed in the Scala language and runs on the JVM, so you'll need to install the JDK before installing Kafka.
1. JDK installation and configuration
1) On Windows, the JDK installation directory name must not contain spaces. Set JAVA_HOME and CLASSPATH, for example: JAVA_HOME=C:\Java\jdk1.8; CLASSPATH=.;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar. Verification: java-
Installation test
1. Install the JRE/JDK (Kafka relies on the JDK to run; JDK installation is omitted here. Note that the JDK version must support the Kafka version you download, otherwise there will be errors; here I installed JDK 1.7).
2. Download: http://kafka.apache.org/downloads.html (I downloaded th
The log and index files are as follows:
00000000000000000000.log: the file name is the offset of the first message in the segment, zero-padded to 20 digits (offsets are 64-bit, so the largest name fits within 2^64), and an .index file with the same name corresponds to it.
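The naming rule described above can be sketched in a few lines (an illustration, not Kafka source code): each segment file is named after the offset of the first message it contains, zero-padded to 20 digits, which is wide enough for any 64-bit offset:

```python
def segment_file_name(base_offset: int, suffix: str = "log") -> str:
    """Kafka-style segment name: the first offset in the file, zero-padded to 20 digits.
    20 digits suffice because the largest 64-bit offset (2**64 - 1) has 20 digits."""
    return f"{base_offset:020d}.{suffix}"

print(segment_file_name(0, "log"))       # → 00000000000000000000.log
print(segment_file_name(0, "index"))     # → 00000000000000000000.index
print(segment_file_name(368769, "log"))  # → 00000000000000368769.log
```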
Figure 5
Parameter description:
4-byte CRC32: the CRC32 checksum, computed over the buffer excluding this 4-byte CRC field itself.
1-byte "magic": the protocol version number of the data file.
1-byte "attributes": identifies metadata of this version, such as the compres
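To make the layout concrete, here is a simplified sketch (assumptions: the pre-0.10 message format, a null key, no compression; this is illustrative Python, not the broker's code) showing that the CRC32 covers everything after the 4-byte CRC field:

```python
import struct
import zlib

def encode_message(value: bytes, magic: int = 0, attributes: int = 0) -> bytes:
    """Simplified sketch of the classic Kafka message layout:
    4-byte CRC32, 1-byte magic, 1-byte attributes, length-prefixed key and value.
    The key is null (length -1) here; compression bits in `attributes` are ignored."""
    body = struct.pack(">bb", magic, attributes)
    body += struct.pack(">i", -1)                 # null key
    body += struct.pack(">i", len(value)) + value
    crc = zlib.crc32(body) & 0xFFFFFFFF           # CRC covers everything after the CRC field
    return struct.pack(">I", crc) + body

def crc_ok(message: bytes) -> bool:
    """Recompute the CRC over message[4:] and compare with the stored 4-byte CRC."""
    (stored,) = struct.unpack(">I", message[:4])
    return stored == (zlib.crc32(message[4:]) & 0xFFFFFFFF)

msg = encode_message(b"hello")
print(crc_ok(msg))                  # → True
print(crc_ok(msg[:-1] + b"\x00"))   # corrupting one payload byte fails the check
```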
Introduction
Cluster installation:
I. preparations:
1. Version introduction:
Currently we are using kafka_2.9.2-0.8.1 (Scala 2.9.2 is the officially recommended build for Kafka; 2.8.2 and 2.10.2 builds are also available).
2. Environment preparation:
Install JDK 6; the current version is 1.6, and JAVA_HOME is con
Kafka Source-Reading Environment Construction
Development environment: Oracle Java 1.7.0_25 + IDEA + Scala 2.10.5 + Gradle 2.1 + Kafka 0.9.0.1
First, Gradle installation and configuration. Since 0.8.x, the Kafka code has been compiled and built with Gradle, so you first need to install Gradle. Gradle integrates and absorbs Maven's main advantages while also overcoming some of Maven's own limitations -- you can access
configured, for example:
listeners=PLAINTEXT://192.168.180.128:9092, and make sure that port 9092 of the server is accessible.
3. zookeeper.connect: the ZooKeeper address that Kafka connects to. Because this time we use the ZooKeeper bundled with the newer Kafka release, the default configuration points to zook
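Put together, the two settings discussed above would appear in config/server.properties roughly as follows (the IP and port are the example values from the text; broker.id and the ZooKeeper address are assumed defaults):

```properties
# config/server.properties (illustrative values)
broker.id=0
listeners=PLAINTEXT://192.168.180.128:9092
zookeeper.connect=localhost:2181
```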
Using the latest Kafka version, 0.9
Kafka configuration
1. Installation
First install Java; JDK 8 is recommended, otherwise there will be some inexplicable errors. Download kafka_2.11-0.9.0.0.tgz and unpack it:
tar -xzf kafka_2.11-0.9.0.0.tgz
For convenience, rename the extracted directory:
mv kafka_2.11-0.9.0.0 kafka
2. Configure the Kafka server-side properties
Installed is a
"," Method ":" Main "," File ":" Demo.java "," line ": 23}}We see that this format can be easily parsed in whatever language.Kafka integration of the log frameworkWe only use log4j 1.x and log4j 2.x for example.log4j 1.x integration with KafkaFirst of all, the contents of Pom.xml are as follows:Note that the Kafka version number we are using here is 0.8.2.1, but the corresponding 0.9.0.1 is available and 0
Kafka Quick Start
Step 1: Download the code
Step 2: Start the server
Step 3: Create a topic
Step 4: Send some messages
Step 5: Start a consumer
Step 6: Setting up a multi-broker cluster
The configurations are as follows:
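The per-broker configuration files are not shown in this excerpt; in the official quick start they are copies of config/server.properties with three overrides per broker, roughly as follows (the values shown are the quick start's, offered here as an assumption):

```properties
# config/server-1.properties (per-broker overrides, as in the Kafka quick start)
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1

# config/server-2.properties
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
```

broker.id must be unique across the cluster; the distinct ports and log directories let several brokers coexist on one machine.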
The "leader" node is responsible for all read and write operations on specified partitions.
"Replicas" copies the node list of this partition log, whether or not the leader is included
The set of "isr
project. I recommend the second, because the Scala version and the Kafka version obtained by compiling Kafka from source are matched (but this may sometimes conflict with the environment that Eclipse's plugin needs, so it's best to also install the first one, just in case); and generally we write with a Java project, so
data to one or more Kafka topics.
2. The Consumer API allows an application to subscribe to one or more topics and process the streams of records produced to them.
3. The Streams API allows an application to act as a stream processor, consuming input streams from one or more topics and producing output streams to one or more topics, effectively transforming inputs into outputs.
4. The Connector API allows you to create and run reusable connect
Pre-Preparation
ELK official website: https://www.elastic.co/, with package downloads and thorough documentation.
Zookeeper Official website: https://zookeeper.apache.org/
Kafka official website: http://kafka.apache.org/documentation.html, with package downloads and thorough documentation.
Flume Official website: https://flume.apache.org/
Heka Official website: https://hekad.readthedocs.io/en/v0.10.0/
The system is CentOS 6.6, 64-bit.
Here is a brief introduction to the Kafka cluster construction process:
Prep environment: at least 3 Linux servers (the author uses 5 Red Hat cloud servers).
Step one: Install the JDK/JRE
Step two: Install ZooKeeper (Kafka comes with a ZooKeeper service, but it is recommended that you build a separate ZooKeeper cluster, which can be shared with other applicatio
to the network layer of the various anomalies.
In a distributed system, the protocol is defined by the server, and as long as the client sends requests according to the protocol, it can be sure its requests will be received and processed properly. So the client can in fact be implemented independently in different languages; the official wiki lists the majority of languages currently supported. Because different languages have their own network-layer progra
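As a concrete hint of what "following the protocol" means, here is a sketch of building a Kafka-style request header with nothing but Python's struct module. The field layout follows the classic protocol guide (int16 API key, int16 API version, int32 correlation id, length-prefixed client id string); treat the exact layout as an assumption to verify against your Kafka version:

```python
import struct

def request_header(api_key: int, api_version: int,
                   correlation_id: int, client_id: str) -> bytes:
    """Kafka-style request header: big-endian int16 api_key, int16 api_version,
    int32 correlation_id, then an int16-length-prefixed client_id string.
    Any language that can write these bytes over TCP can act as a client."""
    cid = client_id.encode("utf-8")
    return struct.pack(">hhih", api_key, api_version, correlation_id, len(cid)) + cid

header = request_header(api_key=3, api_version=0, correlation_id=42, client_id="demo")
# On the wire the whole request is additionally prefixed with an int32 size:
framed = struct.pack(">i", len(header)) + header
print(len(header))  # → 14  (2 + 2 + 4 + 2 bytes of header fields + 4 bytes of "demo")
```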