Building a Druid Cluster Environment


Download Druid

http://static.druid.io/artifacts/releases/druid-services-0.6.145-bin.tar.gz
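
For example, the archive can be fetched with curl:

curl -O http://static.druid.io/artifacts/releases/druid-services-0.6.145-bin.tar.gz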

Extract

tar -zxvf druid-services-*-bin.tar.gz
cd druid-services-*

External dependencies

1.A "Deep" storage, as a backup database

2.mysql

Set up MySQL

mysql -u root

GRANT ALL ON druid.* TO 'druid'@'localhost' IDENTIFIED BY 'diurd';
CREATE DATABASE druid;
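
As a quick sanity check, you can log in with the new account and confirm that the druid database is visible (the exact output depends on your MySQL installation):

mysql -u druid -pdiurd -e "SHOW DATABASES;"
# The list should include the druid database created above.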

3. Apache ZooKeeper

curl http://apache.osuosl.org/zookeeper/zookeeper-3.4.5/zookeeper-3.4.5.tar.gz -o zookeeper-3.4.5.tar.gz
tar xzf zookeeper-3.4.5.tar.gz

cd zookeeper-3.4.5

cp conf/zoo_sample.cfg conf/zoo.cfg

./bin/zkServer.sh start

cd ..
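
To verify that ZooKeeper is running, you can send it the standard ruok four-letter command with netcat (if netcat is installed); it should answer imok:

echo ruok | nc localhost 2181
# Expected reply: imok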

Cluster

Start some nodes and load the data. First of all, make sure that the configuration folders for the different node types exist under the config/ directory, as shown below.
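
In the 0.6.x distribution, the config/ directory typically contains one folder per node type, roughly like this (names may vary slightly between releases):

ls config/
# broker  coordinator  historical  overlord  realtime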

Once MySQL and ZooKeeper are configured, it takes five steps to bring the Druid cluster up, starting four types of nodes.

1: Start the coordinator node

The coordinator node is responsible for load balancing in the Druid cluster.

Configuration file

config/coordinator/runtime.properties

druid.host=localhost
druid.service=coordinator
druid.port=8082

druid.zk.service.host=localhost

druid.db.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
druid.db.connector.user=druid
druid.db.connector.password=diurd

druid.coordinator.startDelay=PT70s

Start command

java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/coordinator io.druid.cli.Main server coordinator &

Three tables are created in the druid database after the coordinator is started.
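
You can check this from the MySQL shell; with the 0.6.x metadata schema the table list should look roughly like this:

mysql -u druid -pdiurd druid -e "SHOW TABLES;"
# Expected tables: druid_config, druid_rules, druid_segments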

2: Start the historical node

The historical node is the core component of the cluster and is responsible for loading data so that it can be queried.

Configuration file

config/historical/runtime.properties

druid.host=localhost
druid.service=historical
druid.port=8081

druid.zk.service.host=localhost

druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.143"]

# Dummy read-only AWS account (used to download example data)
druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b
druid.s3.accessKey=AKIAIMKECRUYKDQGR6YQ

druid.server.maxSize=10000000000

# Change these to make Druid faster
druid.processing.buffer.sizeBytes=100000000
druid.processing.numThreads=1

druid.segmentCache.locations=[{"path": "/tmp/druid/indexCache", "maxSize": 10000000000}]

Start command

java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/historical io.druid.cli.Main server historical &
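
Once started, the historical node announces itself in ZooKeeper. Assuming the default base path of /druid (the exact znode layout is an assumption and may differ between versions), you can inspect the announcements with the ZooKeeper CLI:

zookeeper-3.4.5/bin/zkCli.sh -server localhost:2181 ls /druid/announcements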

3: Start the broker node

The broker is responsible for querying data from the historical and realtime nodes.

Configuration file

config/broker/runtime.properties

druid.host=localhost
druid.service=broker
druid.port=8080

druid.zk.service.host=localhost

Start command

java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:config/broker io.druid.cli.Main server broker &
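
To check that the broker is up and listening, you can ask it for the list of queryable datasources; this list stays empty until the segment registered in step 4 has been loaded by the historical node:

curl http://localhost:8080/druid/v2/datasources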

4: Load data

Because this is a test environment, you need to insert the segment metadata manually; in a production cluster it is saved to the database automatically.

USE druid;
INSERT INTO druid_segments (id, dataSource, created_date, start, end, partitioned, version, used, payload) VALUES ('wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z', 'wikipedia', '2013-08-08T21:26:23.799Z', '2013-08-01T00:00:00.000Z', '2013-08-02T00:00:00.000Z', '0', '2013-08-08T21:22:48.989Z', '1', '{\"dataSource\":\"wikipedia\",\"interval\":\"2013-08-01T00:00:00.000Z/2013-08-02T00:00:00.000Z\",\"version\":\"2013-08-08T21:22:48.989Z\",\"loadSpec\":{\"type\":\"s3_zip\",\"bucket\":\"static.druid.io\",\"key\":\"data/segments/wikipedia/20130801T000000.000Z_20130802T000000.000Z/2013-08-08T21_22_48.989Z/0/index.zip\"},\"dimensions\":\"dma_code,continent_code,geo,area_code,robot,country_name,network,city,namespace,anonymous,unpatrolled,page,postal_code,language,newpage,user,region_lookup\",\"metrics\":\"count,delta,variation,added,deleted\",\"shardSpec\":{\"type\":\"none\"},\"binaryVersion\":9,\"size\":24664730,\"identifier\":\"wikipedia_2013-08-01T00:00:00.000Z_2013-08-02T00:00:00.000Z_2013-08-08T21:22:48.989Z\"}');
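
To confirm that the metadata row was inserted (the coordinator reads this table and then tells the historical node to load the segment), you can query it directly:

mysql -u druid -pdiurd druid -e "SELECT id, dataSource, used FROM druid_segments;"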

5: Start the realtime node

Configuration file

config/realtime/runtime.properties

druid.host=localhost
druid.service=realtime
druid.port=8083

druid.zk.service.host=localhost

druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.143", "io.druid.extensions:druid-kafka-seven:0.6.143"]

# Change this config to db to hand off to the rest of the Druid cluster
druid.publish.type=noop

# These configs are only required for real hand off
# druid.db.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid
# druid.db.connector.user=druid
# druid.db.connector.password=diurd

druid.processing.buffer.sizeBytes=100000000
druid.processing.numThreads=1

druid.monitoring.monitors=["io.druid.segment.realtime.RealtimeMetricsMonitor"]

Start command

java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.realtime.specFile=examples/wikipedia/wikipedia_realtime.spec -classpath lib/*:config/realtime io.druid.cli.Main server realtime &

Finally, you can query the data.
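
As a quick check (a minimal sketch; give the historical node a little time to pull the segment from S3 first), a timeBoundary query sent to the broker on port 8080 should return the time range covered by the wikipedia segment registered in step 4:

cat > time_boundary.json <<'EOF'
{
  "queryType": "timeBoundary",
  "dataSource": "wikipedia"
}
EOF
curl -X POST 'http://localhost:8080/druid/v2/?pretty' -H 'Content-Type: application/json' -d @time_boundary.json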
