Storm Cluster Setup

Cluster structure

A Storm cluster superficially resembles a Hadoop cluster, but where Hadoop runs "MapReduce jobs", Storm runs "topologies". Jobs and topologies are very different: a MapReduce job eventually finishes, whereas a topology keeps processing messages forever (or until you kill it).

A Storm cluster has two kinds of nodes: the control (master) node and the worker nodes. The control node runs a daemon called "Nimbus", which resembles Hadoop's "JobTracker". Nimbus is responsible for distributing code across the cluster, assigning tasks to workers, and monitoring for failures. Each worker node runs a daemon called "Supervisor". The Supervisor listens for work assigned to its machine and starts or stops worker processes according to what Nimbus assigns to it. Each worker process executes a subset of a topology (that is, a sub-topology); a running topology consists of many worker processes spread across many machines. A ZooKeeper cluster handles all coordination between Nimbus and the Supervisors (a complete topology may be split into several sub-topologies that are carried out by multiple Supervisors).

In addition, both the Nimbus daemon and the Supervisor daemon are fail-fast and stateless; all state is kept in ZooKeeper or on local disk. This means you can kill the Nimbus and Supervisor processes with kill -9 and restart them, and they will recover their state and continue working as if nothing had happened. This design makes Storm extremely stable. The master never talks to the workers directly; everything goes through ZooKeeper as an intermediary, which decouples the master from the workers, and because state is stored in the ZooKeeper cluster, whichever side fails can recover quickly.

Cluster build process

1. Install a ZooKeeper cluster.
2. Unpack apache-storm-0.9.1-incubating.tar.gz.
3. Edit conf/storm.yaml.
4. On the Nimbus node, run "bin/storm nimbus >/dev/null 2>&1 &" to start the Nimbus daemon in the background.
5. On each Supervisor node, run "bin/storm supervisor >/dev/null 2>&1 &" to start the Supervisor daemon in the background.
6. On the Nimbus node, run "bin/storm ui >/dev/null 2>&1 &" to start the UI daemon in the background. Once it is up, open http://{nimbus host}:8080 to observe worker resource usage and the state of the topologies running in the cluster.

The Storm configuration file follows YAML conventions: use two spaces per level of indentation, never tabs. A string ending with ":" is a key name and is followed by its value, with at least one space between the ":" and the value. The "-" in front of each list element is required and must be followed by at least one space. You can also use the inline flow form

[Value1, Value2, Value3] to represent a list on a single line.
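
For example, a minimal conf/storm.yaml that follows these conventions might look like the sketch below; the host names, directory, and port numbers are illustrative placeholders, not values from this article:

    storm.zookeeper.servers:
      - "zk1.example.com"
      - "zk2.example.com"
    nimbus.host: "nimbus.example.com"
    storm.local.dir: "/var/storm"
    supervisor.slots.ports: [6700, 6701, 6702, 6703]

Note the two-space indentation of the list items under storm.zookeeper.servers, the space after each ":" and each "-", and the inline flow list used for supervisor.slots.ports.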

Submitting jobs to a cluster

Package the topology as a jar and submit it with the storm jar command, passing the jar and the fully qualified main class (plus any arguments the main class expects):

    storm jar your-topology.jar XxxxMainClass [arguments]
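
The main class named on the command line builds the topology and hands it to the cluster through StormSubmitter. Below is a minimal sketch assuming Storm 0.9.x's backtype.storm API; the class names, component names, and topology name are illustrative placeholders, not taken from this article:

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.TopologyBuilder;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class XxxxMainClass {

        // A trivial example bolt that appends "!" to every word it receives.
        public static class ExclaimBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple input, BasicOutputCollector collector) {
                collector.emit(new Values(input.getString(0) + "!"));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("word"));
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // TestWordSpout ships with Storm and emits random words on the field "word".
            builder.setSpout("words", new TestWordSpout(), 2);
            builder.setBolt("exclaim", new ExclaimBolt(), 4).shuffleGrouping("words");

            Config conf = new Config();
            conf.setNumWorkers(3); // ask the cluster for three worker processes

            // The topology runs until it is explicitly killed (e.g. with "storm kill").
            StormSubmitter.submitTopology("exclamation-topology", conf, builder.createTopology());
        }
    }

Once submitted, the topology shows up in the Storm UI at http://{nimbus host}:8080 and keeps running until it is killed, for example with "storm kill exclamation-topology".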
