"Excavator" Upgrade road (--hbase) harvest in cluster installation

Source: Internet
Author: User

A rough calculation: from Friday to this Tuesday, I have spent three days wrestling with Hadoop. These three days made me quite anxious. As an apprentice, even though my senior colleague said nothing, I wanted to finish this basic deployment work immediately and felt I had dragged it out for too long. A quick summary: on the first day I worked through the Hadoop standalone and pseudo-distributed installation; on the second day I attempted the Hive installation, which failed; on the third day I worked on the HBase cluster installation and got it installed successfully on the master node.

Now let me talk about today's harvest in detail. I mainly relied on two references: 1. "Distributed real-time log system (IV): Environment construction, CentOS 1.0.1 distributed cluster setup" (I read it over a VPN, so I am not sure whether it opens normally for everyone), and 2. "A detailed tutorial on configuring a fully distributed HBase environment". Checking the two against each other helped me sort out many concepts that had been vague in my mind. I will not write out the detailed installation process; instead I will note some of the problems I ran into.

The first step: I set up four virtual machines on the company's server, so after logging into the master node I run su - hadoop to switch to the hadoop user. Using the root user is not recommended, presumably for security reasons, though I do not fully understand why. I hit two problems here. First, there must be spaces between su, -, and hadoop; if you type su-hadoop, the command is not found. Second, when I edited the file with vim /etc/profile, I found I could not save my changes, and even forcing the write with :wq! did not work. Switching to sudo vim /etc/profile only produced a prompt that hadoop is not in the sudoers file. The solution is to run visudo, which opens a vi editing interface: press : to enter command mode, type set nu to show line numbers, find the line around line 99 that grants root ALL=(ALL) ALL, and add a matching entry for hadoop (I forget the exact wording, but it mirrors the root line). Remember not to run visudo as the hadoop user; switch to the root user first. Save and exit after making the change. A sketch of these commands is below.
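For reference, here is a minimal sketch of the commands described above. The line number 99 and the exact sudoers wording are recalled from memory in the text, so treat them as assumptions rather than an exact recipe.

    # Switch to the hadoop user (note the spaces around "-"):
    su - hadoop

    # Editing /etc/profile as hadoop fails, and sudo complains that
    # hadoop is not in the sudoers file, so switch back to root first:
    su - root
    visudo
    # Inside visudo, ":set nu" shows line numbers; near the existing
    #   root    ALL=(ALL)       ALL
    # entry, add a matching line for the hadoop user (assumed form):
    #   hadoop  ALL=(ALL)       ALL

    # Back as the hadoop user, editing the profile with sudo now works:
    su - hadoop
    sudo vim /etc/profile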

The second step: run ./sbin/start-dfs.sh. Note that this must be run from inside your Hadoop directory, although in principle, with the environment variables configured properly, it should also be possible to run it from anywhere else; that is something I still need to think about.
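As a rough sketch of what that environment-variable setup in /etc/profile might look like, assuming an installation path of /home/hadoop/hadoop-2.x.x (an example path, not the one from this article):

    # Assumed Hadoop installation path; adjust to your own layout.
    export HADOOP_HOME=/home/hadoop/hadoop-2.x.x
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

    # After running "source /etc/profile", the daemons can be started
    # from any directory instead of only from inside $HADOOP_HOME:
    #   start-dfs.sh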

The third step is still partly unfinished; I will continue once I have completed the HBase and Hive cluster installation.

"Excavator" Upgrade road (--hbase) harvest in cluster installation

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.