Hadoop disk expansion operation record


A record of expanding disks in a Cloudera (CDH) cluster

1. Environment: 4 hosts, each with one new 2 TB HDD

2. Overview of the steps:

A. Partition and mount the disk (the mount directory name and path must be consistent across all hosts)

B. Create the required folders inside the mounted partition and set their ownership

C. In the CDH HDFS configuration interface, add the new HDFS data directory, deploy the client configuration, then perform a rolling restart

3. Procedure

Partitioning (LVM logical volumes):

# pvcreate /dev/sdc

# vgcreate vgroup03 /dev/sdc

# lvcreate -n cdh01 -L 1.8T vgroup03

# mkfs.ext4 /dev/mapper/vgroup03-cdh01
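To confirm the partitioning worked, the LVM state can be inspected with read-only commands. This sketch falls back gracefully on machines that lack the LVM tools, so it can be rehearsed anywhere; on the real hosts it should list vgroup03 and the cdh01 logical volume:

```shell
# Read-only verification of the new LVM layout; pvs/vgs/lvs need the LVM
# tools and root to show anything useful, so fall back gracefully otherwise.
for cmd in pvs vgs lvs; do
  if command -v "$cmd" >/dev/null 2>&1; then
    "$cmd" || true
  else
    echo "$cmd not available on this machine"
  fi
done
# lsblk -f additionally shows the ext4 filesystem created by mkfs.ext4
```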

Mount (be sure to run the commands in this order):

# mkdir /cdh01

# mount /dev/mapper/vgroup03-cdh01 /cdh01

Auto-mount on boot:

# vim /etc/fstab

Append the line:

/dev/mapper/vgroup03-cdh01 /cdh01 ext4 defaults 0 1

Save and exit.
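Before relying on the fstab entry at boot, it is worth a quick sanity check. This sketch verifies the six-field format of the line; afterwards, running mount -a as root followed by df -h /cdh01 confirms the mount without rebooting:

```shell
# Sanity-check the fstab entry before a reboot depends on it:
# a valid fstab line has exactly six whitespace-separated fields.
ENTRY="/dev/mapper/vgroup03-cdh01 /cdh01 ext4 defaults 0 1"
if [ "$(echo $ENTRY | wc -w)" -eq 6 ]; then
  echo "fstab entry looks well-formed"
fi
# Then, as root: mount -a && df -h /cdh01
```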

Create the folders:

# mkdir /cdh01/dfs

# mkdir /cdh01/dfs/dn

Set ownership:

# chown -R hdfs:hadoop /cdh01/dfs/
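The folder and ownership steps above can be rehearsed with a parameterized mount point. The /tmp default below is only so the sketch can run without root; on the real hosts MOUNT would be /cdh01:

```shell
# MOUNT is parameterized so this can be rehearsed without root;
# on the production hosts it would be /cdh01 (the mounted LV).
MOUNT="${MOUNT:-/tmp/cdh01}"
mkdir -p "$MOUNT/dfs/dn"
# chown needs root and an existing hdfs user/hadoop group;
# ignore the failure during a rehearsal.
chown -R hdfs:hadoop "$MOUNT/dfs" 2>/dev/null || true
ls -ld "$MOUNT/dfs/dn"
```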

---------- Do the same on all 4 hosts; the partition mount directory must be consistent ----------
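Repeating the per-host steps can be scripted over ssh. The hostnames below are hypothetical (the article does not name the hosts), and the default dry-run mode only prints the commands; note the fstab edit is left out and still has to be done on each host:

```shell
# Hypothetical hostnames -- the article does not give the real ones.
HOSTS="node01 node02 node03 node04"
# DRY_RUN=1 (the default here) only prints the commands;
# set DRY_RUN=0 to actually execute them over ssh as root.
for h in $HOSTS; do
  for cmd in \
      "pvcreate /dev/sdc" \
      "vgcreate vgroup03 /dev/sdc" \
      "lvcreate -n cdh01 -L 1.8T vgroup03" \
      "mkfs.ext4 /dev/mapper/vgroup03-cdh01" \
      "mkdir /cdh01" \
      "mount /dev/mapper/vgroup03-cdh01 /cdh01" \
      "mkdir -p /cdh01/dfs/dn" \
      "chown -R hdfs:hadoop /cdh01/dfs"; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "[$h] $cmd"
    else
      ssh "root@$h" "$cmd"
    fi
  done
done
```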

Log in to Cloudera Manager and, from the cluster, select HDFS.

Select the Configuration tab in the middle section.

In the menu on the left, select DataNode Default Group.

Click the "+" sign after the existing data directory, enter the directory we just mounted and prepared (/cdh01/dfs/dn), and save.

Select the Instances tab.

Select the DataNode instances and choose Rolling Restart.

After the restart, check the HDFS cluster overview page (or run hdfs dfsadmin -report); the added capacity should now be visible.

In the DataNode Group "Advanced" section, for the DataNode volume choosing policy on the right, select "Available Space", so that the two properties below it (the 10 GB reserved-space threshold and the 0.75 preference fraction) take effect and balance space usage across the volumes.
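In stock Hadoop, the "Available Space" policy and the two properties mentioned correspond to the following configuration keys (names as in hdfs-default.xml; the CDH UI may label them differently):

```shell
# Property names from stock Hadoop's hdfs-default.xml; the values below
# are the defaults the article refers to ("10 GB" and "0.75").
POLICY_KEY="dfs.datanode.fsdataset.volume.choosing.policy"
POLICY_VAL="org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy"
THRESHOLD_KEY="dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold"
THRESHOLD_BYTES=$((10 * 1024 * 1024 * 1024))   # the "10 GB" threshold, in bytes
FRACTION_KEY="dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction"
FRACTION="0.75"                                # share of new blocks sent to emptier volumes
printf '%s=%s\n' "$POLICY_KEY" "$POLICY_VAL" \
                 "$THRESHOLD_KEY" "$THRESHOLD_BYTES" \
                 "$FRACTION_KEY" "$FRACTION"
```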



This article is from the "Newborn Calf" blog; please keep this source: http://agent.blog.51cto.com/2905948/1672573

