Hadoop memory configuration

Source: Internet
Author: User
Tags: hortonworks

There are two ways to configure Hadoop memory settings: run the helper script provided by Hortonworks, or calculate the YARN and MapReduce memory sizes by hand. Only the script method is recorded here.

Use the wget command to download the script from Hortonworks:

wget http://public-repo-1.hortonworks.com/HDP/tools/

Extract the archive, then run the hdp-configuration-utils.py script:

python hdp-configuration-utils.py <options>

The script takes the following options:

-c CORES     The number of cores on each host.
-m MEMORY    The amount of memory on each host, in GB.
-d DISKS     The number of disks on each host.
-k HBASE     "True" if HBase is installed, "False" if not.

The number of cores can be found with the nproc command, the memory size with the free -m command, and the number of disks with the lsblk -s or sudo fdisk -l command.
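The three hardware values can be collected in one pass; a minimal sketch, assuming a Linux host with lsblk available (the variable names are illustrative, not part of the script):

```shell
# Inspect the host to get values for -c, -m and -d; run on the Hadoop node itself.
cores=$(nproc)                                   # -c: number of cores
mem_mb=$(free -m | awk '/^Mem:/ {print $2}')     # total memory in MB
mem_gb=$(( (mem_mb + 512) / 1024 ))              # -m: memory rounded to GB
disks=$(lsblk -d -n -o TYPE 2>/dev/null | grep -c disk || true)  # -d: physical disks
echo "cores=$cores memory=${mem_gb}GB disks=$disks"
```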

Calculate the value required for each option and run the command. For example:

python hdp-configuration-utils.py -c 24 -m 16 -d 8 -k False

The following result is returned:

Using cores=24 memory=16GB disks=8 hbase=False
Profile: cores=24 memory=14336MB reserved=2GB usableMem=14GB disks=8
Num Container=14
Container Ram=1024MB
Used Ram=14GB
Unused Ram=2GB
yarn.scheduler.minimum-allocation-mb=1024
yarn.scheduler.maximum-allocation-mb=14336
yarn.nodemanager.resource.memory-mb=14336
mapreduce.map.memory.mb=1024
mapreduce.map.java.opts=-Xmx768m
mapreduce.reduce.memory.mb=2048
mapreduce.reduce.java.opts=-Xmx1536m
yarn.app.mapreduce.am.resource.mb=1024
yarn.app.mapreduce.am.command-opts=-Xmx768m
mapreduce.task.io.sort.mb=384
tez.am.resource.memory.mb=2048
tez.am.java.opts=-Xmx1536m
hive.tez.container.size=1024
hive.tez.java.opts=-Xmx768m
hive.auto.convert.join.noconditionaltask.size=134217000
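The core arithmetic behind these numbers can be reconstructed from the output: the container count is capped by cores, disks, and usable memory. The sketch below is an assumption inferred from the example output, not the script's actual source; the 2 GB reservation and 1024 MB minimum container size are the values shown for a 16 GB host (the real script uses lookup tables for these):

```python
def yarn_memory_profile(cores, memory_gb, disks):
    """Rough sketch of hdp-configuration-utils.py's container math
    (inferred from the example output; hypothetical, not the real script)."""
    reserved_gb = 2                  # OS reservation for a 16 GB host
    usable_mb = (memory_gb - reserved_gb) * 1024
    min_container_mb = 1024          # minimum container size for 8-24 GB hosts
    # Containers are limited by CPU, spindles, and usable memory.
    containers = int(min(2 * cores, 1.8 * disks, usable_mb / min_container_mb))
    ram_per_container = max(min_container_mb, usable_mb // containers)
    return {
        "num_containers": containers,
        "yarn.scheduler.minimum-allocation-mb": ram_per_container,
        "yarn.scheduler.maximum-allocation-mb": containers * ram_per_container,
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container,
        "mapreduce.map.memory.mb": ram_per_container,
        "mapreduce.reduce.memory.mb": 2 * ram_per_container,
    }

profile = yarn_memory_profile(cores=24, memory_gb=16, disks=8)
print(profile)  # num_containers=14, allocations 1024/14336 as in the output above
```

With the example inputs this reproduces Num Container=14 and the 1024/14336 MB allocations shown above.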

Finally, configure these parameters in the mapred-site.xml and yarn-site.xml files by referring to the results above.
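For example, the YARN values from the script output could be carried into yarn-site.xml as follows (property names are taken from the output above; the file normally lives in the Hadoop configuration directory, e.g. /etc/hadoop/conf):

```xml
<!-- yarn-site.xml: values taken from the script output above -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>14336</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>14336</value>
</property>
```

The MapReduce and Tez values map into mapred-site.xml and tez-site.xml in the same way. Restart the NodeManager and ResourceManager for the changes to take effect.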
