Install hadoop on Mac

Source: Internet
Author: User
Article Directory
    • Obtain Java
    • Obtain hadoop
    • Set Environment Variables
    • Configure hadoop-env.sh
    • Configure core-site.xml
    • Configure hdfs-site.xml
    • Configure mapred-site.xml
    • Install HDFS
    • Start hadoop
    • Simple debugging
Install hadoop on Mac

For a user who has never touched *nix before, getting a series of things done on the command line still takes real effort, so I am writing this record down for future reference.

 

Obtain Java

The operating system running on my Mac is OS X 10.7 Lion, and Java was already installed. You can confirm the Java version by running the java -version command in Utilities > Terminal. If Java is not installed, you can download it at: http://support.apple.com/kb/dl1421.
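To double-check from the shell before going further, a small sketch (plain POSIX sh; the messages are our own wording):

```shell
# Report whether a java binary is on the PATH, and its version if so.
if command -v java >/dev/null 2>&1; then
  JAVA_STATUS=present
  java -version 2>&1 | head -n 1
else
  JAVA_STATUS=missing
  echo "Java not found; install it from Apple's download page first."
fi
```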

 

Obtain hadoop

You can find the download address by searching Baidu; I downloaded the stable release, version 1.0.4.

Decompress the package after the download finishes. The directory I used is /Users/Billy/hadoop.

 

Set Environment Variables

Before Hadoop can be started, three files need to be configured.

But first, much as you would on Windows, we need to set a couple of environment variables to make command-line use convenient:

export HADOOP_HOME=/Users/Billy/hadoop

export PATH=$PATH:$HADOOP_HOME/bin
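To make these settings survive new terminal windows, they can be appended to ~/.bash_profile (a sketch; the path matches this article's install directory, so adjust it to yours):

```shell
# Append the two exports to the shell profile, then load them into the
# current session.
PROFILE="$HOME/.bash_profile"
cat >> "$PROFILE" <<'EOF'
export HADOOP_HOME=/Users/Billy/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF
. "$PROFILE"
echo "HADOOP_HOME is $HADOOP_HOME"
```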

 

Configure hadoop-env.sh

In the hadoop/conf directory, locate hadoop-env.sh, open it in an editor, and make the following settings (uncommenting each line):

export JAVA_HOME=/Library/Java/Home

export HADOOP_HEAPSIZE=2000

export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"

Note that on OS X it is best to set the third option; otherwise the error "Unable to load realm info from SCDynamicStore" will be reported.

 

Configure core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/Billy/hadoop/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>

Configure hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Configure mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
</configuration>
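Before moving on, it is worth confirming that each edited file is still well-formed XML, since Hadoop's error messages for a broken config file are unhelpful. A quick sketch (check_xml is our own helper name, and it assumes python3 is available):

```shell
# check_xml exits non-zero if the given file is not well-formed XML.
check_xml() {
  python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$1"
}

# Demonstrate on a minimal file shaped like hdfs-site.xml.
cat > /tmp/hdfs-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
check_xml /tmp/hdfs-site-sample.xml && echo "well-formed"
```

Run check_xml against core-site.xml, hdfs-site.xml, and mapred-site.xml in the conf directory before starting anything.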

 

Install HDFS

After the above configuration, you can initialize HDFS.

// Note that "namenode" must be written as one word, or an error is returned.

// If you restart the computer and Hadoop will not run because the HDFS file system is missing, start again from this step. This command formats the file system.

$HADOOP_HOME/bin/hadoop namenode -format

If it succeeds, the output is similar to the following:

BillymatoMacBook-Air:hadoop Billy$ $HADOOP_HOME/bin/hadoop namenode -format

Warning: $HADOOP_HOME is deprecated.

12/12/02 17:11:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = BillymatoMacBook-Air.local/192.168.1.102
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
12/12/02 17:11:12 INFO util.GSet: VM type       = 64-bit
12/12/02 17:11:12 INFO util.GSet: 2% max memory = 39.9175 MB
12/12/02 17:11:12 INFO util.GSet: capacity      = 2^22 = 4194304 entries
12/12/02 17:11:12 INFO util.GSet: recommended=4194304, actual=4194304
12/12/02 17:11:12 INFO namenode.FSNamesystem: fsOwner=Billy
12/12/02 17:11:12 INFO namenode.FSNamesystem: supergroup=supergroup
12/12/02 17:11:12 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/12/02 17:11:12 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/12/02 17:11:12 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/12/02 17:11:13 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/12/02 17:11:13 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/12/02 17:11:13 INFO common.Storage: Storage directory /Users/Billy/hadoop/tmp/hadoop-Billy/dfs/name has been successfully formatted.
12/12/02 17:11:13 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at BillymatoMacBook-Air.local/192.168.1.102
************************************************************/

 

Start hadoop

It's easy; just one command.

$HADOOP_HOME/bin/start-all.sh

You will generally be asked to enter your password three times, because the start scripts log in to localhost over SSH to launch the daemons.
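If you would rather not type the password for every daemon the start script launches over SSH, you can set up key-based login to localhost. A sketch, assuming OpenSSH; enable Remote Login in System Preferences > Sharing first, and note that id_rsa_hadoop is a key name we chose so as not to clobber an existing key:

```shell
# Create a key pair with an empty passphrase and authorize it for localhost.
KEYFILE="$HOME/.ssh/id_rsa_hadoop"
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t rsa -P '' -f "$KEYFILE"
cat "$KEYFILE.pub" >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```

Afterwards, ssh localhost should log in without a prompt; with a non-default key name like this one you may also need an IdentityFile entry in ~/.ssh/config.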

 

Simple debugging

To check whether everything started successfully, you can run the following example:

$ hadoop jar $HADOOP_HOME/hadoop-examples-1.0.4.jar pi 10 100

If it succeeds, output similar to the following is displayed:

BillymatoMacBook-Air:hadoop Billy$ hadoop jar $HADOOP_HOME/hadoop-examples-1.0.4.jar pi 10 100

Warning: $HADOOP_HOME is deprecated.

 

Number of maps = 10

Samples per map = 100

Wrote input for map #0

Wrote input for map #1

Wrote input for map #2

Wrote input for map #3

Wrote input for map #4

Wrote input for map #5

Wrote input for map #6

Wrote input for map #7

Wrote input for map #8

Wrote input for map #9 

 

With that, single-node Hadoop is installed on Mac OS X. Enter your Hadoop world!

