1. Install the JDK (I installed Java 1.7).
2. Create an administrator account.
3. Install the SSH service (skip this step if it is already installed).
4. Set up passwordless SSH login.
These steps are straightforward; for details, refer to the article on installing Hadoop under Ubuntu.
5. Download and unzip the Hadoop 2.6.0 installation package
tar -xzvf hadoop-2.6.0.tar.gz -C /Users/hadoop
6. Configure hadoop-env.sh, core-site.xml, mapred-site.xml, hdfs-site.xml, and yarn-site.xml.
Add the JAVA_HOME path to hadoop-env.sh:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_75.jdk/Contents/Home
The core-site.xml configuration is as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/hadoop/hadoop-2.6.0/tmp</value>
  </property>
</configuration>
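After editing a config file it can be worth a quick sanity check that the file contains what you expect before restarting Hadoop. A minimal sketch using a temporary copy with the values from this guide (the temp directory and the grep check are illustrative, not part of Hadoop):

```shell
# Write a sample core-site.xml to a temp dir and confirm the NameNode
# address is present. The values are the ones used in this guide.
tmp=$(mktemp -d)
cat > "$tmp/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
grep -q 'hdfs://localhost:9000' "$tmp/core-site.xml" && echo "core-site.xml OK"
```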
The mapred-site.xml configuration is as follows:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
The hdfs-site.xml configuration is as follows:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/Users/hadoop/hadoop-2.6.0/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/Users/hadoop/hadoop-2.6.0/tmp/dfs/data</value>
  </property>
</configuration>
The yarn-site.xml configuration is as follows:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
7. Configure the system environment variables
Open /etc/profile and add the following:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_75.jdk/Contents/Home
export HADOOP_HOME=/Users/hadoop/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
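The changes to /etc/profile only take effect in a shell that has sourced it. A quick way to confirm that the PATH picked up the Hadoop directories (the HADOOP_HOME path is the one assumed in this guide):

```shell
# Re-export the variables (normally done via `source /etc/profile`)
# and confirm the Hadoop bin directory landed on the PATH.
export HADOOP_HOME=/Users/hadoop/hadoop-2.6.0   # path from this guide
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "PATH updated"
```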
8. Format the HDFS file system
hdfs namenode -format
9. Start the Hadoop service
start-all.sh
When the installation is correct, the following five Java processes should be running:
NameNode
DataNode
SecondaryNameNode
NodeManager
ResourceManager
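The usual way to list these processes is the JDK's `jps` tool. As a sketch, here is a small helper that scans a jps-style listing for the five daemons; the function name and the sample listing are illustrative, and on a live cluster you would pass `"$(jps)"` instead:

```shell
# Check a jps-style listing for the five expected Hadoop daemons.
check_daemons() {
  missing=0
  for d in NameNode DataNode SecondaryNameNode NodeManager ResourceManager; do
    printf '%s\n' "$1" | grep -qw "$d" || { echo "missing: $d"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all five daemons running"
}

# Sample listing with made-up PIDs, not captured from a real run:
sample='2101 NameNode
2188 DataNode
2290 SecondaryNameNode
2375 ResourceManager
2460 NodeManager
2533 Jps'
check_daemons "$sample"
```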
Enter localhost:50070 in the browser to view the Hadoop cluster overview
10. Unzip the HBase installation package
tar -xzvf hbase-1.0.1.1-bin.tar.gz -C /Users/hadoop
11. Configure hbase-env.sh and hbase-site.xml.
The hbase-env.sh configuration is as follows:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_75.jdk/Contents/Home
export HBASE_MANAGES_ZK=true
The hbase-site.xml configuration is as follows:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/Users/hadoop/hbase-1.0.1.1/tmp</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>120000</value>
  </property>
</configuration>
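One detail worth checking here: hbase.rootdir must point at the same HDFS address as fs.default.name in core-site.xml, otherwise HBase cannot find the file system. A sketch of a consistency check over sample copies of the two values (the file locations and the sed extraction are illustrative; the values are the ones used in this guide):

```shell
# Extract the HDFS address from each <value> and compare them.
tmp=$(mktemp -d)
printf '<value>hdfs://localhost:9000</value>\n' > "$tmp/core-site.xml"
printf '<value>hdfs://localhost:9000/hbase</value>\n' > "$tmp/hbase-site.xml"
fs=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$tmp/core-site.xml")
root=$(sed -n 's|.*<value>\(hdfs://[^<]*\)/hbase</value>.*|\1|p' "$tmp/hbase-site.xml")
[ "$fs" = "$root" ] && echo "hbase.rootdir matches fs.default.name"
```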
12. Start HBase
Start Hadoop first, then start HBase:
start-all.sh
start-hbase.sh
If the installation is correct, the following three HBase-related processes should be running:
HMaster
HRegionServer
HQuorumPeer
13. Known issues
1) The installation package provided on the Hadoop website is built for 32-bit systems, so running it on a 64-bit machine produces the warning "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable". So far this has had no observable effect. To remove the warning, download the source package from the official website and compile it manually on the 64-bit machine.
2) The HBase processes terminate unexpectedly from time to time. Various fixes found on the web have not resolved this yet; for now, the only workaround is to restart HBase.
Installing pseudo-distributed Hadoop 2.6.0 and HBase 1.0.1.1 on Mac