Hadoop-HBase-Spark Single-Node Installation


0 External ports to open

50070 (HDFS NameNode UI), 8088 (YARN ResourceManager UI), 60010 (HBase Master UI), 7077 (Spark master)

1 Setting up SSH password-free login

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

chmod 0600 ~/.ssh/authorized_keys
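
To confirm the key setup worked, an SSH login to the local machine should now succeed without a password prompt:

ssh localhost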

2 Unpacking the installation packages

tar -zxvf /usr/jxx/scala-2.10.4.tgz -C /usr/local/

tar -zxvf /usr/jxx/spark-1.5.2-bin-hadoop2.6.tgz -C /usr/local/

tar -zxvf /usr/jxx/hbase-1.0.3-bin.tar.gz -C /usr/local/

tar -zxvf /usr/jxx/hadoop-2.6.0-x64.tar.gz -C /usr/local/
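
A quick sanity check that the four directories referenced throughout the rest of this guide now exist:

ls -d /usr/local/scala-2.10.4 /usr/local/spark-1.5.2-bin-hadoop2.6 /usr/local/hbase-1.0.3 /usr/local/hadoop-2.6.0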

3 Setting environment variables

vim /etc/profile

Add the following:

export JAVA_HOME=/usr/local/java/jdk1.7.0_79   # skip if a JDK is already configured

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH

export SCALA_HOME=/usr/local/scala-2.10.4

export PATH=$PATH:$SCALA_HOME/bin

export HADOOP_HOME=/usr/local/hadoop-2.6.0

export PATH=$PATH:$HADOOP_HOME/bin

export HBASE_HOME=/usr/local/hbase-1.0.3

export PATH=$PATH:$HBASE_HOME/bin

export SPARK_HOME=/usr/local/spark-1.5.2-bin-hadoop2.6

export PATH=$PATH:$SPARK_HOME/bin

Then run

source /etc/profile

or restart the machine.
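
To verify the environment, each tool should now resolve from a fresh shell (version numbers should match the unpacked releases):

java -version

scala -version

hadoop version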

4 Modifying the configuration

vim /usr/local/hadoop-2.6.0/etc/hadoop/hadoop-env.sh

Modify:

export JAVA_HOME=/usr/local/java/jdk1.7.0_79

vim /usr/local/hadoop-2.6.0/etc/hadoop/core-site.xml

core-site.xml (B250 is this machine's hostname; replace it with your own):

<property>

<name>fs.defaultFS</name>

<value>hdfs://B250:9000</value>

</property>
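
The hostname in fs.defaultFS must resolve on this machine. A minimal /etc/hosts sketch, assuming B250 is this host and 192.168.1.250 is a placeholder address:

192.168.1.250   B250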

vim /usr/local/hadoop-2.6.0/etc/hadoop/hdfs-site.xml

hdfs-site.xml

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:///disk/dfs/name</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:///disk/dfs/data</value>

</property>
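
The NameNode and DataNode directories configured above live under /disk; creating them ahead of time (with ownership for the user running Hadoop) avoids permission surprises at first start:

mkdir -p /disk/dfs/name /disk/dfs/data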

vim /usr/local/hadoop-2.6.0/etc/hadoop/yarn-site.xml

yarn-site.xml

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

mv /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml.template /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml

vim /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml

mapred-site.xml

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

vim /usr/local/hbase-1.0.3/conf/hbase-site.xml

hbase-site.xml

<property>

<name>hbase.rootdir</name>

<!-- HBase data directory in HDFS; host and port must match fs.defaultFS in core-site.xml -->

<value>hdfs://localhost:9000/hbase</value>

</property>

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

</property>

vim /usr/local/hbase-1.0.3/conf/hbase-env.sh

export JAVA_HOME=/usr/local/java/jdk1.7.0_79

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH

export SCALA_HOME=/usr/local/scala-2.10.4

export PATH=$PATH:$SCALA_HOME/bin

export HADOOP_HOME=/usr/local/hadoop-2.6.0

export PATH=$PATH:$HADOOP_HOME/bin

export HBASE_HOME=/usr/local/hbase-1.0.3

export PATH=$PATH:$HBASE_HOME/bin

export HBASE_MANAGES_ZK=true   # HBase starts and manages its own built-in ZooKeeper

mv /usr/local/spark-1.5.2-bin-hadoop2.6/conf/spark-env.sh.template /usr/local/spark-1.5.2-bin-hadoop2.6/conf/spark-env.sh

mv /usr/local/spark-1.5.2-bin-hadoop2.6/conf/spark-defaults.conf.template /usr/local/spark-1.5.2-bin-hadoop2.6/conf/spark-defaults.conf

mkdir /disk/spark

vim /usr/local/spark-1.5.2-bin-hadoop2.6/conf/spark-env.sh

export JAVA_HOME=/usr/local/java/jdk1.7.0_79

export SCALA_HOME=/usr/local/scala-2.10.4

export HADOOP_HOME=/usr/local/hadoop-2.6.0

export HBASE_HOME=/usr/local/hbase-1.0.3

export SPARK_HOME=/usr/local/spark-1.5.2-bin-hadoop2.6

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export SPARK_LOCAL_DIRS=/disk/spark

export SPARK_DAEMON_MEMORY=256m

export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Dspark.history.fs.logDirectory=/tmp/spark -Dspark.history.ui.port=18082"

export STANDALONE_SPARK_MASTER_HOST=localhost

vim /usr/local/spark-1.5.2-bin-hadoop2.6/conf/spark-defaults.conf

spark.master=spark://localhost:7077

spark.eventLog.dir=/disk/spark/applicationhistory

spark.eventLog.enabled=true

spark.yarn.historyServer.address=localhost:18082
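
Spark will not create the event-log directory named in spark.eventLog.dir, so it should exist before the first application runs:

mkdir -p /disk/spark/applicationhistory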

5 Initializing the Environment

Format the NameNode (run this only once; reformatting destroys existing HDFS metadata):

hdfs namenode -format

6 Starting the Services

Start HDFS

sh /usr/local/hadoop-2.6.0/sbin/start-dfs.sh
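
Since mapreduce.framework.name is set to yarn and port 8088 appears in the list from step 0, you will likely also want YARN running; this start command is an addition not in the original steps:

sh /usr/local/hadoop-2.6.0/sbin/start-yarn.sh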

Start HBase

sh /usr/local/hbase-1.0.3/bin/start-hbase.sh

Start Spark

sh /usr/local/spark-1.5.2-bin-hadoop2.6/sbin/start-all.sh
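
If everything started cleanly, jps should list the expected daemons: NameNode, DataNode, and SecondaryNameNode (HDFS); HMaster, HRegionServer, and HQuorumPeer (HBase, the quorum peer appearing because HBASE_MANAGES_ZK=true); Master and Worker (Spark standalone):

jps

The spark-defaults.conf above points at a history server on port 18082, which start-all.sh does not launch; starting it is an extra step not in the original guide:

sh /usr/local/spark-1.5.2-bin-hadoop2.6/sbin/start-history-server.sh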

7 Starting the services on boot

Add the following to a boot script such as /etc/rc.local (path assumed; any init mechanism that runs commands as root at startup will do):

su - root -c "sh /usr/local/hadoop-2.6.0/sbin/start-dfs.sh"

su - root -c "sh /usr/local/hbase-1.0.3/bin/start-hbase.sh"

su - root -c "sh /usr/local/spark-1.5.2-bin-hadoop2.6/sbin/start-all.sh"
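
Once everything is up, the web UIs on the ports from step 0 give a quick health check (replace B250 with your hostname; note that HBase 1.0 moved its default master UI port from 60010 to 16010, so check whichever responds):

curl http://B250:50070    # HDFS NameNode UI

curl http://B250:8088     # YARN ResourceManager UI

curl http://B250:60010    # HBase Master UI

curl http://B250:18082    # Spark history server UI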
