Three nodes in total. Install Spark right after Hadoop; the Spark download used here is the build without bundled Hadoop. Pay attention to the per-node configuration.
Hadoop Multi-node Installation
Environment:
Hadoop 2.7.2
Ubuntu 14.04 LTS
ssh-keygen
Java version 1.8.0
Scala 2.11.7
Servers:
master: 192.168.199.80 (hadoopmaster)
slave: 192.168.199.81 (hadoopslave1)
slave: 192.168.199.82 (hadoopslave2)
Install Java 8:
$ sudo add-apt-repository ppa:openjdk-r/ppa
$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk
$ sudo update-alternatives --config java
$ sudo update-alternatives --config javac
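To confirm which JDK is now active:
$ java -version
$ javac -version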
Add JAVA_HOME to ~/.bashrc
$ sudo vi ~/.bashrc
Add these lines at the end of .bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
Then source it:
$ source ~/.bashrc
Tips:
Don't forget that .bashrc is a hidden file in your home directory (you would not be the first to do an ls -l and think it is not there).
$ ls -la ~/ | more
Add Hosts
# vi /etc/hosts
Enter the following lines in the /etc/hosts file.
192.168.199.80 hadoopmaster
192.168.199.81 hadoopslave1
192.168.199.82 hadoopslave2
Set up SSH on every node
so that the nodes can reach each other without a password (do the same on all three nodes); see the sketch below. After testing the login to another node, return with:
$ exit
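The key-setup commands themselves are not preserved here; a minimal sketch, assuming the same user account exists on all three nodes and ssh-copy-id is available:
$ ssh-keygen -t rsa
$ ssh-copy-id hadoopmaster
$ ssh-copy-id hadoopslave1
$ ssh-copy-id hadoopslave2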
Install Hadoop 2.7.2 (to /opt/hadoop)
Download hadoop-2.7.2.tar.gz from the Apache Hadoop site.
hadoop-2.7.2-src.tar.gz is the source release, which you would need to build yourself.
$ tar xvf hadoop-2.7.2.tar.gz -C /opt
$ mv /opt/hadoop-2.7.2 /opt/hadoop
$ cd /opt/hadoop
Configuring Hadoop
core-site.xml
Open the core-site.xml file and edit it as shown below.
<configuration>
</configuration>
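The property values are not preserved in this copy. A minimal sketch of what usually goes between the <configuration> tags, assuming the NameNode runs on hadoopmaster on port 9000 and /opt/hadoop/tmp is used as the temporary directory:
  <property>
    <!-- assumed NameNode address; adjust to your master host and port -->
    <name>fs.defaultFS</name>
    <value>hdfs://hadoopmaster:9000</value>
  </property>
  <property>
    <!-- assumed scratch directory for temporary files -->
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>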
hdfs-site.xml
Open the hdfs-site.xml file and edit it as shown below.
<configuration>
</configuration>
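Again the original values are not preserved; a sketch for this two-slave cluster (the replication factor of 2 and the storage directories under /opt/hadoop are assumptions):
  <property>
    <!-- two DataNodes, so replicate each block twice -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <!-- assumed NameNode metadata directory -->
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <!-- assumed DataNode block-storage directory -->
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/hdfs/datanode</value>
  </property>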
mapred-site.xml
Open the mapred-site.xml file and edit it as shown below.
<configuration>
</configuration>
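The original entry is not preserved; on Hadoop 2.x the usual choice (an assumption here) is to run MapReduce on YARN:
  <property>
    <!-- run MapReduce jobs on YARN rather than the local framework -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>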
hadoop-env.sh
Open the hadoop-env.sh file and set JAVA_HOME.
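A sketch of the line to set, assuming the OpenJDK path from the PPA installed above:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64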
Installing Hadoop on Slave Servers
$ cd /opt
$ scp -r hadoop hadoopslave1:/opt/
$ scp -r hadoop hadoopslave2:/opt/
Configuring Hadoop on Master Server
$ cd /opt/hadoop
$ vi etc/hadoop/masters
hadoopmaster
$ vi etc/hadoop/slaves
hadoopslave1
hadoopslave2
Add HADOOP_HOME and PATH to ~/.bashrc
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Format the NameNode on the Hadoop Master
$ bin/hadoop namenode -format
Start Hadoop Services
$ cd /opt/hadoop/sbin
$ ./start-all.sh
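To check that the daemons came up, jps (shipped with the JDK) can be run on each node. Assuming the YARN setting sketched for mapred-site.xml above, the master should show NameNode, SecondaryNameNode and ResourceManager, and each slave should show DataNode and NodeManager:
$ jps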
Stop all the Services
$ cd /opt/hadoop/sbin
$ ./stop-all.sh
Install Spark 1.6 on top of the user-provided Hadoop
Step 1: Install Scala
Install Scala 2.11.7; download it from the website.
$ tar xvf scala-2.11.7.tgz
$ mv scala-2.11.7/ /usr/opt/scala
Set PATH for Scala in ~/.bashrc
$ sudo vi ~/.bashrc
export SCALA_HOME=/usr/opt/scala
export PATH=$PATH:$SCALA_HOME/bin
Download Spark 1.6 from Apache server
Install Spark
$ mv spark-1.6.0-bin-without-hadoop/ /opt/spark
Set up the environment for Spark
$ sudo vi ~/.bashrc
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin
Add entries to the Spark configuration
$ cd /opt/spark/conf
$ cp spark-env.sh.template spark-env.sh
$ vi spark-env.sh
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
Add slaves to the configuration
$ cd /opt/spark/conf
$ cp slaves.template slaves
$ vi slaves
hadoopslave1
hadoopslave2
Run Spark
$ cd /opt/spark/bin
$ spark-shell
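Because conf/slaves was filled in above, Spark can also be started as a standalone cluster; a sketch, assuming SPARK_HOME=/opt/spark and the Spark master running on hadoopmaster with the default port 7077:
$ /opt/spark/sbin/start-all.sh
$ /opt/spark/bin/spark-shell --master spark://hadoopmaster:7077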
When reprinting, please include the original address: http://www.cnblogs.com/tonylp/