HDFS Environment Setup
- Download the latest compiled Hadoop tar package from http://hadoop.apache.org/releases.html
- Decide which machines will act as the HDFS NameNode and DataNodes, and write the IP-address-to-hostname mapping for every node into the /etc/hosts file on each machine.
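As a sketch, the /etc/hosts entries on each machine might look like the following (the IP addresses and hostnames here are hypothetical placeholders):

```
192.168.1.10  namenode
192.168.1.11  datanode1
192.168.1.12  datanode2
```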
Confirm that the NameNode can connect to each DataNode via SSH without a password.
Execute the following steps: (1) Run ssh-keygen -t rsa to generate an SSH key pair. (2) Append the contents of the NameNode's ~/.ssh/id_rsa.pub public key file to the ~/.ssh/authorized_keys file on each DataNode. (3) Test the SSH connection from the NameNode to each DataNode; it should now succeed without a password prompt.
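The steps above can be sketched with the following commands, run on the NameNode (the user name and the hostname datanode1 are hypothetical placeholders; repeat the copy step for each DataNode):

```
# Generate an RSA key pair (accept the defaults, empty passphrase)
ssh-keygen -t rsa

# Append the public key to the DataNode's ~/.ssh/authorized_keys
ssh-copy-id user@datanode1

# Verify: this should log in without prompting for a password
ssh user@datanode1 hostname
```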
Configure Hadoop's configuration files.
(1) Set HADOOP_HOME, JAVA_HOME, and HADOOP_CONF_DIR as environment variables, with HADOOP_CONF_DIR pointing to $HADOOP_HOME/etc/hadoop.
(2) Edit the *-site.xml files under the HADOOP_CONF_DIR directory, following the official guide: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
(3) Distribute the configured Hadoop package to every NameNode and DataNode machine, and set the environment variables accordingly on each.
(4) Then, on the NameNode, start the shell scripts under the sbin/ directory as described in the official guide linked above.
(5) At this point, run the jps command on each machine; the node processes should all be up.
(6) Visit port 50070 on the NameNode; you should see the corresponding node information.
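For example, a minimal core-site.xml under HADOOP_CONF_DIR might look like this sketch (the hostname namenode is a hypothetical placeholder; port 9000 is a commonly used choice for fs.defaultFS):

```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:9000</value>
  </property>
</configuration>
```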
Spark Environment Setup
Spark is started in standalone mode, and its storage can rely on the HDFS file system built above. The Spark standalone setup proceeds as follows:
(1) Download the latest compiled Spark tar package from the official site: http://spark.apache.org/
(2) After extracting it, configure it according to the official guide: http://spark.apache.org/docs/latest/spark-standalone.html
(3) Note that the Spark installation package must sit at the same Linux filesystem path on the master node and all worker nodes.
(4) In the conf/slaves file on the master node, fill in the IP address of each worker node, one per line.
(5) Run sbin/start-master.sh and sbin/start-slaves.sh respectively.
(6) At this point you should see a Master process running on the master node and a Worker process on each worker node.
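Steps (4) and (5) can be sketched as follows, run from the Spark directory on the master node (the worker IP addresses are hypothetical placeholders):

```
# conf/slaves: one worker IP address or hostname per line
cat > conf/slaves <<'EOF'
192.168.1.11
192.168.1.12
EOF

# Start the master, then all workers listed in conf/slaves
sbin/start-master.sh
sbin/start-slaves.sh

# Verify with jps: the master node should show a Master process,
# and each worker node a Worker process
jps
```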
Caveat
When submitting a Spark job with the command below, note that if the application jar is a local file (i.e., not uploaded to the HDFS file system, and accessed with the file:// local-file protocol, e.g. file:/xxxx.jar), the jar must exist at exactly the same path on every node. If it has already been uploaded to HDFS, specify the HDFS path instead, for example: hdfs://xxxx.jar
./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]
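As a concrete illustration, here is a hypothetical submission of the SparkPi example bundled with Spark, assuming a standalone master at spark://master:7077 (the hostname is a placeholder, and the examples jar path and version vary by Spark release):

```
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master:7077 \
  --deploy-mode client \
  --conf spark.executor.memory=1g \
  examples/jars/spark-examples_2.12-3.0.0.jar \
  100
```

Because deploy-mode is client here, the jar path is resolved on the submitting machine; with a cluster deploy mode, the same-path-on-every-node (or hdfs://) requirement above applies.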
Spark Standalone and HDFS system environment setup