For more than 90% of people who want to learn Spark, building a Spark cluster is one of the biggest difficulties. To remove these obstacles, Jia Lin breaks the cluster build into four steps that start from scratch, assume no prior knowledge, cover every detail of the operation, and end with a complete Spark cluster.
Building a classic Spark cluster from scratch:
Step 1: Build a Hadoop standalone and pseudo-distributed environment;
Step 2: Build a distributed Hadoop cluster;
Step 3: Build a distributed Spark cluster;
Step 4: Test the Spark cluster.
This article covers the first step in building the classic Spark cluster: setting up, from scratch, a Hadoop standalone and pseudo-distributed development environment. It involves:
Preparing the basic software required by Hadoop;
Installing each piece of software;
Configuring Hadoop standalone mode and running the WordCount example (the invocation is sketched after this list);
Configuring Hadoop pseudo-distributed mode and running the WordCount example.
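As a preview, once Hadoop 1.1.2 is unpacked, running the bundled WordCount job comes down to a single command. This is a minimal sketch assuming the default layout of the Hadoop 1.1.2 distribution; the input and output directory names are placeholders:
# Standalone mode: input/ holds local text files; output/ must not exist yet
bin/hadoop jar hadoop-examples-1.1.2.jar wordcount input output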
Step 1: Prepare the basic software required by Hadoop
Our development environment builds Hadoop on top of Windows 7, so we need the VMware virtual machine software, an Ubuntu ISO image file, the Java JDK, the Eclipse IDE, and the Hadoop installation package.
1. VMware virtual machine: here we use VMware Workstation 9.0.2 for Windows, downloaded from https://my.vmware.com/cn/web/vmware/details?downloadGroup=WKST-902-WIN&productId=293&rpId=3526, as shown below:
[Figure 1: VMware Workstation 9.0.2 download page]
Save the downloaded file locally, as shown below:
[Figure 2: VMware Workstation installer saved locally]
Notice the additional keys.txt file, which contains the serial number required for the VMware installation; you need to obtain it separately from the Internet.
2. Ubuntu ISO image file: here we use ubuntu-12.10-desktop-i386, downloaded from http://www.ubuntu.org.cn/download/desktop/alternative-downloads, as shown below:
[Figure 3: Ubuntu 12.10 desktop (i386) download page]
After downloading, save it on the local machine, as shown below:
[Figure 4: Ubuntu ISO saved locally]
3. Java JDK: here we use the latest jdk-7u60-linux-i586.tar.gz, downloaded from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html, as shown below:
[Figure 5: Oracle JDK 7u60 download page]
Click Download and save the archive into the Ubuntu system, as shown below:
[Figure 6: JDK archive saved in the Ubuntu system]
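As a preview of the installation step, unpacking the JDK archive and pointing JAVA_HOME at it might look like the following minimal sketch; the /usr/lib/java install directory is an assumption for illustration, not necessarily the exact path used later in this series:
# Unpack the JDK into an assumed install directory
sudo mkdir -p /usr/lib/java
sudo tar -xzf jdk-7u60-linux-i586.tar.gz -C /usr/lib/java
# Make JAVA_HOME and the JDK tools available in new shells
echo 'export JAVA_HOME=/usr/lib/java/jdk1.7.0_60' >> ~/.bashrc
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
java -version   # should report java version "1.7.0_60"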
4. Hadoop: download the latest stable release, hadoop-1.1.2-bin.tar.gz, from the official mirror at http://mirrors.cnnic.cn/apache/hadoop/common/stable/ and save it locally:
[Figure 7: Apache Hadoop stable release download page]
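Again as a preview, a minimal sketch of unpacking Hadoop and checking that it runs might look like this; using /usr/local as the install location is an assumption for illustration:
# Unpack the Hadoop release (JAVA_HOME must be set for the version check)
sudo tar -xzf hadoop-1.1.2-bin.tar.gz -C /usr/local
cd /usr/local/hadoop-1.1.2
bin/hadoop version   # should print Hadoop 1.1.2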
This article is from the Spark Asia Pacific Research Institute blog; please be sure to keep this source: http://rockyspark.blog.51cto.com/2229525/1551494
Spark Tutorial: Building a Spark Cluster (Part 1)