Environment Configuration
- Operating system: CentOS 6.5
- JDK version: 1.7.0_67
- Hadoop cluster version: CDH 5.3.0
Installation Process
1. Install R
yum install -y R
2. Install curl-devel (very important! Without it, the RCurl package cannot be installed, and without RCurl, devtools cannot be installed.)
yum install -y curl-devel
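A quick way to confirm that curl-devel actually landed is to look for curl-config, the helper script that RCurl's build runs to locate libcurl (a minimal sketch; the exact message RCurl prints when it is missing may differ):

```shell
# curl-config ships with curl-devel; R's RCurl package runs it during
# installation to find the libcurl headers and link flags.
if command -v curl-config >/dev/null 2>&1; then
  status="curl-devel OK ($(curl-config --version))"
else
  status="curl-devel missing - run: yum install -y curl-devel"
fi
echo "$status"
```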
3. Set the required environment variables (very important! These must match the Hadoop and YARN versions of your cluster; otherwise, communicating with HDFS will fail with a Hadoop connector version-mismatch error.)
vi /etc/profile
...
export USE_YARN=1
export SPARK_VERSION=1.1.0
export SPARK_YARN_VERSION=2.5.0-cdh5.3.0
export SPARK_HADOOP_VERSION=2.5.0-cdh5.3.0
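Because a typo in these variables only surfaces later as a version-mismatch error, it is worth reloading the profile and printing them back before building (a small sketch; the variable names are the ones exported in step 3):

```shell
# Reload the profile so the exports take effect in the current shell,
# then print each build variable so typos are caught before compiling.
[ -r /etc/profile ] && . /etc/profile
for v in USE_YARN SPARK_VERSION SPARK_YARN_VERSION SPARK_HADOOP_VERSION; do
  eval "val=\${$v}"
  echo "$v=${val:-(unset)}"
done
```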
4. Enter the R command line and install the R packages (the last step, installing SparkR, pulls in many dependent packages; the process is very long and may need several retries to succeed).
install.packages("RCurl")
install.packages("devtools")
library(devtools)
install_github("amplab-extras/SparkR-pkg", subdir="pkg")
5. Done, installed! Now read a file in HDFS with SparkR:
library(SparkR)
sc <- sparkR.init(master="local", "RWordCount")
lines <- textFile(sc, "hdfs://quickstart.cloudera:8020/test/test.txt")
words <- flatMap(lines, function(line) { strsplit(line, " ")[[1]] })
wordCount <- lapply(words, function(word) { list(word, 1L) })
counts <- reduceByKey(wordCount, "+", 2L)
output <- collect(counts)
for (count in output) {
  cat(count[[1]], ": ", count[[2]], "\n")
}
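Before pointing the job at HDFS, you can sanity-check the expected counts with plain POSIX tools on a local copy of the input (a sketch; the sample contents of test.txt below are made up for illustration):

```shell
# Create a small sample input locally (the HDFS path in the example
# above is assumed to hold a file like this one).
printf 'hello world\nhello sparkr\n' > test.txt

# Tokenize on single spaces, mirroring strsplit(line, " ") in the R
# code, then count occurrences of each word (one "count word" per line).
tr ' ' '\n' < test.txt | sort | uniq -c
```

Once the counts look right, upload the file (e.g. hadoop fs -put test.txt /test/test.txt) and the SparkR job should report the same numbers.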
Resources:
- Official SparkR documentation
- SparkR installation steps
- Install and run SparkR on CentOS