Discover Cloudera Hadoop installation: articles, news, trends, analysis, and practical advice about Cloudera Hadoop installation on alibabacloud.com.
Welcome to Ubuntu 12.10 (GNU/Linux 3.2.0-29-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Last login: Sun Apr 21 11:16:27 2013 from daniel-optiplex-320.local
4. hadoop Installation
A. Download hadoop
Click Open Link
B. Decompress hadoop
tar xzvf hadoop-<version>.tar.gz
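A minimal sketch of steps A and B, assuming the hadoop-1.1.2 release mentioned later on this page and the Apache archive mirror (both the version and the URL are assumptions, not the link from the original post):

# download a Hadoop release and unpack it in the current directory
wget https://archive.apache.org/dist/hadoop/common/hadoop-1.1.2/hadoop-1.1.2.tar.gz
tar xzvf hadoop-1.1.2.tar.gz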
Inkfish original; please do not reprint for commercial purposes, and indicate the source when reprinting (http://blog.csdn.net/inkfish). Pig is a project that Yahoo! donated to Apache and is currently in the Apache Incubator stage, with version number v0.5.0. Pig is a Hadoop-based, large-scale data analysis platform that provides a SQL-like language called Pig Latin, which translates data analysis requests into a series of MapReduce jobs.
1. Extract the JDK tar package (use the full file name of the tar package).
2. Configure the environment variables: edit the configuration file with vi /etc/profile and add the following:
export JAVA_HOME=/java/jdk1.8.0_73
export JRE_HOME=$JAVA_HOME/jre
export CLASS_HOME=$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
Run source /etc/profile to reload the profile, then run java -version to check whether the installation succeeded. Hadoop user trust: 1.
Create a hadoop folder under the /usr directory and grant the hadoop user permission (on the master):
[hadoop@master usr]$ sudo mkdir hadoop
[hadoop@master usr]$ ls -al
total 156
drwxr-xr-x. 2 root root 4096 Jul 31 00:17 hadoop
[hadoop@master usr]$ sudo chown -R hadoop:hadoop hadoop
[hadoop@master usr]$ ls -al
total 156
drwxr-xr-x. 2 hadoop hadoop 4096 Jul 31 00:17 hadoop
Install hadoop under /usr.
Original address: http://www.linuxidc.com/Linux/2014-03/99055.htm
We use MapReduce for data analysis. When the business logic is more complex, using MapReduce directly becomes cumbersome: on one hand you need to do a lot of preprocessing or transformation of the data to fit the MapReduce processing model; on the other hand, writing a MapReduce program and then publishing and running the job is time-consuming. The appearance of Pig makes up for this shortcoming well. Pig lets you focus on the data and the analysis itself.
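As a hedged illustration of that point (the file name, field layout, and output path below are assumptions, not taken from the original article), a few lines of Pig Latin run in local mode replace what would otherwise be a hand-written MapReduce job:

# write a small Pig Latin script and run it in local mode
cat > wordcount.pig <<'EOF'
raw     = LOAD 'input.txt' USING PigStorage('\t') AS (word:chararray, cnt:int);
grouped = GROUP raw BY word;
totals  = FOREACH grouped GENERATE group, SUM(raw.cnt);
STORE totals INTO 'output';
EOF
pig -x local wordcount.pig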
Hadoop 2.7.1 high-availability installation and configuration based on QJM
1. Modify the Host Name and hosts file
10.205.22.185 nn1 (active): namenode, resourcemanager, datanode, zk, hive, sqoop
10.205.22.186 nn2 (standby): namenode, resourcemanager, ...
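A hedged sketch of this step on the first node (the hostnamectl command assumes a systemd-based distribution; on older CentOS you would edit /etc/sysconfig/network instead):

# set the hostname, then append both entries to /etc/hosts
hostnamectl set-hostname nn1
cat >> /etc/hosts <<'EOF'
10.205.22.185 nn1
10.205.22.186 nn2
EOF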
Now that the JDK is installed, configure the environment variables below.
4.3 Open /etc/profile (vim /etc/profile)
Add the following content at the end:
JAVA_HOME=/usr/java/jdk1.7.0_40 (adjust the version number 1.7.0_40 to match the version you downloaded)
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
4.4 source /etc/profile
4.5 Verify whether the installation succeeded: java -version
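A quick check, assuming the jdk1.7.0_40 path used above (yours may differ):

# reload the profile and confirm the JDK is picked up
source /etc/profile
echo $JAVA_HOME
java -version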
=" border-top:0px; border-right:0px; Background-image:none; border-bottom:0px; padding-top:0px; padding-left:0px; margin:0px; border-left:0px; padding-right:0px "border=" 0 "alt=" clipboard "src=" http://s3.51cto.com/wyfs02/M00/6B/F3/ Wkiol1u7nkgbqs9gaageg2yscne517.jpg "" 425 "height=" 508 "/> Specific installation links can be in reference to the steps, but there are a few points to note. Host and Slave Unified create a dedicated user to run
...can be backed up); its main job is to help the NameNode (NN) merge the edits log, which reduces NN startup time. When the SecondaryNameNode (SNN) performs the merge is determined by configuration: fs.checkpoint.period (default 3,600 seconds) sets the interval, and fs.checkpoint.size sets the edits log size that triggers a checkpoint.
DataNode (DN): stores the data blocks. When the DN thread starts, it reports its block information to the NN, and it maintains contact by sending a heartbeat to the NN every 3 seconds; if the NN receives no heartbeat from a DN for 10 minutes, it considers that DN lost and copies the blocks it held to other DNs.
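One way to confirm these intervals on a running cluster, assuming a Hadoop 2.x client on the PATH (the 1.x property names differ slightly, e.g. fs.checkpoint.period as mentioned above):

# print the effective heartbeat and checkpoint intervals from the loaded configuration
hdfs getconf -confKey dfs.heartbeat.interval
hdfs getconf -confKey dfs.namenode.checkpoint.period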
Premise: make sure iptables is off and SELinux is disabled.
1. Prepare the hardware: 1 NameNode and 3 DataNodes
Namenode 192.168.137.100
Datanode1 192.168.137.101
Datanode2 192.168.137.102
Datanode3 192.168.137.103
2. Create a hadoop user on all 4 machines (other user names also work): useradd hadoop
3. Install JDK 1.6 on all 4 machines; after installation JAVA_HOME is /JDK. Configure the environment variables: vim /etc/bashrc (a sketch of these preparation steps follows below)
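A hedged sketch of the preparation above, assuming CentOS 6-style service management (the firewall commands differ on systemd distributions):

# stop the firewall and SELinux for the duration of the install
service iptables stop && chkconfig iptables off
setenforce 0          # also set SELINUX=disabled in /etc/selinux/config to survive reboots
# create the hadoop user and point JAVA_HOME at the /JDK directory from step 3
useradd hadoop
cat >> /etc/bashrc <<'EOF'
export JAVA_HOME=/JDK
export PATH=$JAVA_HOME/bin:$PATH
EOF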
"Pseudo" fractional installation of Hadoop 2.6 compared to "full" fractional installation, 99% of the operation is the same, the only difference is not to configure the slaves file, here are a few key configurations:(Install JDK, create user, set SSH password, set environment variable these preparations, you can refer to the
Hadoop pseudo-distributed mode is generally used for learning and testing; it is generally not used in production environments. (If anything here is wrong, corrections are welcome.)
1. installation environment
Install Linux in a virtual machine on Windows; CentOS is used as the example. The Hadoop version is hadoop-1.1.2.
2. configure a linux Virtual Machine
2.1 Make sure that the NIC VMnet1
equivalent to Red Hat AS4.
1.2 Installation version: for this installation we choose CentOS 6.0. The following introduces this version from a few aspects.
Integrates kernel-based virtualization. CentOS 6 integrates kernel-based virtualization, fully integrating the KVM hypervisor into the kernel. This feature helps CentOS 6.0 users easily migrate virtual machines between hosts, and more.
1. Install Scala
a. Download address: http://www.scala-lang.org/download/ ; I chose to install the latest version, scala-2.12.0.tgz.
b. Upload the archive to the /usr/local directory.
c. Decompress it: tar -zxvf scala-2.12.0.tgz
d. Create a soft link: ln -s scala-2.12.0 scala
e. Modify the configuration file: vim /etc/profile
#add by Lekko
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
f. After the configuration is complete, make it take effect: source /etc/profile
g. To see whether it succeeded: scala -version (a condensed sketch of steps a-g follows below)
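The same steps condensed into one hedged shell sketch (the 2.12.0 version and the /usr/local paths come from the list above; nothing else is assumed):

# unpack Scala, link it, extend /etc/profile, and verify the version
cd /usr/local
tar -zxvf scala-2.12.0.tgz
ln -s scala-2.12.0 scala
echo 'export SCALA_HOME=/usr/local/scala' >> /etc/profile
echo 'export PATH=$PATH:$SCALA_HOME/bin' >> /etc/profile
source /etc/profile
scala -version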
Configuration environment: hadoop-1.2.1, MyEclipse, CentOS 6.5. There is a lot of information on the web about installing and configuring the Hadoop Eclipse plugin, but very little about how to configure Hadoop on MyEclipse. Since my computer only has MyEclipse installed, I am recording here how to install and configure it.
sudo apt-get install eclipse
Opening Eclipse after installation prompts an error:
An error has occurred. See the log file /home/pengeorge/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/1342406790169.log.
Review the error log and then resolve it. Opening the log file shows the following error:
!SESSION 2012-07-16 10:46:29.992 -----------------------------------------------
eclipse.buildId=I20110613-1736
java.version=1.7.0_05
java.vendor=Oracle Corporation