Prerequisites: JDK and Hadoop have been installed on the Linux system. Installation environment for this article: 1. Arch Linux; 2. Hadoop 1.0.1 in local pseudo-distributed mode; 3. Eclipse 4.5. 1. Download the Linux version of Eclipse (http://www.eclipse.org/downloads/?osType=l
Continuing directly from the previous article, set up the Hadoop-related environment, starting with JDK installation: 1. Download: either of the following two addresses, found on the Internet, can be downloaded directly: http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz or http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.rpm 2. Install: upload the downloaded
Immediately after completing the installation and startup of Hadoop, it is time to run a first example, and the simplest and most straightforward one is the WordCount example, Hadoop's "Hello World". Following this blog: http://xiejianglei163.blog.163.com/blog/static/1247276201443152533684/ First create a folder containing two files; the directory location is arbitrary, with the following structure: examples --file1.txt --file2.txt The contents of the
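The input setup described above can be sketched as follows (a minimal sketch; the file contents shown are illustrative, not taken from the article):

```shell
# Create the input folder and the two sample files for WordCount
mkdir -p examples
printf 'hello world\nhello hadoop\n' > examples/file1.txt
printf 'hadoop world\n' > examples/file2.txt

# Inspect the input that the WordCount job will consume
cat examples/file1.txt examples/file2.txt
```

Once HDFS is running, this folder would typically be uploaded with `hadoop fs -put examples input` before running the example.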
the metadata first. 2. Start the three JournalNode processes: hadoop-daemon.sh start journalnode 3. Format the NameNode. Performed on one NameNode: hdfs namenode -format This step will connect to the JournalNodes, and the JournalNodes will be formatted as well. 4. Start HDFS on the NameNode that you just formatted: cd $HADOOP_HOME/sbin; ./start-dfs.sh 5. Perform on the other NameNode: hdfs namenode -bootstrapStandby 6. Verify manual failover. Execute on any one of the NameNodes: h
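The numbered steps above correspond to roughly this command sequence (a sketch assuming an HA setup with two NameNodes, here called nn1 and nn2, and three JournalNodes; the host and service IDs are illustrative):

```shell
# On each of the three JournalNode hosts:
hadoop-daemon.sh start journalnode

# On the first NameNode (nn1): format, then start HDFS
hdfs namenode -format
cd $HADOOP_HOME/sbin && ./start-dfs.sh

# On the second NameNode (nn2): copy over nn1's formatted metadata
hdfs namenode -bootstrapStandby

# Verify manual failover from any NameNode (assumes service IDs nn1/nn2)
hdfs haadmin -failover nn1 nn2
```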
/usr/local/hadoop/dfs/data: this folder is locked, that is, its access permissions are wrong. The workaround is to modify the folder permissions: chmod g-w /usr/local/hadoop/dfs/data 2. Error in log: java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/dfs/data: namenode clusterID = CID-C1BF781C-D589-46D7-A246-7F64A6F24BC1; datanode clusterID = CID-B1EE6A5B-
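A common fix for the "Incompatible clusterIDs" error is to make the datanode's clusterID in its VERSION file match the namenode's (or, if the data is disposable, simply wipe the data directory and let it re-register). The sketch below simulates this on a scratch copy of the VERSION file; the real path would be /usr/local/hadoop/dfs/data/current/VERSION, and the IDs are illustrative:

```shell
# Simulate a datanode VERSION file in a scratch directory
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current"
echo 'clusterID=CID-old-datanode-id' > "$DATA_DIR/current/VERSION"

# Overwrite the datanode clusterID with the namenode's value
NAMENODE_CID='CID-C1BF781C-D589-46D7-A246-7F64A6F24BC1'
sed -i "s/^clusterID=.*/clusterID=$NAMENODE_CID/" "$DATA_DIR/current/VERSION"
grep clusterID "$DATA_DIR/current/VERSION"
```

After editing the real file, restart the datanode so it re-reads the VERSION file.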
modified; it can be configured on one machine before distribution to each node. hadoop-0.20.2/conf/masters holds the IP of the master node; hadoop-0.20.2/conf/slaves stores the IPs of the slave nodes. 4) hadoop-0.20.2/conf/core-site.xml configures the HDFS path, temp directory and other information; it can be modified as shown in the configuration. 5) hadoop-0.20.2/conf/mapred-site.xml configures the MapReduce
</property>
</configuration>

mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

After saving, you may add Hadoop to the environment variables or leave it out. Then complete the SSH passwordless login configuration mentioned earlier. Finally, start Hadoop with bin/start-all.sh under the Hadoop directory. At this t
). Save: press the "Esc" key first, then ":wq". (4). Do not save: press ":q!".
9. Modify File Permissions:
chmod (for example: $ chmod u+w /etc/profile; $ chmod 777 /etc/profile gives the file all permissions). The file permission bits are: r (readable), w (writable), x (executable).
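The chmod usage above can be tried safely on a scratch file instead of /etc/profile:

```shell
# Demonstrate chmod on a throwaway file rather than /etc/profile
f=$(mktemp)
chmod u+w "$f"   # grant the owner write permission (symbolic form)
chmod 777 "$f"   # rwxrwxrwx: all rights for owner, group, and others (numeric form)
ls -l "$f"       # the first column shows the resulting r/w/x bits
```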
10. Rename the directory or file:
$ mv (for example: $ mv software softwares)
11. Move the directory or file:
$ mv (for example: $ mv software/hadoop
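Both uses of mv above (renaming and moving) can be sketched in a scratch directory:

```shell
# Work in a scratch directory so nothing real is touched
cd "$(mktemp -d)"
mkdir software
mv software softwares   # rename a directory
mkdir -p hadoop
mv softwares hadoop/    # move the renamed directory into another one
ls hadoop
```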
Prerequisites: Hadoop is written in Java, so install Java first. For installing the JDK on Ubuntu see: http://blog.csdn.net/microfhu/article/details/7667393 The Hadoop version I downloaded is 2.4.1, which requires at least JDK 6. Linux is the only supported production environment; Unix, Windows, or Mac OS can be used as a development environment. Install
Note: For more information on how to import the Hadoop source into Eclipse see http://pan.baidu.com/S/1EQCCDCM
First, the Hadoop configuration software (my computer runs Windows 7 Ultimate, 64-bit):
1. VMware dedicated CentOS image (CentOS is one of the Linux operating systems)
2. VMware Workstation 10
3. hadoop
FileOutputFormat.setOutputPath(job, new Path(OUT_PATH));
// Submit the job to the JobTracker and run it
job.waitForCompletion(true); }
}
1. Select the program entry to be packaged in the Eclipse project and click the right button to select Export
2. Click the jar file option in the Java folder
3. Select the Java files to be packaged into the jar and the output directory of the jar package
4. Click Next
5. Select the entry of the program, click Finish
6. Copy the jar package to the
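After copying the jar to the server, it would typically be run like this (the jar name, main class, and HDFS paths below are assumptions for illustration, not taken from the article):

```shell
# Upload the local input folder to HDFS, then run the packaged job
hadoop fs -put examples input
hadoop jar wordcount.jar WordCount input output

# Inspect the result
hadoop fs -cat output/part-r-00000
```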
Click the Xshell Xftp link.
In /var/www/: rpm -ivh jdk-7u67-linux-x64.rpm # check the version: java -version
Eight: Modify the hosts
This is to be changed in every system.
# Edit the hosts file: /etc/hosts
Nine: SSH settings
On h30, check whether SSH is installed; if it is, continue without installing it.
rpm -qa | grep ssh
Create the .ssh directory. Viewing the file listing, a first letter of d indicates a directory, followed by the permission bits, then details such as the creator, the average per
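The directory creation and listing described above can be sketched in a scratch location (the real directory would be ~/.ssh, which SSH requires to be readable only by its owner):

```shell
# Create a scratch ".ssh" directory and inspect its listing
home=$(mktemp -d)
mkdir "$home/.ssh"
chmod 700 "$home/.ssh"   # only the owner may read, write, and enter
ls -ld "$home/.ssh"      # leading 'd' marks a directory, then the permission bits
```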
Shell script -- run Hadoop from the Linux terminal -- the shell script is saved as test.sh; the Java file is wc.java. [Note: it will be packaged into 1.jar, the main class is wc, the input directory on HDFS is input, and the output directory on HDFS is output.] [Note: the input directory and output directory are not...
The first step: Hadoop requires Java support, so you need to install Java first. To see whether Java is already installed, run: java -version The JRE (Java Runtime Environment) is the environment needed to run Java-based applications normally. Installation command: apt-get install default-jre To see whether the installation succeeded, run java -version again. If text similar to the following appears, the installation was successful: java version "1.7.0_131" Step two: Download
1: Environment preparation: 1 Linux server, the Hadoop installation package (downloaded from the Apache official website), JDK 1.6+. 2: Install the JDK, configure the environment variables (/etc/profile), and test with java -version before the next step. 3: Configure SSH password-free login: cd ~; ssh-keygen -t rsa generates a key pair, located in the ~/.ssh directory; cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys copies the id_rsa.pub public key file to
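Step 3 above can be sketched as follows, generating into a scratch directory instead of the real ~/.ssh so nothing is overwritten:

```shell
# Generate an RSA key pair with an empty passphrase into a scratch directory
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -q -f "$keydir/id_rsa"

# Authorize the public key, as the article's cp does for ~/.ssh
cat "$keydir/id_rsa.pub" >> "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"
ls "$keydir"
```

With the real ~/.ssh, `ssh localhost` should then log in without prompting for a password.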
The installation method described below installs over the network; if you need to connect the virtual machine to the network, please refer to: Linux: networking a VMware-hosted Linux 14.04 virtual machine (static IP) connected in bridged mode.
Environment:
OS: Linux Ubuntu 14.04 Server x64;
Server list:
192.168.1.200 Master
192.168.1.201 Node1
192.168.1.202 Node2
192.168.1.203 Node3
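With the server list above, the /etc/hosts additions would look like this on every node (written to a scratch file here rather than the real /etc/hosts):

```shell
# Sketch the /etc/hosts entries for the cluster into a scratch file
hf=$(mktemp)
cat > "$hf" <<'EOF'
192.168.1.200 Master
192.168.1.201 Node1
192.168.1.202 Node2
192.168.1.203 Node3
EOF
cat "$hf"
```

Appending these lines to the real /etc/hosts on each machine lets the nodes reach one another by name.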
Installing the SSH Service
To te