Use the Windows shared folder mode to share the file to the Linux platform; the share is mounted at /mnt/hdfs/. Create the target directory with mkdir /usr/java, then extract the JDK with tar -zxvf jdk-7u60-linux-i586.tar.gz -C /usr/java. Add Java to the environment variables: open the profile with vim /etc/profile and append the following at the end of the file: export JAVA_HOME=/usr/java/jdk1.7.0_60 and export PATH=$PATH:$JAVA_HOME/bin. Refresh the configuration with source /etc/profile. Third, install
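A consolidated sketch of the steps above, assuming the Windows share is already mounted at /mnt/hdfs/ and the tarball name matches jdk-7u60-linux-i586.tar.gz (adjust the paths and file names to your setup):

# extract the JDK from the mounted share into /usr/java
mkdir -p /usr/java
tar -zxvf /mnt/hdfs/jdk-7u60-linux-i586.tar.gz -C /usr/java
# append the Java environment variables to /etc/profile
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_60' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
# reload the profile and verify
source /etc/profile
java -version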
On login the terminal displays:
Welcome to Ubuntu 12.10 (GNU/Linux 3.2.0-29-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Last login: Sun Apr 21 11:16:27 2013 from daniel-optiplex-320.local
4. Hadoop installation
A. Download hadoop
Click Open Link
B. Decompress hadoop
tar xzvf hadoop
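A short sketch of the download and extract step; the version number and mirror URL below are placeholders (take them from the Apache download page behind the link above), and HADOOP_HOME assumes the tarball was unpacked in the home directory:

# download the release tarball (version and URL are placeholders)
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
# unpack and point HADOOP_HOME at the unpacked directory
tar xzvf hadoop-2.4.1.tar.gz
export HADOOP_HOME=$HOME/hadoop-2.4.1
export PATH=$PATH:$HADOOP_HOME/bin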
Hadoop series, part 1: The background of big data storage and processing platforms: http://mageedu.blog.51cto.com/4265610/1102191
Hadoop series, part 2: Big data, data processing models, and MapReduce: http://mageedu.blog.51cto.com/4265610/1105727
Hadoop series, part 3: Functional programming languages and MapReduce: http://mageedu.blog.51cto.c
Building a Hadoop environment on Linux. 1. Install the JDK. (1) Download and install the JDK: make sure the computer is networked, then enter the following command at the command line to install the JDK: sudo apt-get install sun-java6-jdk. (2) Configure the Java environment: open /etc/profile and enter the following content at the end of the file: export JAVA_HOME=(Java installation directory) export CLASSPATH = ".: $JAVA
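The snippet is cut off after the CLASSPATH export, so the lines below are only the typical pattern for this step, not the author's exact values; the install path for sun-java6-jdk is an assumption:

sudo apt-get install sun-java6-jdk
# typical /etc/profile additions; adjust JAVA_HOME to the actual install directory
export JAVA_HOME=/usr/lib/jvm/java-6-sun          # assumed default path for this package
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$PATH"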
Extract the JDK tar package using its full file name. 2. Configure environment variables: modify the configuration file with the vi /etc/profile command and add the following: export JAVA_HOME=/java/jdk1.8.0_73, export JRE_HOME=$JAVA_HOME/jre, export CLASS_HOME=$JAVA_HOME/lib, export PATH=$PATH:$JAVA_HOME/bin. Use source /etc/profile to update the profile, then run java -version to check whether the installation succeeded. Hadoop user trust: 1.
Prerequisites: Hadoop is written in Java, so install Java first. For installing the JDK on Ubuntu, see: http://blog.csdn.net/microfhu/article/details/7667393. The Hadoop version I downloaded is 2.4.1, which requires at least JDK 6. Linux is the only supported production environment; Unix, Windows, or Mac OS can be used as development environments. Installing Ha
/example.txt puts the local file into HDFS. When you put data into HDFS for Hadoop processing, the job will output a new set of HDFS files. To view a file: hadoop fs -cat /user/chuck/pg20417.txt. To retrieve it: hadoop fs -get /user/chuck/pg20417.txt . reads the file into the current Linux directory, where the dot represents the curre
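A sketch of that round trip; the HDFS paths reuse the /user/chuck example from the text, and the local file name is a placeholder:

# copy a local file into HDFS (local name is a placeholder)
hadoop fs -put example.txt /user/chuck/example.txt
# print an HDFS file to the terminal
hadoop fs -cat /user/chuck/pg20417.txt
# copy an HDFS file back into the current local directory ("." is the destination)
hadoop fs -get /user/chuck/pg20417.txt .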
Assume that the cluster is already configured. On the development client (Linux CentOS 6.5): A. The client CentOS has an access user with the same name as on the cluster: huser. B. vim /etc/hosts to add the NameNode entry and the local machine's IP. ------------------------- 1. Install the same versions of the JDK and Hadoop as the Hadoop cluster. 2. Eclipse: compile and install the same version o
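A sketch of the /etc/hosts additions that step B describes; the host names and addresses below are placeholders for the actual NameNode and client IPs:

# /etc/hosts on the development client (addresses and names are placeholders)
192.168.1.10   namenode      # the cluster NameNode
192.168.1.20   dev-client    # this machine's own IP and host name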
/hadoop-2.2.0
To compile the source code, see Steps 3, 4 and 5 below.
---------------- Compiling the source files -----------------------
3. Download protobuf 2.5.0: https://code.google.com/p/protobuf/downloads/list, and download the latest Maven: http://maven.apache.org/download.cgi
Compile protobuf 2.5.0:
tar -xvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/opt/protoc/
make && make install
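After the install, a quick check that the right protoc is picked up; the PATH export assumes the /opt/protoc prefix passed to ./configure above:

# make the freshly built protoc visible on PATH (prefix from ./configure above)
export PATH=/opt/protoc/bin:$PATH
# verify the version the Hadoop build expects
protoc --version        # should print: libprotoc 2.5.0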
4. Install required software packages
For
Hadoop version: hadoop-0.20.2
Eclipse version: eclipse-java-helios-sr2-linux-gtk.tar.gz
======================== Installing Eclipse =======================
1. First download Eclipse; not much to say about this.
2. Install Eclipse. (1) Extract eclipse-java-helios-sr2-linux-gtk.tar.gz into a directory; I unpacked it to /home/wangxin
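A sketch of the extraction step, using the /home/wangxin directory mentioned above:

# unpack the Eclipse archive into the chosen directory (it creates an eclipse/ folder)
tar -zxvf eclipse-java-helios-sr2-linux-gtk.tar.gz -C /home/wangxin
# launch Eclipse from the unpacked folder
/home/wangxin/eclipse/eclipse &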
Environment: Ubuntu 10.10, JDK 1.6.0_27, Hadoop 0.20.2. I. JDK installation on Ubuntu: 1. Download jdk-6u27-linux-i586.bin. 2. Copy it to /usr/java and set execute permissions on the file. 3. Run ./jdk-6u27-linux-i586.bin to start the installation. 4. Set the environment variables: vi /etc/profile and add JAVA_HOME at the end of the file.
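A sketch of those steps for the self-extracting .bin installer; the target directory follows the text above, and the exact permission mode is not given there, so u+x is only the usual choice:

# move the installer to /usr/java and make it executable
sudo mkdir -p /usr/java
sudo cp jdk-6u27-linux-i586.bin /usr/java/
cd /usr/java
sudo chmod u+x jdk-6u27-linux-i586.bin
# run the self-extracting installer, then point JAVA_HOME at the result in /etc/profile
sudo ./jdk-6u27-linux-i586.bin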
Environment: Ubuntu 10.10, JDK 1.6.0_27, Hadoop 0.20.2
1. Instal
1. General Linux operations include ls, mkdir, rmdir, and vi.
The general command syntax for Hadoop HDFS: to view Hadoop's files and directories, use hadoop fs -ls /
hadoop fs -lsr / recursively views the file directories of HDFS
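A few more everyday HDFS shell commands in the same pattern (the paths are placeholders):

hadoop fs -mkdir /user/hadoop/input              # create an HDFS directory
hadoop fs -put local.txt /user/hadoop/input/     # upload a local file
hadoop fs -rm /user/hadoop/input/local.txt       # delete a file
hadoop fs -rmr /user/hadoop/input                # delete a directory recursively (old -rmr syntax)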
directory in Hadoop, see http://blog.csdn.net/bychjzh/article/details/7830508. Add the following to configure hadoop-1.2.1/conf/mapred-site.xml; on the command line: gedit /home/hadoop/hadoop-1.2.1/conf/mapred-site.xml. Add the following to configure hadoop-1.2.1/conf/hdfs-site.xml,
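The snippet cuts off before the actual property blocks; below is a minimal pseudo-distributed sketch for Hadoop 1.2.1, written as shell here-documents so it can be pasted at the command line. The property values (localhost:9001, replication 1) are the common single-node defaults, not necessarily the original article's values:

cat > /home/hadoop/hadoop-1.2.1/conf/mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>   <!-- JobTracker address for a single-node setup -->
    <value>localhost:9001</value>
  </property>
</configuration>
EOF

cat > /home/hadoop/hadoop-1.2.1/conf/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>      <!-- one copy is enough on a pseudo-distributed node -->
    <value>1</value>
  </property>
</configuration>
EOF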
Hadoop has been quite popular lately, and I'm interested in it, so I'm going to learn a little. To learn Hadoop, you must first learn to build a Hadoop pseudo-distributed environment on your computer. The first step of the pseudo-distributed installation is to configure the Linux environment. My
public key file is generated in the /home/hadoop/.ssh directory. c) Copy the public key file to the authorization list: cat ./id_rsa.pub >> authorized_keys. d) Modify the file permissions: chmod ./authorized_keys. e) Copy the authorized_keys file to the slave node: scp ./authorized_keys hadoop@hadoop02:~/.ssh/. f) Check whether password-free login is set up successfully: ssh hadoop02 and see if you can log into the hadoop02 server. The second kind: a) in the
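A sketch of the whole first method end to end; the slave host name follows the hadoop02 example above, and the 600 permission (omitted in the text) is the usual requirement for authorized_keys:

# generate a key pair for the hadoop user (empty passphrase for password-free login)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# add the public key to the local authorization list and tighten permissions
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys        # 600 is the typical mode; the original omits it
# push the authorization list to the slave node and test the login
scp ~/.ssh/authorized_keys hadoop@hadoop02:~/.ssh/
ssh hadoop02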
A previous article described how to build Hadoop 0.20.2 in a fully distributed environment; you can now use that environment for development. First log into the Linux system as the hadoop user (created in the previous article), then download the Eclipse tar.gz package to /home/had
, copy the data file into it, export your project to a jar file, and add the following code to your project's main function: conf.set("mapred.jar", "E://freqitemset.jar"); // the "mapred.jar" key cannot be changed. Right-click on your project and select Run As / Run Configurations, click Arguments, and add the following: in/data (the input file path on HDFS), 3 (the item set size K), 1 (the support threshold), out (the output file). Click OK to connect and use your
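For reference, the same job can also be launched from the shell once the jar has been exported; the main class name and argument order below are placeholders inferred from the argument list above, not the article's exact invocation:

# run the exported jar against the cluster (main class and argument order are assumptions)
hadoop jar freqitemset.jar FreqItemset in/data 3 1 out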