The Hadoop development cycle is generally:
1) Prepare the development and deployment environment
2) Write the Mapper and Reducer
3) Unit test
4) Compile and package
5) Submit the job and retrieve the results
Before using Hadoop to process big data, you must first set up the runtime and development environments. The following describes the installation of the basic environment. All of the software is installed on a Linux system, and the deployment described here uses a single machine. The machine information is as follows:
1. JDK Installation
1) Download the latest JDK and decompress jdk-7u17-linux-x64.gz
2) Set Java environment variables
Switch to the root user's home directory, edit the .bashrc file, and add the following statements at the bottom of the file:
export JAVA_HOME=/opt/jdk1.7   (changeable)
export CLASSPATH=$CLASSPATH:$JAVA_HOME/jre/lib:$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
Re-execute the modified file:
# source /root/.bashrc
3) Test whether the installation succeeded
Run java -version in the terminal. If the displayed version matches the installed version, Java is installed and configured correctly.
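The check above can be scripted defensively; a minimal sketch (the variable name java_status is illustrative, not part of any standard tooling):

```shell
# Report which java the shell resolves, or flag a broken PATH/JAVA_HOME.
# Note: java -version prints to stderr, hence the 2>&1 redirection.
if command -v java >/dev/null 2>&1; then
    java_status="found: $(java -version 2>&1 | head -n 1)"
else
    java_status="missing: check JAVA_HOME and PATH in .bashrc"
fi
echo "$java_status"
```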
2. Eclipse Installation
You will later write the Mapper and Reducer functions in Eclipse, so set up the IDE environment now.
1) Download the latest version of Eclipse:
eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz
2) Decompress the tar.gz file
3) Use VNC on Windows to start Eclipse
Go to the decompressed directory and click the Eclipse icon, or execute the following command in a terminal:
./eclipse
Then set the workspace directory.
4) Solve common problems
If Eclipse reports a JVM error at startup even though the installed JDK is later than 1.5, you need to specify the JVM to use when starting Eclipse. For convenience, write a script (starteclipse.sh) to start Eclipse.
Make the script executable: chmod 777 starteclipse.sh
Execute this script and Eclipse will start normally.
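The script body itself is not reproduced above; the sketch below shows one common approach, pinning Eclipse to a specific JVM with the launcher's -vm option (the /opt paths are assumptions -- substitute your own install locations):

```shell
# Generate starteclipse.sh, which launches Eclipse with an explicit JVM
# so it does not accidentally pick up an older system Java.
# The /opt/eclipse and /opt/jdk1.7 paths are assumptions; adjust as needed.
cat > starteclipse.sh <<'EOF'
#!/bin/sh
cd /opt/eclipse
./eclipse -vm /opt/jdk1.7/bin/java
EOF
chmod 777 starteclipse.sh
```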
3. Hadoop Installation
Hadoop has three installation modes: standalone (local) mode, pseudo-distributed mode, and fully distributed mode.
3.1 Install
1) Download the latest version of Hadoop and decompress it:
% tar xzf hadoop-x.y.z.tar.gz
2) Hadoop path configuration
Switch to the root user's home directory, edit the .bashrc file, and add the following statements at the bottom of the file:
export HADOOP_INSTALL=/home/Tom/hadoop-x.y.z
export PATH=$PATH:$HADOOP_INSTALL/bin
Re-execute the modified file (if this step is skipped, entering a hadoop command sometimes produces the error "hadoop: command not found"):
# source /root/.bashrc
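To confirm the variables took effect, a quick check such as the following can be run (it only inspects PATH, so it works even before Hadoop itself is fully set up; the install path matches the example above but is an assumption):

```shell
# Reproduce the two .bashrc lines, then verify that PATH picked them up.
export HADOOP_INSTALL=/home/Tom/hadoop-x.y.z
export PATH=$PATH:$HADOOP_INSTALL/bin
case ":$PATH:" in
    *":$HADOOP_INSTALL/bin:"*) echo "hadoop bin directory is on PATH" ;;
    *) echo "PATH not updated: re-run source /root/.bashrc" ;;
esac
```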
3.2 Configuration
core-site.xml configures the properties of the common components, hdfs-site.xml configures the HDFS properties, and mapred-site.xml configures the MapReduce implementation. All of these configuration files are placed in the conf subdirectory.
Standalone (local) mode
Hadoop runs on a single machine in a single thread; no Hadoop processes need to be started, and all programs execute in a single JVM. This mode is suitable for testing and debugging MapReduce programs during development.
Pseudo-distributed mode
All Hadoop processes (namenode, datanode, tasktracker, jobtracker, and secondarynamenode) are started on a single machine to better simulate a Hadoop cluster.
Fully distributed mode
Multiple machines are required to build a Hadoop distributed cluster and perform integration testing in an environment close to production.
To run Hadoop in a specific mode, you need to set the properties correctly and start the corresponding Hadoop processes; each mode has its own minimum set of required configuration properties.
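As an illustration, a minimal pseudo-distributed setup is often sketched along the following lines (the property names and the localhost:8021 jobtracker address are common Hadoop 1.x-era defaults, not values given in this section; check them against your Hadoop version):

```xml
<!-- core-site.xml: point the default filesystem at a local HDFS instance -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>

<!-- hdfs-site.xml: a single datanode can hold only one replica of each block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- mapred-site.xml: run the jobtracker locally -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
```

In standalone mode none of these properties need to be set, since everything runs in one JVM against the local filesystem.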