Why is compiling the Eclipse plug-in for Hadoop 1.x so cumbersome?
In my personal understanding, Ant was originally designed as a tool for local builds, and the web of dependencies involved in compiling the Hadoop plug-in goes beyond that goal. As a result, we have to adjust the configuration by hand when compiling with Ant: setting environment variables, setting the classpath, adding dependencies, setting the main class, and tuning the javac and jar configuration.
1. Introduction to Hadoop versions
In versions earlier than 0.20.2 (not including that version), the configuration lives in default.xml.
Versions after 0.20.x no longer ship an Eclipse plug-in jar; because Eclipse releases differ, you have to compile the plug-in from source to get one that matches your Eclipse.
In versions 0.20.2 through 0.22.x, the configuration is split across conf/core-site.xml, conf/hdfs-site.xml, and conf/mapred-site.xml.
In versi
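As a rough sketch of how those split files are consumed (file names taken from the list above; the paths are assumptions about where the conf directory sits), Hadoop's Configuration class loads each one as a resource:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfLayoutDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(); // also picks up core-site.xml from the classpath automatically
    // For 0.20.2 -- 0.22.x, settings are split across these three files:
    conf.addResource(new Path("conf/core-site.xml"));
    conf.addResource(new Path("conf/hdfs-site.xml"));
    conf.addResource(new Path("conf/mapred-site.xml"));
    // Falls back to the standalone default if the files set nothing:
    System.out.println(conf.get("fs.default.name", "file:///"));
  }
}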
The previous several posts covered Spark RDD fundamentals and used textFile to operate on local files. In real applications you rarely manipulate plain local files; much more often you work with Kafka streams and with files on Hadoop.
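For instance (a minimal sketch using Spark's Java API; the file paths and the hdfs://localhost:9000 address are placeholders, not values from the original post), moving textFile from a local file to one on Hadoop is just a change of URI:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HdfsTextFileDemo {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("hdfs-textfile").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);
    // Local file, as in the earlier posts:
    JavaRDD<String> local = sc.textFile("file:///tmp/data.txt");
    // Same call against HDFS, once the cluster built below is running:
    JavaRDD<String> onHdfs = sc.textFile("hdfs://localhost:9000/user/hduser/data.txt");
    System.out.println(local.count() + " local lines, " + onHdfs.count() + " HDFS lines");
    sc.stop();
  }
}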
Let's build a Hadoop environment on the local machine. 1. Installing and configuring Hadoop
A detailed description of how Hadoop operates and its underlying principles
Introduction
HDFS stands for Hadoop Distributed File System. It is based on a paper published by Google describing GFS, the Google File System.
HDFS has many features:
① Multiple c
I previously described building a hadoop2.7.2 cluster from CentOS 6.4 virtual machines under Ubuntu. To do MapReduce development you want Eclipse, and that requires the matching Hadoop plug-in, Hadoop-eclipse-plugin-2.7.2.jar. In hadoop1.x the plug-in came bundled with the official Hadoop installation package; now with
Basic hardware and software configuration:
x86 desktop running 64-bit Windows 7, with VirtualBox (the desktop needs at least 4 GB of RAM in order to run 3 virtual machines)
CentOS 6.4 operating system
hadoop-1.1.2.tar.gz
jdk-6u24-linux-i586.bin
1. Configuration as the root user
A) Modify the hostname: vi /etc/sysconfig/network
(master, slave1, slave2)
B) Map the IP addresses: vi /etc/hosts
192.168.8.100 master
192.168.8.101 slave1
01_note_hadoop: introduction, origins and ecosystem; Hadoop clusters; the CDH family.
Unpack the tar package to install the JDK and configure the environment variables:
tar -xzvf jdkxxx.tar.gz to /usr/app/ (a custom app directory to hold installed applications)
java -version shows the current system Java version and environment
rpm -qa | grep java lists the installed Java packages and dependencies
yum -y remove xxxx (remove each package grep found)
Configure the environment variables in /etc/profile, an
1. Create a user
adduser hduser
Grant hduser sudo rights: sudo vim /etc/sudoers, and add hduser ALL=(ALL:ALL) ALL to the file.
2. Install SSH and set up passwordless login
1) sudo apt-get install openssh-server
2) Start the service: sudo /etc/init.d/ssh start
3) Check that the service started correctly: ps -e | grep ssh
4) Set up passwordless login by generating a private/public key pair:
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
5) Passwordless login: ssh localhost
6) Exit
3. Config
The previous two blog posts used this jar when testing Hadoop code, so it is now necessary to analyze its source code.
Before analyzing the source code, it is worth writing a WordCount first, as follows:
package mytest;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.map
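The listing above is cut off. For reference, a minimal, self-contained version of the canonical WordCount against the org.apache.hadoop.mapreduce API looks like the sketch below; the package name just mirrors the snippet, so treat this as an illustration rather than the original post's exact code.

package mytest;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: split each input line into tokens and emit (word, 1).
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sum the emitted counts for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}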
Building pseudo-distributed mode in VirtualBox: Hadoop download and configuration.
Since my machine is a bit underpowered and cannot run an X Window environment, everything here is done from the shell; those who insist on point-and-click operation can turn back now ~
1. Hadoop download and extraction: http://mirror.bit.edu.cn/apache/hadoop/common/stable2/
In ~/.ssh/ two files appear: id_rsa and id_rsa.pub, which work together like a lock and key.
Append id_rsa.pub to the authorized keys (there is no authorized_keys file at this point):
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(3) Verify that SSH is installed successfully: enter ssh localhost. If a local login succeeds, the installation worked.
3. Disable the firewall: $ sudo ufw disable
Note: this step is very important; if you do not disable it, you will run into problems such as not finding D
As you know, the NameNode is a single point of failure in a Hadoop system, which has long been a weakness for highly available Hadoop. This article discusses several solutions that exist for this problem. 1. Secondary NameNode. Principle: the secondary NameNode periodically reads the edit log from the NameNode and merges it with the image it stores locally to form a new metadata image. Advantage: the earlier version of
Part 1: core-site.xml
• core-site.xml is Hadoop's core property file; its parameters configure core Hadoop functionality, independent of HDFS and MapReduce.
Parameter list:
• fs.default.name
• Default value: file:///
• Description: sets the hostname and port of the Hadoop NameNode. The default value corresponds to standalone mode. If it is a pseudo-distributed file system, i
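As a rough illustration (the property name is the hadoop 1.x key described above; the hdfs://master:9000 address is a placeholder, not a value from the original post), the same parameter can also be overridden programmatically before opening the file system:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class CoreSiteDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // loads core-site.xml from the classpath if present
    // Standalone-mode default is file:/// -- override it for a (pseudo-)distributed setup:
    conf.set("fs.default.name", "hdfs://master:9000"); // placeholder host and port
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Working against: " + fs.getUri());
  }
}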
Reprinted from http://blessht.iteye.com/blog/2095675. Hadoop has always been a technology I wanted to learn, and since my project team was recently building an e-mall, I started studying it. Although we ultimately concluded that Hadoop was not suitable for our project, I will keep studying it; after all, extra skills are never a burden. This basic Hadoop tutorial is the first
Hadoop Study Notes 0004 -- Installing the Hadoop plug-in for Eclipse
1. Download hadoop-1.2.1.tar.gz and unzip it to hadoop-1.2.1 under Win7;
2. If hadoop-1.2.1 does not include the Hadoop-eclipse-plugin-1.2.1.jar package, download d
Hadoop can run in stand-alone mode or in pseudo-distributed mode; both are designed to let users learn and debug Hadoop easily. To exploit the benefits of Hadoop's distributed, parallel processing, you deploy it in fully distributed mode. Stand-alone mode refers to the way that
Compiling the hadoop 2.x Hadoop-eclipse-plugin on Windows and using it with Eclipse
I. Introduction
After Hadoop2.x the Eclipse plug-in is no longer shipped, so we cannot debug the code in Eclipse: the MapReduce java code has to be packaged into a jar and run on Linux, which makes debugging inconvenient. Therefore, we compile an Eclipse plug-in ourselves so that we can debug locally. Afte
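Independent of the plug-in, one way to step through a job inside an IDE is to force Hadoop's local job runner. This is a sketch under stated assumptions, not the post's method: the configuration keys are standard Hadoop 2.x keys, the input/output directories are placeholders, and the mapper/reducer classes reuse the WordCount example shown earlier on this page.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalDebugDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("mapreduce.framework.name", "local"); // in-process local runner, so breakpoints work
    conf.set("fs.defaultFS", "file:///");          // local file system instead of HDFS

    Job job = Job.getInstance(conf, "local wordcount debug");
    job.setJarByClass(LocalDebugDriver.class);
    // Mapper/reducer from the WordCount sketch earlier on this page:
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("input"));    // placeholder local directory
    FileOutputFormat.setOutputPath(job, new Path("output")); // placeholder; must not exist yet
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}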