1. Hadoop's origins: the Lucene project
Lucene is open-source software written in Java by Doug Cutting. It implements full-text search functionality similar to Google's, providing the architecture of a complete full-text search engine, including a query engine and an indexing engine.
Early versions of Lucene were published on Doug Cutting's personal website and on SourceForge; at the end of 2001 it became a sub-project of the Apache Software Foundation's Jakarta project.
Lucene is designed to give software developers a simple, easy-to-use toolkit for adding full-text retrieval to a target system, or for building a complete full-text search engine on top of it.
In scenarios with very large data volumes, Lucene faced the same difficulties as Google, which pushed Doug Cutting to study and imitate Google's solutions to these problems.
Doug Cutting first built a scaled-down version of such a system: the Nutch project.
2. From Lucene to Nutch, and from Nutch to Hadoop
Between 2003 and 2004, Google published some of the details behind GFS and MapReduce. Building on those ideas, Doug Cutting spent two years of spare time implementing DFS and MapReduce mechanisms in Nutch, and Nutch's performance soared.
Yahoo then recruited Doug Cutting and took his project in-house.
In the fall of 2005, Hadoop was formally introduced to the Apache Software Foundation as part of Nutch, a Lucene sub-project. In March 2006, MapReduce and the Nutch Distributed File System (NDFS) were split out into a project named Hadoop, after Doug Cutting's son's toy elephant.
Hadoop core: MapReduce (a distributed computing framework) and HDFS (a distributed file system).
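To make the HDFS half concrete, here is a minimal sketch of a client writing and reading a file through Hadoop's Java FileSystem API. The class name, the NameNode address hdfs://master:9000, and the file path are hypothetical placeholders; in a real deployment the address comes from the cluster configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsHello {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; normally read from core-site.xml.
            conf.set("fs.default.name", "hdfs://master:9000");
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/tmp/hello.txt");
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.writeUTF("hello hdfs");  // the client writes; HDFS splits the data into blocks
            }
            try (FSDataInputStream in = fs.open(file)) {
                System.out.println(in.readUTF());
            }
        }
    }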
3. Hadoop architecture
[Figure: Hadoop architecture diagram (http://img.blog.csdn.net/20140109162401000)]
Hadoop's main components: NameNode, Secondary NameNode, DataNode, JobTracker, and TaskTracker.
(1) NameNode
The HDFS master daemon.
Records how files are partitioned into blocks, and on which nodes those blocks are stored.
Centralizes the management of memory and I/O.
A single point of failure: if it goes down, the cluster goes down.
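Because the NameNode owns the file-to-block mapping, a client can ask it where a file's blocks live. A minimal sketch through the public FileSystem API; the class name and the path /tmp/hello.txt are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockMap {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus status = fs.getFileStatus(new Path("/tmp/hello.txt"));
            // The NameNode answers this query from its block map.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation b : blocks) {
                System.out.println("offset=" + b.getOffset()
                        + " length=" + b.getLength()
                        + " hosts=" + String.join(",", b.getHosts()));
            }
        }
    }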
(2) Secondary NameNode
An auxiliary daemon that monitors the state of HDFS.
Each cluster has one.
Communicates with the NameNode to take periodic snapshots of HDFS metadata.
Can serve as a standby NameNode when the NameNode fails.
(3) DataNode
One runs on each slave server.
Responsible for reading and writing HDFS data blocks to and from the local file system.
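Once the NameNode returns the block locations, the file's bytes stream between the client and the DataNodes that hold each block. A minimal read sketch, again with a hypothetical class name and path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class StreamRead {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // open() asks the NameNode for metadata, but the actual
            // bytes are streamed from the DataNodes holding each block.
            try (FSDataInputStream in = fs.open(new Path("/tmp/hello.txt"))) {
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }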
(4) JobTracker
The daemon that handles jobs (user-submitted code).
Decides which files a job will use, splits the job into tasks, and assigns them to nodes.
Monitors tasks and restarts failed tasks (on different nodes).
Each cluster has only one JobTracker, located on the master node.
(5) TaskTracker
Located on the slave nodes, co-located with the DataNode (the principle of keeping code close to the data).
Manages the tasks on its own node (as assigned by the JobTracker).
Each node has only one TaskTracker, but a TaskTracker can launch multiple JVMs to run map or reduce tasks in parallel.
Interacts with the JobTracker.
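To see this division of labor in code, the classic WordCount below is written against the old org.apache.hadoop.mapred API from the JobTracker era: JobClient.runJob submits the job to the JobTracker, which splits the input and hands map and reduce tasks to the TaskTrackers. Input and output paths are taken from the command line.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    output.collect(word, ONE);  // map tasks run on TaskTrackers near the data
                }
            }
        }

        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            conf.setMapperClass(Map.class);
            conf.setReducerClass(Reduce.class);
            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            // Submit to the JobTracker, which schedules tasks onto TaskTrackers.
            JobClient.runJob(conf);
        }
    }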
(6) Master and slave
Master: NameNode, Secondary NameNode, JobTracker.
Slave: TaskTracker, DataNode.
The master is not necessarily a single machine; the master daemons can run on separate nodes.
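In Hadoop 1.x this split surfaces as two master addresses in the configuration: one for the NameNode (the HDFS master) and one for the JobTracker (the MapReduce master). A minimal sketch, assuming a hypothetical host named master for both daemons:

    import org.apache.hadoop.conf.Configuration;

    public class MasterConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Hypothetical master host; the two daemons may also live on separate machines.
            conf.set("fs.default.name", "hdfs://master:9000");  // NameNode (HDFS master)
            conf.set("mapred.job.tracker", "master:9001");      // JobTracker (MapReduce master)
            System.out.println(conf.get("fs.default.name"));
        }
    }

The slaves are then simply the machines that run a DataNode and a TaskTracker and point at these master addresses.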