Using Linux and Hadoop for distributed computing


Hadoop was formally introduced by the Apache Software Foundation in the fall of 2005 as part of Nutch, a sub-project of Lucene. It was inspired by MapReduce and the Google File System, which were first developed at Google. In March 2006, MapReduce and the Nutch Distributed File System (NDFS) were moved into a new project called Hadoop.

Hadoop is best known as a tool for indexing and analyzing search keywords on the Internet, but it can also solve many problems that demand great scalability. For example, what would happen if you had to grep a 10 TB file? On a traditional system this would take a very long time, but Hadoop is designed with exactly these problems in mind and can improve efficiency dramatically.

Background

Hadoop is a software framework that enables distributed processing of large amounts of data, and it does so in a reliable, efficient, and scalable way. It is reliable because it assumes that compute and storage elements will fail, so it keeps multiple copies of the working data and can redistribute processing around failed nodes. It is efficient because it parallelizes work, speeding up processing considerably. It is also scalable, handling data at the petabyte level. In addition, Hadoop runs on commodity servers, so its cost is low and it is accessible to anyone.

As you might expect, Hadoop is well suited to running on a Linux production platform, since its framework is written in the Java™ language. Applications for Hadoop can also be written in other languages, such as C++.

Hadoop Architecture

Hadoop is composed of many elements. At the bottom is the Hadoop Distributed File System (HDFS), which stores files across all the storage nodes in a Hadoop cluster. Above HDFS (for the purposes of this article) sits the MapReduce engine, which consists of JobTrackers and TaskTrackers.
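
To make the division of labor concrete, here is a minimal sketch of a grep-style job written against Hadoop's Java MapReduce API, in the spirit of the 10 TB grep example above. The class names, the configuration key grep.pattern, and the command-line arguments are illustrative assumptions rather than anything defined by Hadoop itself; only the framework classes (Mapper, Job, and so on) come from Hadoop.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DistributedGrep {

    // The mapper runs in parallel, one task per input split;
    // it emits only the lines that contain the search pattern.
    public static class GrepMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {
        private String pattern;

        @Override
        protected void setup(Context context) {
            // "grep.pattern" is an assumed, job-specific configuration key.
            pattern = context.getConfiguration().get("grep.pattern", "error");
        }

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            if (line.toString().contains(pattern)) {
                context.write(line, NullWritable.get());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("grep.pattern", args[2]);          // pattern to search for

        Job job = Job.getInstance(conf, "distributed grep");
        job.setJarByClass(DistributedGrep.class);
        job.setMapperClass(GrepMapper.class);
        job.setNumReduceTasks(0);                   // map-only job: no reducer needed
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. a huge input file in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A job like this is packaged into a JAR and submitted with the hadoop jar command; the framework creates one map task per input split, and the JobTracker schedules those tasks on the TaskTrackers, preferably on nodes that already hold the corresponding HDFS blocks.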

HDFS

To external clients, HDFS looks like a traditional hierarchical file system: you can create, delete, move, or rename files, and so on. But the HDFS architecture is built on a specific set of nodes (see Figure 1), which reflects its own characteristics. These nodes include a single NameNode, which provides the metadata service for HDFS, and DataNodes, which provide storage blocks to HDFS. Because there is only one NameNode, it is a single point of failure, a known weakness of HDFS.

Figure 1. A simplified view of the Hadoop cluster

Files stored in HDFS are divided into blocks, and the blocks are replicated to multiple computers (DataNodes). This is very different from a traditional RAID architecture. The block size (typically 64 MB) and the number of replicas are determined by the client when the file is created. The NameNode coordinates all file operations, and all communication within HDFS is based on the standard TCP/IP protocol.
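
As a sketch of what this looks like from the client side, the short Java program below uses the HDFS FileSystem API to create a file with an explicit replication factor and block size, then renames and deletes it. The NameNode address, the paths, and the chosen values are illustrative assumptions; the API calls themselves (FileSystem.get, create, rename, delete) are standard Hadoop ones.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; in a real cluster this comes from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        Path file = new Path("/data/example.log");

        // The client chooses the replication factor and block size when the file is created.
        short replication = 3;             // three copies on different DataNodes
        long blockSize = 64 * 1024 * 1024; // 64 MB, the classic default block size
        int bufferSize = 4096;

        FSDataOutputStream out = fs.create(file, true, bufferSize, replication, blockSize);
        out.writeBytes("hello, HDFS\n");
        out.close();

        // Ordinary operations behave like a hierarchical file system.
        fs.rename(file, new Path("/data/example-renamed.log"));
        fs.delete(new Path("/data/example-renamed.log"), false);

        fs.close();
    }
}

The create() overload shown here is how a client sets the replication factor and block size for an individual file; when those arguments are omitted, HDFS falls back to the cluster-wide defaults configured on the NameNode.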
