A few things you need to know about Hadoop

Source: Internet
Author: User

In today's technology world, big data is a popular IT buzzword. To mitigate the complexity of processing large amounts of data, Apache developed Hadoop, a reliable, scalable, distributed computing framework. Hadoop is especially well suited to large data processing tasks, because its distributed file system reliably and cheaply replicates data blocks to nodes in the cluster, allowing data to be processed on the machine where it is stored. Anoop Kumar explains ten techniques for handling big data with Hadoop.

On importing/exporting data, Anoop points out that in the Hadoop world data can be imported from many different sources into the Hadoop Distributed File System (HDFS). Once the data is in HDFS, it can be processed with MapReduce or with languages and tools such as Hive and Pig.
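
As a minimal illustration of the import step, the sketch below copies a local file into HDFS using Hadoop's Java FileSystem API. The NameNode address and both paths are placeholders, not values from the article.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsImportExample {
        public static void main(String[] args) throws Exception {
            // Assumes a reachable HDFS NameNode; the address is a placeholder.
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000");

            FileSystem fs = FileSystem.get(conf);

            // Copy a local file into HDFS so MapReduce/Hive/Pig jobs can process it.
            fs.copyFromLocalFile(new Path("/tmp/events.log"),
                                 new Path("/data/raw/events.log"));
            fs.close();
        }
    }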

The Hadoop system provides not only the flexibility to process large amounts of data but also to filter and aggregate it, and the transformed data can be exported to relational databases or other external systems using Sqoop. Pulling data out of databases such as MySQL, SQL Server, or MongoDB is also a powerful feature. The benefit is better control over the data.

The second aspect is data compression in HDFS. Data in Hadoop is stored on HDFS, which supports both compression and decompression. Compression can be achieved with algorithms such as bzip2, gzip, and LZO; different algorithms suit different situations depending on their characteristics, such as compression/decompression speed or whether the compressed files can be split.
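
As a rough sketch of how a codec is applied when writing to HDFS, the example below compresses a file with the gzip codec; the paths are placeholders, and bzip2 or LZO codecs could be swapped in the same way.

    import java.io.InputStream;
    import java.io.OutputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class HdfsCompressExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Instantiate the gzip codec through Hadoop's reflection helper.
            CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

            try (InputStream in = fs.open(new Path("/data/raw/events.log"));
                 OutputStream out = codec.createOutputStream(
                         fs.create(new Path("/data/compressed/events.log.gz")))) {
                // Stream the uncompressed file through the codec into HDFS.
                IOUtils.copyBytes(in, out, 4096);
            }
        }
    }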

On transformation within Hadoop: Hadoop is an ideal environment for extracting and transforming large amounts of data, and it provides a scalable, reliable, distributed processing environment. Using MapReduce, Hive, and Pig, data can be extracted and transformed in many ways.

Once the input data is imported or placed into HDFS, the Hadoop cluster can be used to transform large datasets in parallel. As mentioned earlier, data transformations can be implemented with the available tools. For example, if you want to convert data to a tab-delimited file, MapReduce is one of the best tools for the job. Similarly, Hive and Python can be used to clean up and transform geographic event data.
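
A minimal, map-only sketch of such a conversion is shown below; it assumes comma-separated input lines, and the class name is illustrative.

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Map-only job: rewrite comma-separated records as tab-delimited lines.
    public class CsvToTsvMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {

        private final Text outLine = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Naive split; real CSV parsing would need to handle quoted fields.
            outLine.set(line.toString().replace(',', '\t'));
            context.write(outLine, NullWritable.get());
        }
    }

In the driver, job.setNumReduceTasks(0) would keep this a map-only job so the converted lines are written straight to the output files.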

As for accomplishing common tasks, Anoop notes that day-to-day data processing involves many recurring, high-frequency tasks. Languages and tools such as Hive, Pig, and MapReduce can help accomplish these tasks and make your life easier.

Sometimes a task can be implemented in several ways, and in those cases the developer or architect has to decide which approach is the most appropriate. For example, Hive and Pig provide an abstraction layer between the data flow and the query, along with the MapReduce workflows they generate, and MapReduce functionality can be used to extend the queries. Hive can be used to create and analyze data with HiveQL, a declarative, SQL-like language, and Pig can be used by writing operations in Pig Latin.
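
As an illustration of the HiveQL route, here is a hedged sketch that runs a query through the HiveServer2 JDBC driver; the connection URL, credentials, table, and column names are assumptions, not details from the article.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            // Assumes a running HiveServer2; URL and credentials are placeholders.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement();
                 // SQL-like HiveQL that Hive compiles into distributed jobs.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT event_type, COUNT(*) FROM events GROUP BY event_type")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }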

When it comes to combining large amounts of data in Hadoop, multiple datasets generally need to be processed and joined together to obtain a final result, and there are many ways to join datasets in Hadoop. MapReduce offers joins on both the map side and the reduce side; these joins are nontrivial and can be very expensive operations. Pig and Hive have the same ability to join multiple datasets: Pig provides replicated, merge, and skewed joins, while Hive provides map-side joins and full outer joins for analyzing data. The important point is that, with tools such as MapReduce, Pig, and Hive, data can be joined based on their built-in capabilities and the actual requirements.
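
To make the reduce-side case concrete, below is a compressed sketch that joins two hypothetical datasets (users and orders) on a user ID by tagging each record with its source; field layouts and class names are assumptions. In the driver, org.apache.hadoop.mapreduce.lib.input.MultipleInputs would typically wire each input path to its mapper.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Reduce-side join: both mappers emit the join key; values carry a source tag.
    public class ReduceSideJoin {

        public static class UserMapper
                extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable off, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] f = line.toString().split("\t");   // userId \t name
                ctx.write(new Text(f[0]), new Text("U|" + f[1]));
            }
        }

        public static class OrderMapper
                extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable off, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] f = line.toString().split("\t");   // orderId \t userId \t amount
                ctx.write(new Text(f[1]), new Text("O|" + f[0] + "," + f[2]));
            }
        }

        public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text userId, Iterable<Text> values, Context ctx)
                    throws IOException, InterruptedException {
                String user = null;
                List<String> orders = new ArrayList<>();
                for (Text v : values) {
                    String s = v.toString();
                    if (s.startsWith("U|")) user = s.substring(2);
                    else orders.add(s.substring(2));
                }
                // Emit one joined record per order belonging to this user.
                if (user != null) {
                    for (String o : orders) {
                        ctx.write(userId, new Text(user + "\t" + o));
                    }
                }
            }
        }
    }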

On analyzing large amounts of data in Hadoop, Anoop points out that in the big data/Hadoop world some problems may not be complicated and their solutions are straightforward, but the challenge is the volume of data, which calls for different solutions. Typical analysis tasks include counting the number of distinct IDs in log files, transforming stored data within a specific date range, and ranking users. All of these tasks can be addressed with the tools and techniques available in Hadoop, such as MapReduce, Hive, Pig, Giraph, and Mahout, and these tools can be flexibly extended with custom routines.
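
For instance, counting distinct IDs can be sketched as a MapReduce job in which the mapper emits each ID as a key and the reducer counts every unique key exactly once via a counter; the record format, class, and counter names here are illustrative assumptions.

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class DistinctIdCount {

        public enum Stats { DISTINCT_IDS }

        public static class IdMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {
            @Override
            protected void map(LongWritable off, Text line, Context ctx)
                    throws IOException, InterruptedException {
                // Assume the ID is the first tab-separated field of each log line.
                String id = line.toString().split("\t")[0];
                ctx.write(new Text(id), NullWritable.get());
            }
        }

        public static class DistinctReducer
                extends Reducer<Text, NullWritable, Text, NullWritable> {
            @Override
            protected void reduce(Text id, Iterable<NullWritable> vals, Context ctx)
                    throws IOException, InterruptedException {
                // Each reduce call sees one unique ID, so count it exactly once.
                ctx.getCounter(Stats.DISTINCT_IDS).increment(1);
                ctx.write(id, NullWritable.get());
            }
        }
    }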

For example, graph and machine learning problems can be solved with the Giraph framework rather than with MapReduce jobs, which avoids writing complex algorithms by hand. The Giraph framework is more useful than MapReduce for these problems because some of them need to be solved iteratively.
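
As a hedged sketch of what an iterative Giraph computation looks like, here is a simplified single-source shortest-paths vertex program, loosely following Giraph's well-known example; the source vertex ID and value types are assumptions.

    import java.io.IOException;

    import org.apache.giraph.edge.Edge;
    import org.apache.giraph.graph.BasicComputation;
    import org.apache.giraph.graph.Vertex;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.LongWritable;

    // Iterative shortest-paths: each superstep, vertices exchange candidate distances.
    public class ShortestPathsComputation extends BasicComputation<
            LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

        private static final long SOURCE_ID = 1L;  // assumed source vertex id

        @Override
        public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
                            Iterable<DoubleWritable> messages) throws IOException {
            if (getSuperstep() == 0) {
                vertex.setValue(new DoubleWritable(Double.MAX_VALUE));
            }
            // Take the shortest distance proposed by any neighbor this superstep.
            double minDist = (vertex.getId().get() == SOURCE_ID) ? 0d : Double.MAX_VALUE;
            for (DoubleWritable message : messages) {
                minDist = Math.min(minDist, message.get());
            }
            // If the distance improved, send neighbors their new candidates.
            if (minDist < vertex.getValue().get()) {
                vertex.setValue(new DoubleWritable(minDist));
                for (Edge<LongWritable, FloatWritable> edge : vertex.getEdges()) {
                    sendMessage(edge.getTargetVertexId(),
                                new DoubleWritable(minDist + edge.getValue().get()));
                }
            }
            vertex.voteToHalt();
        }
    }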

Debugging is always an important part of any development process, and the need for debugging in a Hadoop environment is as great as the need for Hadoop itself. Malformed records and unexpected input are very common, and at large scale they can bring an entire job to a halt. This is an unfortunate downside of dealing with large amounts of unstructured data.

Even though each task runs in isolation on a different slice of the input, it is necessary to understand the status of every task when tracking down various events. Debugging Hadoop jobs is supported by a variety of available tools and techniques. For example, to avoid a job failure there are ways to skip bad records, and MapReduce counters can be used to keep track of them.
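
A minimal sketch of that pattern, counting bad records while skipping them in the mapper, is shown below; the "id<TAB>value" record format and the counter name are assumptions for illustration.

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class RobustParseMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        public enum Quality { BAD_RECORDS }

        @Override
        protected void map(LongWritable off, Text line, Context ctx)
                throws IOException, InterruptedException {
            try {
                // Assume "id<TAB>value" records; anything else is treated as bad.
                String[] f = line.toString().split("\t");
                ctx.write(new Text(f[0]), new LongWritable(Long.parseLong(f[1])));
            } catch (RuntimeException e) {
                // Skip the malformed record instead of failing the task,
                // but keep count so data quality issues stay visible.
                ctx.getCounter(Quality.BAD_RECORDS).increment(1);
            }
        }
    }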

On controlling the Hadoop system with ease: product development is an important activity, but system maintenance is equally important and helps determine the future of the product. In Hadoop, setting up the environment, maintaining it, monitoring it, and handling and tuning MapReduce jobs are all essential for getting the most out of the system. Hadoop provides a lot of flexibility for controlling the entire system, and it can be configured in three different modes: standalone, pseudo-distributed, and fully distributed.

With the help of the Ganglia framework, the entire system can be monitored and node health tracked. In addition, configuration parameters provide job-level control over MapReduce. The Hadoop system makes it easy to exercise this level of control over the whole system.
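
As a hedged sketch of the kind of per-job parameter control available through the Configuration and Job APIs, the values below are illustrative only, not recommendations from the article.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class TunedJobSetup {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Illustrative tuning knobs; appropriate values depend on the cluster.
            conf.set("mapreduce.map.memory.mb", "2048");
            conf.set("mapreduce.map.output.compress", "true");

            Job job = Job.getInstance(conf, "tuned-example");
            job.setNumReduceTasks(8);       // control reduce-side parallelism
            // ... input/output paths, mapper/reducer classes, etc. would follow
        }
    }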

Scalable persistence. There are many options for handling massive structured and unstructured data, but the scalability of storing huge volumes of data remains one of the major problems in the data world. The Hadoop ecosystem addresses this with Accumulo. Accumulo is inspired by Google's BigTable design and is built on top of Hadoop, ZooKeeper, and Thrift; it provides scalable, distributed, cell-level persistent data storage on Hadoop. Accumulo adds some improvements over the BigTable design, with cell-based access control and a server-side programming mechanism that can modify key/value pairs at different points in the data management process.
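
To illustrate the cell-level model, here is a hedged sketch of a write using the Accumulo 1.x client API, attaching visibility labels to individual cells; the instance name, ZooKeeper address, credentials, table, and labels are all placeholders.

    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.security.ColumnVisibility;

    public class AccumuloWriteExample {
        public static void main(String[] args) throws Exception {
            // Placeholder instance name, ZooKeeper quorum, and credentials.
            ZooKeeperInstance instance =
                    new ZooKeeperInstance("accumulo", "localhost:2181");
            Connector conn = instance.getConnector("root", new PasswordToken("secret"));

            BatchWriter writer = conn.createBatchWriter("events", new BatchWriterConfig());

            // One row whose cells carry visibility labels controlling who can read them.
            Mutation m = new Mutation("event-0001");
            m.put("meta", "type", new ColumnVisibility("public"), "click");
            m.put("meta", "user", new ColumnVisibility("internal"), "user-42");
            writer.addMutation(m);
            writer.close();
        }
    }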

Data reads and writes in Hadoop happen on HDFS. HDFS, Hadoop's distributed file system, is fault-tolerant and optimized for streaming reads of large files, favoring I/O throughput over low latency. There are many ways to read and write files efficiently on HDFS, such as the FileSystem API, MapReduce, and advanced serialization libraries.
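
A minimal sketch of a streaming read through the FileSystem API is shown below; the path is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsStreamReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Stream a (possibly very large) HDFS file to stdout without
            // loading it into memory; HDFS is built for this access pattern.
            try (FSDataInputStream in = fs.open(new Path("/data/raw/events.log"))) {
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }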
