Hadoop

Getting the file name of the input file inside a Hadoop mapper

Well, I admit it's cool to use Hadoop to handle big data, but sometimes it gets frustrating in real projects. We often implement a join as a map-reduce job, so the whole job's input may consist of two or more files; in other words, the mappers have to process more than one input file and tell the records apart. One way to handle multiple inputs is multiple mappers, each processing its corresponding input file, as sketched below. Https://gi ...
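A minimal sketch of both approaches against the org.apache.hadoop.mapreduce API (the class and path names are placeholders). Inside an ordinary FileInputFormat job, a mapper can recover the file name from its input split:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    // Tags every output record with the name of the file it came from.
    public class FileNameTaggingMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        private String fileName;

        @Override
        protected void setup(Context context) {
            // Works for plain FileInputFormat jobs; note that with
            // MultipleInputs the split is wrapped and this cast can fail.
            FileSplit split = (FileSplit) context.getInputSplit();
            fileName = split.getPath().getName();
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(fileName), value);
        }
    }

For the multiple-mapper approach, MultipleInputs binds one mapper class to each input path at job setup time, so each mapper already knows which file it is reading:

    MultipleInputs.addInputPath(job, new Path("/input/orders"),
            TextInputFormat.class, OrderMapper.class);
    MultipleInputs.addInputPath(job, new Path("/input/users"),
            TextInputFormat.class, UserMapper.class);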

Research on scheduler-based optimization of Hadoop performance

Research on scheduler-based optimization of Hadoop performance. Liu, He Chen, Tang Hong. To improve the scheduling performance of the Hadoop scheduler and shorten the overall task response time of a Hadoop cluster, a dynamic scheduling improvement algorithm based on CPU occupancy rate is proposed. The paper first compares traditional Hadoop performance optimization methods and points out that their key problem is a lack of dynamism and flexibility. On that basis it analyzes Hadoop's default task scheduling model and proposes using the CPU occupancy rate as the load index on which task distribution is based.
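Only the abstract is reproduced above; as a loose illustration of the idea (not the authors' algorithm), a scheduler can rank nodes by a load score dominated by sampled CPU occupancy and hand the next task to the least-loaded node. All weights and fields below are invented for the sketch:

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical load index built around CPU occupancy.
    class NodeLoad {
        final String host;
        final double cpuOccupancy; // 0.0 .. 1.0, sampled on the node
        final int pendingTasks;

        NodeLoad(String host, double cpuOccupancy, int pendingTasks) {
            this.host = host;
            this.cpuOccupancy = cpuOccupancy;
            this.pendingTasks = pendingTasks;
        }

        double score() {
            // CPU occupancy dominates; pending tasks break ties.
            return 0.8 * cpuOccupancy + 0.2 * Math.min(1.0, pendingTasks / 10.0);
        }

        // Candidate node for the next task under this load index.
        static NodeLoad leastLoaded(List<NodeLoad> nodes) {
            return nodes.stream()
                    .min(Comparator.comparingDouble(NodeLoad::score))
                    .orElseThrow(() -> new IllegalStateException("no nodes"));
        }
    }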

Water quality classification of Luoma Lake by a parallel BP neural network on Hadoop

Water quality classification of Luoma Lake by a parallel BP neural network under Hadoop. Shungua, Shao Xiaogen, Bao Xu, Delan Wang, Hai. The study exploits cloud computing's data migration mechanism and MapReduce's parallel processing of massive data to attack the bottleneck a BP neural network hits when processing large sample data: long network training times. The paper constructs a network model of the multiple pollution factors affecting the water quality of Luoma Lake and uses a parallel BP network algorithm under Hadoop to classify the water quality; the results of the mining analysis provide decision support for water quality optimization and ecological restoration. A sketch of the parallelization pattern follows.
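A common pattern for this kind of parallel training (a sketch under assumptions, not necessarily the paper's design) is to let each mapper compute partial weight gradients over its input split and have a reducer average them per weight index before the next training iteration:

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Reducer;

    // Averages the partial gradients emitted by the mappers, keyed by
    // weight index, producing one update per network weight.
    public class GradientAverageReducer
            extends Reducer<IntWritable, DoubleWritable, IntWritable, DoubleWritable> {
        @Override
        protected void reduce(IntWritable weightIndex,
                Iterable<DoubleWritable> partials, Context context)
                throws IOException, InterruptedException {
            double sum = 0;
            long count = 0;
            for (DoubleWritable g : partials) {
                sum += g.get();
                count++;
            }
            context.write(weightIndex, new DoubleWritable(sum / count));
        }
    }

The driver would run one such job per training epoch, feeding the averaged gradients back into the weight vector.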

Running Hadoop on top of OpenStack with Savanna

Apache Hadoop is now widely adopted by organizations as the industry standard for MapReduce implementations, and the Savanna project is designed to let users run and manage Hadoop on OpenStack. Amazon has been providing Hadoop as a service through EMR (Elastic MapReduce) for years. Savanna needs information from the user to build a cluster, such as the Hadoop version, the cluster topology, node hardware details, and some other information. ...

Calculating the value of PI with Hadoop

First, the method and principle of calculating PI. A quick Baidu search turns up a great many methods of calculating PI, but the code comments in the Hadoop examples say a quasi-Monte Carlo algorithm is used to estimate the value of PI ...
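The bundled example can be run with something like hadoop jar hadoop-mapreduce-examples.jar pi <maps> <samples> (the jar name varies by Hadoop version). The idea behind the estimate is plain Monte Carlo sampling of the unit square, as in this standalone sketch; the Hadoop example refines it with a quasi-random Halton sequence instead of a pseudo-random generator:

    import java.util.Random;

    // Estimate PI by sampling points in the unit square: the fraction
    // landing inside the quarter circle of radius 1 approaches PI / 4.
    public class PiEstimate {
        public static void main(String[] args) {
            final long samples = 10_000_000L;
            long inside = 0;
            Random rnd = new Random();
            for (long i = 0; i < samples; i++) {
                double x = rnd.nextDouble();
                double y = rnd.nextDouble();
                if (x * x + y * y <= 1.0) {
                    inside++;
                }
            }
            System.out.println("PI ~= " + 4.0 * inside / samples);
        }
    }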

Microsoft publishes a second preview of the Azure-based Hadoop big data service

Alongside the most recently released SQL Server 2012, Microsoft has updated its Hadoop service based on Windows Azure. While releasing the latest version of SQL Server last week, Microsoft also announced the second preview of its Hadoop big data service on Windows Azure. Many of the new features and new services in SQL Server 2012 are based on Microsoft customers ...

Parallel clone code detection on a Hadoop cluster

Parallel clone code detection on a Hadoop cluster. Yiuchong. Cloned code makes project maintenance difficult, weakens a project's robustness, and lets bugs contained in the cloned code damage the entire project. Current clone detection techniques are either limited to detecting only a few kinds of clones or require very long detection times, and when a large amount of source code must be analyzed, the main memory of a single machine may not hold all the information. This paper studies the feasibility of running clone code detection in parallel, using a detection technique based on program dependence graphs, ...

"Big Cloud" Hadoop platform and its application

"Big Cloud" Hadoop platform and application Wang Baozhu China Mobile Research Institute of Cloud Computing Systems Wang Baozhu teacher introduced the application of Hadoop in the cloud computing platform of China Mobile. It is reported that China Mobile "big cloud" cloud computing platform mainly contains two parts-paas layer, IaaS layer. And Hadoop is primarily deployed at the PAAs level. "Big Cloud" Hadoop platform and its application

Hortonworks announces a Hadoop data platform

Hortonworks, founded in July 2011 by Yahoo! and Benchmark Capital, has announced a technology preview of its Hadoop-based data platform. The company employs many core contributors to the Hadoop projects and provides the corresponding support and training. Shortly after IBM announced a Hadoop-based big data analysis platform, this new but very important player has already started to make its play with the Hortonworks data plat ...

The Hadoop process takes up 800% of the CPU

First, the symptom: the Hadoop process's CPU usage reached 800%. Second, checking the problem: 1. In top, press Z to highlight, H to display individual threads, and < / > to move through the sort columns. 2. Dump the Java thread stacks with jstack, piped through a pager:

    sudo -u admin jstack 97932
    2014-03-20 21:45:45 Full thread dump OpenJDK (Taobao) 64-bit Server VM ...

Discussion on key technology standardization of big data analysis based on the Hadoop platform

Discussion on standardizing the key technologies of big data analysis based on the Hadoop platform. Koh Yangqingping Huang. This paper analyzes the standardization of the key technologies of Hadoop-based big data analysis from four aspects: data, the parallel computing framework, analysis result output, and parallel data analysis algorithms. Standardization proposals for the four aspects, including an architecture model and the related APIs, are put forward. Keywords: big data analysis, computational framework, parallel analysis algorithm, Hadoop ...

Problems that Hadoop cannot solve

I learned to use Hadoop because a project required it. As with every overheated technology, words like "big data" and "massive" are flying all over the internet. Hadoop is a very good distributed programming framework, exquisitely designed and currently without a substitute of comparable weight. I have also worked with an internal framework that wraps and customizes Hadoop to make it fit business requirements better. I recently wanted to write up some of my experience learning and using Hadoop, but seeing the internet so flooded with such articles, I think writing yet another note saying the same things really is not ...

Research on remote sensing digital image processing methods based on Hadoop

Research on remote sensing digital image processing methods based on Hadoop. Zhou (Northeast). Based on a Hadoop cloud computing system, this paper mainly uses the parallel programming framework MapReduce to implement enhancement processing of remote sensing digital images and clustering of the enhanced images, and compares the results with serial processing on a PC. It addresses the problems that remote sensing digital images have low overall brightness and poor visual effect, and that traditional image enhancement methods cannot produce results comfortable for human visual interpretation or suitable for subsequent processing.
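As a purely illustrative sketch of enhancement in a mapper (the record layout and the linear gray-level stretch are assumptions for illustration, not the paper's method), a dark image can be brightened by stretching an assumed input gray range onto the full 0..255 scale:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Records are "pixelId<TAB>gray" with gray in 0..255; a real job
    // would process image tiles rather than single pixels.
    public class GrayStretchMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        private static final int LOW = 30, HIGH = 180; // assumed input range

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t");
            int gray = Integer.parseInt(fields[1]);
            int stretched = (int) Math.round(255.0 * (gray - LOW) / (HIGH - LOW));
            stretched = Math.max(0, Math.min(255, stretched));
            context.write(new Text(fields[0]), new Text(Integer.toString(stretched)));
        }
    }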

Research on the effect of compression on Hadoop performance

Research on the effect of compression on Hadoop performance. Shanglihui, Miuli. Compression is an important I/O tuning method: it reduces the I/O load and thereby improves I/O performance. Today disk I/O speed grows far more slowly than CPU speed under Moore's law, so I/O is often the bottleneck in data processing. How to use compression for I/O tuning in Hadoop has not been fully studied. Through experiments, this paper derives a compression strategy to help Hadoop users determine when and where ...
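As a concrete example of the "where", one common tuning point is compressing the intermediate map output to shrink shuffle I/O; a minimal sketch assuming Hadoop 2.x property names and a Snappy codec available on the cluster:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressedJobSetup {
        public static Job newJob() throws IOException {
            Configuration conf = new Configuration();
            // Compress intermediate map output to cut shuffle I/O.
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                    SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "compressed-job");
            // Compress the final job output as well.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
            return job;
        }
    }

Whether compression pays off depends on the data and the codec: a fast codec like Snappy usually wins on shuffle-heavy jobs, while already-compressed input sees no benefit.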

Cascading: a data processing API for Hadoop MapReduce

The core concepts of the Cascading API are pipes and flows. A pipe is a series of processing steps (parsing, looping, filtering, and so on) that defines the data processing to be performed; a flow is the union of pipes with data sources and data sinks. Cascading is a new data processing API for Hadoop clusters that uses an expressive API to build complex processing workflows, and ...
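A minimal sketch against the Cascading 2.x API (the paths and the filter are placeholders): the pipe holds the processing steps, and connecting it to a source tap and a sink tap yields a runnable flow.

    import cascading.flow.FlowDef;
    import cascading.flow.hadoop.HadoopFlowConnector;
    import cascading.operation.regex.RegexFilter;
    import cascading.pipe.Each;
    import cascading.pipe.Pipe;
    import cascading.scheme.hadoop.TextLine;
    import cascading.tap.Tap;
    import cascading.tap.hadoop.Hfs;
    import cascading.tuple.Fields;

    public class ErrorLineFlow {
        public static void main(String[] args) {
            // Taps are the data source and the data sink.
            Tap source = new Hfs(new TextLine(new Fields("offset", "line")), "logs/in");
            Tap sink = new Hfs(new TextLine(), "logs/errors");

            // The pipe defines the processing: keep lines containing "ERROR".
            Pipe pipe = new Pipe("error-lines");
            pipe = new Each(pipe, new Fields("line"), new RegexFilter("ERROR"));

            // The flow is the union of the pipe with its source and sink.
            FlowDef flowDef = FlowDef.flowDef()
                    .addSource(pipe, source)
                    .addTailSink(pipe, sink);
            new HadoopFlowConnector().connect(flowDef).complete();
        }
    }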

Energy-efficient job submission in Hadoop clusters

Energy-efficient and reliable job submission in Hadoop clusters. Sudha Sadasivam, S. Sangeetha Radhakrishnan. This paper addresses the problem of block allocation in a distributed file system ...

Yahoo! applications based on Hadoop

Recently the "Hadoop China 2010 Cloud Computing Conference", hosted by the Chinese Academy of Sciences, was held in Beijing; this year's was the fourth session. Many companies, including Baidu, Taobao, and China Mobile, showed applications based on Hadoop. At this conference, Yahoo!'s Milind Bhandarkar gave a keynote speech, from which the following is an excerpt.

A mass medical image retrieval system based on Hadoop

A mass medical image retrieval system based on Hadoop. Fan, Xu Sheng. To improve the efficiency of mass medical image retrieval and overcome the defects of single-node medical image retrieval systems, this paper presents a mass medical image retrieval system based on Hadoop. First, the Brushlet transform and the local binary pattern algorithm are used to extract features from medical sample images, and the image feature library is stored in the Hadoop Distributed File System (HDFS). Then map tasks match the features of a sample image against the feature library, and reduce receives the map tasks' ...
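A sketch of what the map side of such a matching job could look like (the record layout, the configuration key, and the use of squared Euclidean distance are assumptions for illustration, not details from the paper):

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Each feature-library record is "imageId<TAB>f1,f2,...". Scoring
    // against the query vector and keying on the distance lets a single
    // reducer read the best matches first.
    public class FeatureMatchMapper
            extends Mapper<LongWritable, Text, DoubleWritable, Text> {
        private double[] query;

        @Override
        protected void setup(Context context) {
            // Query feature vector passed through the job configuration.
            String[] parts = context.getConfiguration()
                    .get("query.features").split(",");
            query = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                query[i] = Double.parseDouble(parts[i]);
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t");
            String[] feats = fields[1].split(",");
            double dist = 0;
            for (int i = 0; i < query.length; i++) {
                double d = query[i] - Double.parseDouble(feats[i]);
                dist += d * d; // squared Euclidean distance
            }
            context.write(new DoubleWritable(dist), new Text(fields[0]));
        }
    }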

Expanding the Hadoop category

October may be a month worth noting in the history of big data, because in this month we can redefine Hadoop. It can be a framework for big data batch processing, and it can also be a high-speed engine for large-scale parallel, interactive analysis of structured and unstructured data. Companies are already trying to prove it. Recently the industry has held special events to promote Hadoop-plus-SQL architectures on top of Hadoop itself, to add advanced analysis functions, to provide graphical displays, and so on. Among them, three ...

Deploy Spark onto Hadoop 2.2.0

This article describes how to deploy Apache Spark onto Hadoop 2.2.0. If your Hadoop is another version, such as CDH4, you can follow the official documentation directly. Two points need attention: (1) the Hadoop must be from the 2.0 series, such as 0.23.x, 2.0.x, 2.x.x, or CDH4, CDH5 ...
