Probe into rapid recovery measures after a RegionServer crash in an HBase cluster

Ma June Suffiad Pei Wenbin. This paper mainly introduces the interaction between the HBase RegionServer and ZooKeeper, explains the recovery mechanism that runs after a RegionServer crash and, on that basis, puts forward some optimized recovery measures. The optimized measures greatly shorten the fault recovery time and the business interruption time after a RegionServer crash, thereby improving the stability and reliability of the HBase cluster.
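
The crash-detection half of that RegionServer/ZooKeeper interaction can be illustrated with the plain ZooKeeper Java client: a RegionServer-like process registers an ephemeral znode tied to its session, and a master-like process watches it; once the session dies, the znode vanishes and recovery can begin. The sketch below is a generic illustration using the stock org.apache.zookeeper API, not HBase's internal classes; the connection string, znode path, and timeout are illustrative assumptions.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class EphemeralHeartbeatSketch {
        // Illustrative values only; these are not HBase's real znode paths or defaults.
        static final String ZK_QUORUM = "localhost:2181";
        static final int SESSION_TIMEOUT_MS = 30_000;
        static final String RS_ZNODE = "/rs-demo-1";

        public static void main(String[] args) throws Exception {
            // "RegionServer" side: create an ephemeral znode bound to its session.
            ZooKeeper rs = new ZooKeeper(ZK_QUORUM, SESSION_TIMEOUT_MS, event -> { });
            rs.create(RS_ZNODE, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

            // "Master" side: watch that znode; a NodeDeleted event is the signal
            // to start recovery (log splitting, region reassignment, and so on).
            ZooKeeper master = new ZooKeeper(ZK_QUORUM, SESSION_TIMEOUT_MS, event -> { });
            master.exists(RS_ZNODE, event -> {
                if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                    System.out.println("RegionServer gone, start recovery");
                }
            });

            // Closing the first session removes its ephemeral znode and fires the watcher.
            // After a real crash the znode only disappears once the session timeout expires.
            rs.close();
            Thread.sleep(5_000);
            master.close();
        }
    }

Because a real crash is only detected once the session timeout expires, shortening that timeout is one of the levers for reducing recovery latency, at the cost of more false positives under long GC pauses or network jitter.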

Design of an HDFS hybrid encryption protection scheme

Liang Sheng Qin Song Lei. How to solve the security problems of cloud computing effectively is key to the development of the cloud computing industry. Aiming at the data-sharing security problem of the Hadoop cloud computing system, a hybrid encryption protection scheme based on RC4 and RSA is adopted. The scheme is closely combined with the characteristics of cloud storage data sharing in Hadoop, realizes secure sharing of the data, and balances confidentiality with efficiency.
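
The general shape of such a hybrid scheme, a fast symmetric cipher for the bulk data and RSA only for the small session key, can be sketched with the standard javax.crypto API. This is a generic illustration rather than the paper's exact protocol; RC4 (ARCFOUR) appears only because the abstract names it, not as a recommendation.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.SecretKeySpec;

    public class HybridEncryptionSketch {
        public static void main(String[] args) throws Exception {
            byte[] data = "block contents to be shared via HDFS".getBytes(StandardCharsets.UTF_8);

            // 1. Symmetric part: encrypt the bulk data with an RC4 (ARCFOUR) session key.
            KeyGenerator kg = KeyGenerator.getInstance("ARCFOUR");
            kg.init(128);
            SecretKey sessionKey = kg.generateKey();
            Cipher rc4 = Cipher.getInstance("ARCFOUR");
            rc4.init(Cipher.ENCRYPT_MODE, sessionKey);
            byte[] cipherData = rc4.doFinal(data);

            // 2. Asymmetric part: encrypt only the small session key with the
            //    recipient's RSA public key, so it can travel with the ciphertext.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair rsaPair = gen.generateKeyPair();
            Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            rsa.init(Cipher.ENCRYPT_MODE, rsaPair.getPublic());
            byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

            // 3. Recipient side: unwrap the session key with the RSA private key,
            //    then decrypt the data with RC4.
            rsa.init(Cipher.DECRYPT_MODE, rsaPair.getPrivate());
            SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "ARCFOUR");
            rc4.init(Cipher.DECRYPT_MODE, recovered);
            System.out.println(new String(rc4.doFinal(cipherData), StandardCharsets.UTF_8));
        }
    }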

Research and optimization of MapReduce high availability

Huang Weijian Zhou Yi Love. In order to improve the availability of MapReduce, a distributed JobTracker node model is proposed as an optimization scheme. The paper analyzes the job scheduling process of the MapReduce programming model, one of the core technologies of Hadoop, points out the performance bottleneck caused by the single JobTracker, and puts forward a corresponding optimization scheme. In the original MapReduce, the single JobTracke ...

Improvement of the Apriori algorithm under the MapReduce framework

Wang Wang Junhong Yu Jiao Gedommei. Mining massive data with the traditional Apriori algorithm wastes a great deal of storage space and communication resources, leaving the algorithm inefficient; an improved Apriori algorithm under the MapReduce framework is therefore proposed. First, the database is divided into n separate blocks by horizontal partitioning, and the blocks are distributed to the M worker nodes with dynamic load balancing. Each node then scans its own block of data to produce local candidate frequent itemsets, ...
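
The per-block step described here (each worker scanning its own partition to produce local candidate frequent itemsets) can be sketched in plain Java as below; the transactions, the support threshold, and the restriction to 1- and 2-itemsets are illustrative, and the global merge and re-count phase across all blocks is omitted.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // One worker's pass over its local block: frequent 1-itemsets first, then
    // candidate pairs built only from those (the Apriori pruning idea).
    public class LocalAprioriBlock {
        public static void main(String[] args) {
            List<String[]> block = Arrays.asList(
                new String[]{"bread", "milk"},
                new String[]{"bread", "beer", "milk"},
                new String[]{"beer", "milk"},
                new String[]{"bread", "milk"});
            int minSupport = 2;   // illustrative local threshold

            // Pass 1: locally frequent 1-itemsets.
            Map<String, Integer> c1 = new HashMap<>();
            for (String[] t : block)
                for (String item : t) c1.merge(item, 1, Integer::sum);
            List<String> l1 = new ArrayList<>();
            c1.forEach((item, n) -> { if (n >= minSupport) l1.add(item); });

            // Pass 2: candidate pairs restricted to frequent items, counted on the same block.
            Map<String, Integer> c2 = new HashMap<>();
            for (String[] t : block) {
                List<String> items = new ArrayList<>(Arrays.asList(t));
                items.retainAll(l1);
                items.sort(String::compareTo);
                for (int i = 0; i < items.size(); i++)
                    for (int j = i + 1; j < items.size(); j++)
                        c2.merge(items.get(i) + "," + items.get(j), 1, Integer::sum);
            }
            c2.forEach((pair, n) -> { if (n >= minSupport) System.out.println(pair + " -> " + n); });
        }
    }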

Study on load balancing optimization in MapReduce

Hong Min Lau Zhao Liu Yuanyuan Hong. Data analysis and processing is an important task in large-scale distributed data processing applications. Because of its simplicity and flexibility, the MapReduce programming model has become the core model of large-scale distributed data processing systems such as Hadoop. Because the data being processed may not be evenly divided, the MapReduce programming model can suffer from data skew when it handles join operations. The data skew problem severely reduces MapReduce execution ...
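
One common mitigation for this kind of join-time skew, not necessarily the optimization studied in the paper, is to salt hot keys so that a single huge key group is spread over several reduce tasks while the smaller join side is replicated to every salted sub-key. A minimal sketch of the salting helpers, with an assumed fan-out and key format, is given below.

    import java.util.Random;

    // Map-side key salting for skewed joins: records of a hot key from the large
    // side are scattered over FANOUT sub-keys ("key#0" .. "key#FANOUT-1") so no
    // single reduce task receives the whole hot group; matching rows from the
    // small side are replicated to all sub-keys. The salt is removed afterwards.
    public class KeySalting {
        private static final int FANOUT = 8;          // illustrative fan-out
        private static final Random RANDOM = new Random();

        static String saltLargeSide(String key) {
            return key + "#" + RANDOM.nextInt(FANOUT); // one random replica per record
        }

        static String[] saltSmallSide(String key) {
            String[] out = new String[FANOUT];         // replicate to every replica
            for (int i = 0; i < FANOUT; i++) out[i] = key + "#" + i;
            return out;
        }

        public static void main(String[] args) {
            System.out.println(saltLargeSide("user_42"));
            System.out.println(String.join(", ", saltSmallSide("user_42")));
        }
    }

The trade-off is extra replication of the small side plus a de-salting step, which is why such schemes usually target only keys known, or sampled, to be hot.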

NAEPASC: A novel and efficient public auditing mechanism for cloud data

Objective: With the deepening adoption of cloud computing, more and more users choose the cloud to store data. The integrity of data in the cloud is difficult to determine because the user may not keep any local copy of the data. In addition, the same user may need to store multiple copies of the data in the cloud, so simplifying key management is also a key issue. This paper attempts to design an identity-based data integrity verification mechanism suited to the cloud storage environment for detecting the correctness of data in the cloud. Innovative points: drawing on identity-based signature mechanisms, it proposes a ...
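
As background for the idea of per-block verification tags, the heavily simplified stand-in below uses ordinary RSA signatures from java.security: the owner tags each block, and an auditor holding only the public key can later check a challenged block. The actual scheme in the paper is identity-based and handles multiple replicas and key-management concerns that this sketch does not attempt to reproduce.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    // Simplified illustration of block-level integrity tags.
    public class BlockTagSketch {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair owner = gen.generateKeyPair();

            byte[][] blocks = {
                "block-0 contents".getBytes(StandardCharsets.UTF_8),
                "block-1 contents".getBytes(StandardCharsets.UTF_8),
            };

            // Owner side: one signature ("tag") per block, uploaded along with the data.
            Signature signer = Signature.getInstance("SHA256withRSA");
            byte[][] tags = new byte[blocks.length][];
            for (int i = 0; i < blocks.length; i++) {
                signer.initSign(owner.getPrivate());
                signer.update(blocks[i]);
                tags[i] = signer.sign();
            }

            // Auditor side: challenge a block and verify its tag with the public key only.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(owner.getPublic());
            verifier.update(blocks[1]);
            System.out.println("block 1 intact: " + verifier.verify(tags[1]));
        }
    }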

Research on reusable components for multi-tenant SLA in a PaaS platform

Zhang Zhenchu. First, in order to integrate a reusable component mechanism into the PaaS platform, this paper proposes an enhanced e-commerce PaaS platform that adds a component layer to the platform; the component layer is responsible for storing and invoking components. Secondly, in order to better manage the large number of components submitted by component developers and to make it easier for SaaS application developers to retrieve the components they need, the paper proposes the concept of abstract components. An abstract component is a generalization of components with similar functions; it ...

Running Hadoop on Ubuntu Linux (Single-node Cluster)

What we want to do: in this short tutorial, I'll describe the required steps for setting up a single-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux. Are lo ...

Running Hadoop on Ubuntu Linux (Multi-node Cluster)

What we want to do: in this tutorial, I'll describe the required steps for setting up a multi-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux. Are you looking f ...

Archie: architecture and practice of a big data platform in the Hadoop ecosystem

Archie Technology Product Center, Sun. An overview of Archie's big data platform in the Hadoop ecosystem, covering Archie Art, the Hadoop ecosystem, deployment architecture, operations, problems encountered, and related development.

Integrated design and thinking on the "research, construction and management" of the Army Health Cloud

The integrated design and thinking of "research and construction pipe" of Army Health cloud research and development of military health cloud second, military health cloud planning and design three, military health cloud development and deployment of military health cloud construction five, military health cloud training application six, Army health cloud operation and maintenance management troops health cloud "research and build pipe" integrated design and thinking

Network traffic detection using cloud computing technology

Xiaoping Wang Jianyong Yang Yi. In order to realize real-time and effective detection of big data network traffic, a cloud-based network traffic detection scheme is proposed. The scheme takes full advantage of the Map/Reduce programming model of the Hadoop platform for mass data processing and adopts a layered design, overcoming the inefficiency, poor scalability, and insecurity of traditional detection schemes in massive-data application environments. Application on the Chongqing Mobile DPI platform shows that the scheme is effective and the traffic detection results are good; in big data ...
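
As a flavour of how the Map/Reduce model fits this kind of traffic analysis (a generic illustration, not the Chongqing Mobile DPI pipeline), the job below totals bytes per source IP over plain-text flow records in an assumed "srcIP dstIP bytes" format; because the aggregation is associative, the reducer doubles as a map-side combiner, which is part of what lets the model scale to massive traces.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class TrafficBytesPerSource {
        // Assumed record format, one flow per line: "<srcIP> <dstIP> <bytes>".
        public static class FlowMapper extends Mapper<Object, Text, Text, LongWritable> {
            private final Text srcIp = new Text();
            private final LongWritable bytes = new LongWritable();
            @Override
            protected void map(Object key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                String[] f = value.toString().trim().split("\\s+");
                if (f.length < 3) return;                 // skip malformed lines
                try {
                    bytes.set(Long.parseLong(f[2]));
                } catch (NumberFormatException e) {
                    return;                               // skip non-numeric byte fields
                }
                srcIp.set(f[0]);
                ctx.write(srcIp, bytes);
            }
        }

        // Byte totals are associative, so the same reducer also works as a combiner.
        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                long total = 0;
                for (LongWritable v : values) total += v.get();
                ctx.write(key, new LongWritable(total));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "bytes per source ip");
            job.setJarByClass(TrafficBytesPerSource.class);
            job.setMapperClass(FlowMapper.class);
            job.setCombinerClass(SumReducer.class);       // map-side pre-aggregation
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }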

"Graphics" distributed parallel programming with Hadoop (ii)

Program example and analysis. Hadoop is an open-source distributed parallel programming framework that implements the MapReduce computing model; with the help of Hadoop, programmers can easily write distributed parallel programs, run them on a computer cluster, and complete computations over massive data. In this article, we detail how to write a Hadoop-based program for a specific parallel computing task, and how to compile and run the Hadoop program in the Eclipse environment using IBM MapReduce Tools. Preface ...
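
For concreteness, the classic WordCount job is reproduced below against the current org.apache.hadoop.mapreduce API; the original article may use the older mapred API and the IBM MapReduce Tools project wizards, so treat this as a representative sketch rather than the article's exact listing.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(Object key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                StringTokenizer it = new StringTokenizer(value.toString());
                while (it.hasMoreTokens()) {
                    word.set(it.nextToken());
                    ctx.write(word, ONE);                 // one (word, 1) pair per occurrence
                }
            }
        }

        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged into a jar, it runs as: hadoop jar wordcount.jar WordCount <input dir> <output dir>, where the output directory must not already exist.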

Analysis of a cloud-based translation model from the perspective of frame theory

Cao Ni, Jilin University. This paper analyzes the cloud-based translation model from the perspective of frame theory in cognitive linguistics. Although the research results of frame theory have been applied to translation studies, there has been little research on applying frame theory to machine translation. Cognitive linguistics holds that a frame is a knowledge system stored in people's memory that represents objective reality; people rely mainly on their own cognitive frames to transmit and understand linguistic information. Therefore, the process of translation is essentially a process of cognition, that is, people in the process of searching ...

Big data / small algorithms

Big data and small algorithms: practical user behavior research methods. Eric Shinshing. 1. The value of data analysis; 2. Big data and small algorithms; 3. Puzzles of database technology.

Design of a PaaS cloud-service model for Chinese medicine atlas files in the big data age

Ye Shaoxia Chen Qin Jia Jia Wei Ai. By analyzing the IO transmission defects of IaaS virtualization platforms, and considering the large number of documents involved in Traditional Chinese Medicine atlas research, the paper studies the limitations of migrating the Chinese medicine atlas file system to a virtual platform. In contrast to existing solutions, a file PaaS model is proposed that splits the file IO load away from the application system through business layering, decomposing the document IO-intensive application into: a UI service layer ...

Similar Chinese character recognition based on deep neural networks in big data

Charles Tau Zhang Shuye Jin Lianwen. Traditional similar handwritten Chinese character recognition (SHCCR) is limited by its feature extraction methods, so a deep neural network (DNN) is applied to learn features for similar Chinese characters automatically. This paper introduces the method for generating similar-character sets and the specific structure of the deep neural network used for similar Chinese character recognition, and studies the influence of training data of different scales on recognition performance. Experiments show that the DNN can carry out effective feature learning, avoiding the shortcomings of hand-designed features, and that traditional gradient-feature-based ...
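
For context, the output layer of such a character classifier is conventionally a softmax over the candidate character classes trained with a cross-entropy loss; the standard formulation is given below (the paper's exact network structure and loss are not reproduced here).

    % Softmax over N candidate character classes and the cross-entropy training loss.
    % z_j: j-th pre-activation of the output layer for input image x;
    % y^{(i)}: ground-truth class of the i-th training sample; M: number of samples.
    P(y = j \mid x) = \frac{e^{z_j}}{\sum_{k=1}^{N} e^{z_k}},
    \qquad
    L(\theta) = -\frac{1}{M} \sum_{i=1}^{M} \log P\!\left(y^{(i)} \mid x^{(i)}\right)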

Big data and earthquake social services

Weidong Zhang Yijun Zhaojuiru Chen Huizhong. In earthquake operations the amount of data keeps increasing; whatever technology we adopt, we are keenly aware that we are now facing the largest and most complex data sets seismology has ever encountered, and that they may pose new challenges to our traditional systems for seismic data analysis. What big data is, and how to apply big data technology to solve and handle the problems in the earthquake business, is worth deep thought and discussion.

Hadoop sparks the big data revolution as three giants all step up

Introduction: The open-source data processing platform, with its advantages of low cost, high scalability, and flexibility, has won recognition from most of the web giants. Now Hadoop is moving into more enterprises. IBM will launch its flagship DB2 database management system with built-in NoSQL technology next year. Oracle and Microsoft also disclosed last month that they plan to release Hadoop-based products next year; the two companies plan to provide deployment assistance services and enterprise-level support. Oracle has pledged to preinstall Hadoop software in its big data appliances. Big Data Revolution ...

Private Cloud Trilogy

Part One: the private cloud begins to bloom. At the very beginning of the private cloud, virtualization is implemented primarily within the IT department, for example in development systems, test systems, and some of the IT department's own applications. At this stage, the main gain for enterprises is cost reduction. The primary object of virtualization is the server; on the storage side, consolidation, increased efficiency, and simplified management are the primary requirements. The proportion of virtualization can reach around 40% of the entire IT system. Liaoning Mobile first carried out a virtualization transformation of the data center under its management center, building a new standardized, automated, and virtualized ...
