In-depth introduction to Hadoop development: example video tutorial

Source: Internet
Author: User

Hadoop example video tutorial: in-depth Hadoop development
What is Hadoop, and why learn it?
Hadoop is a distributed system infrastructure developed by the Apache Foundation. It lets you develop distributed programs without understanding the underlying distributed details, making full use of a cluster's power for high-speed computing and storage. Hadoop implements a distributed file system, HDFS. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is well suited to applications with large data sets. HDFS relaxes some POSIX requirements so that file system data can be accessed as a stream.
Hadoop is a software framework for processing large amounts of data in a distributed manner, and it does so reliably, efficiently, and scalably. It is reliable because it assumes that compute and storage elements will fail, so it maintains multiple copies of the data and can redistribute work away from failed nodes. It is efficient because it processes data in parallel, which speeds up computation. It is scalable, handling petabytes of data. In addition, Hadoop runs on commodity servers, so it is relatively low-cost and accessible to anyone.
Hadoop's framework is written in Java, so it is well suited to running on Linux in production. This course uses the Linux platform throughout to simulate real-world scenarios.
Hadoop example video tutorial, in-depth Hadoop development practice: http://www.ibeifeng.com/goods-254.html
Highlights of the in-depth Hadoop development video tutorial:
Highlight 1: comprehensive coverage with a sound knowledge system
Building on a complete Hadoop knowledge system, this course extracts the most widely applied, deepest, and most practical techniques used in real development. Through this course you will reach a new technical level and enter the world of cloud computing. Technically, you will gain a thorough understanding of basic Hadoop clusters, HDFS principles, basic HDFS commands, the NameNode's working mechanism, basic HDFS configuration management, MapReduce principles, the HBase system architecture, HBase table structure, how to use MapReduce with HBase, advanced MapReduce programming, split implementation, Hive basics, combining Hive with MapReduce, Hadoop cluster installation, and many other topics.
Highlight 2: basics + practice = application; learn by doing
A practical project is arranged at each stage of the course so that students quickly master how to apply what they learn. In the first stage, the course is combined with an HDFS application, explaining the design of an image server and how to operate HDFS through the Java API. In the second stage, the course integrates HBase to implement the features of a Weibo-style project, so students can apply what they learn immediately. In the third stage, HBase and MapReduce work together to implement a ticket query and statistics system. In the fourth stage, Hive practice is based on a real data statistics system, letting students master advanced Hive applications in the shortest time.
Highlight 3: rich hands-on experience operating a telecom group's cloud platform
Lecturer Roby has extensive working experience at China Telecom Group, where he is currently responsible for all aspects of the cloud platform, and has many years of in-house enterprise training experience. The lectures are closely aligned with enterprise needs and are never merely theoretical.
For more technical highlights of the Hadoop development video tutorial, refer to the course outline:
Section 1
> Hadoop background
> HDFS design goals
> Scenarios where HDFS is not suitable
> Detailed analysis of the HDFS architecture
> Basic principles of MapReduce
Section 2
> Hadoop version introduction
> Install the standalone version of Hadoop
> Install a Hadoop cluster
Section 3
> Basic HDFS command-line operations
> Working mechanism of the NameNode
> Basic HDFS configuration management
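The basic command-line operations covered here use the `hdfs dfs` form. A few representative commands, shown for illustration only since they require a running Hadoop cluster (paths are hypothetical):

```shell
hdfs dfs -mkdir /user/demo          # create a directory in HDFS
hdfs dfs -put local.txt /user/demo  # copy a local file into HDFS
hdfs dfs -ls /user/demo             # list a directory
hdfs dfs -cat /user/demo/local.txt  # print a file's contents
hdfs dfs -rm /user/demo/local.txt   # delete a file
```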
Section 4
> HDFS application practice: image server (1) - system design
> Build a PHP + Bootstrap + Java application environment
> Use the Hadoop Java API to write files to HDFS
Section 5
> HDFS application practice: image server (2)
> Use the Hadoop Java API to read files in HDFS
> Use the Hadoop Java API to list an HDFS directory
> Use the Hadoop Java API to delete files in HDFS
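The read, list, and delete operations above go through the `org.apache.hadoop.fs.FileSystem` API. A minimal sketch of all three, shown for illustration: the paths are hypothetical, and it assumes `hadoop-client` on the classpath and a reachable cluster configured via `core-site.xml`:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsOps {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        // Read a file from HDFS and copy it to stdout
        try (FSDataInputStream in = fs.open(new Path("/user/demo/local.txt"))) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }

        // List a directory
        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        // Delete a file (second argument: delete recursively if a directory)
        fs.delete(new Path("/user/demo/local.txt"), false);
    }
}
```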
Section 6
> Basic principles of MapReduce
> The MapReduce execution process
> Build a Java development environment for MapReduce
> Use the MapReduce Java interface to implement wordcount
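Before introducing the Hadoop `Mapper`/`Reducer` classes, it helps to see the shape of the wordcount computation. This is the map, shuffle, and reduce flow simulated in plain Java with no Hadoop dependencies; it is a sketch of the logic only, not the actual MapReduce job:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCountSketch {
    public static Map<String, Integer> wordCount(List<String> lines) {
        // Map phase: emit a (word, 1) pair for every word in every line
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    pairs.add(Map.entry(word, 1));
                }
            }
        }
        // Shuffle + reduce phase: group pairs by key and sum the values
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("hello hadoop", "hello hdfs")));
        // prints {hadoop=1, hdfs=1, hello=2}
    }
}
```

In the real job, the map loop becomes a `Mapper`, the grouping is done by the framework's shuffle, and the summing loop becomes a `Reducer`.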
Section 7
> Analysis of the wordcount execution process
> The MapReduce combiner
> Implement data deduplication with MapReduce
> Sort data with MapReduce
> Calculate average scores with MapReduce
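The average-score exercise reduces (student, score) pairs grouped by student down to one mean per student. Here is the core reduce-side logic sketched in plain Java without the Hadoop runtime; class and data names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class AverageScoreSketch {
    // Reduce step: for each key, average all the values grouped under it
    // (in a real job, the shuffle delivers this grouping to the reducer)
    public static Map<String, Double> averageByKey(Map<String, int[]> grouped) {
        Map<String, Double> averages = new HashMap<>();
        for (Map.Entry<String, int[]> entry : grouped.entrySet()) {
            long sum = 0;
            for (int score : entry.getValue()) {
                sum += score;
            }
            averages.put(entry.getKey(), (double) sum / entry.getValue().length);
        }
        return averages;
    }

    public static void main(String[] args) {
        Map<String, int[]> grouped = new HashMap<>();
        grouped.put("alice", new int[]{80, 90, 100});
        grouped.put("bob", new int[]{70, 80});
        System.out.println(averageByKey(grouped));
        // alice -> 90.0, bob -> 75.0 (map iteration order may vary)
    }
}
```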
Section 8
> HBase in detail
> HBase system architecture
> HBase table structure: rowkeys, column families, and timestamps
> The master, regions, and region servers in HBase
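Rowkey design drives read performance in HBase, because rows are stored sorted by key bytes. A common pattern for time-ordered data such as a feed (the kind of table the Weibo project needs) is to append an inverted timestamp so the newest rows sort first. A plain-Java sketch; the `userId` scheme and key format are hypothetical:

```java
public class RowKeySketch {
    // Build a rowkey of the form "<userId>_<inverted timestamp>".
    // Long.MAX_VALUE - ts makes newer events sort first within a user's
    // rows, because HBase stores rows in lexicographic byte order.
    public static String rowKey(String userId, long timestampMillis) {
        long inverted = Long.MAX_VALUE - timestampMillis;
        // Zero-pad to a fixed width so string order matches numeric order
        return String.format("%s_%019d", userId, inverted);
    }

    public static void main(String[] args) {
        String older = rowKey("user42", 1_000L);
        String newer = rowKey("user42", 2_000L);
        // The newer event's key sorts before the older one's
        System.out.println(newer.compareTo(older) < 0);  // true
    }
}
```

A scan over the prefix `user42_` then returns that user's newest rows first without any sorting step.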
Section 9
> Use HBase to implement a Weibo application (1)
> User registration, login, and logout design
> Build the environment: Struts2 + JSP + Bootstrap + jQuery + HBase Java API
> Design of the user-related HBase table structures
> User registration implementation
Section 10
> Use HBase to implement a Weibo application (2)
> Use sessions for user login and logout
> "Follow" feature design
> "Follow" table structure design
> "Follow" feature implementation
Section 11
> Use HBase to implement a Weibo application (3)
> "Weibo" feature design
> "Weibo" feature table structure design
> "Weibo" feature implementation
> Demonstration of the complete running application
Section 12
> Introduction to HBase and MapReduce
> How HBase uses MapReduce
Section 13
> HBase application practice: ticket query and statistics (1)
> Overall application design
> Development environment setup
> Table structure design
Section 14
> HBase application practice: ticket query and statistics (2)
> Design and implementation of ticket ingestion
> Design and implementation of ticket query
Section 15
> HBase application practice: ticket query and statistics (3)
> Statistics feature design
> Statistics feature implementation
Section 16
> In-depth MapReduce (1)
> Implementation of splits
> Custom input implementation
> Worked example
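Hadoop's `FileInputFormat` derives each split's size from the block size and the configured min/max bounds, `max(minSize, min(maxSize, blockSize))`, and then cuts the file at that interval. A plain-Java sketch of that computation (no Hadoop classes; the real implementation also allows a final split up to 10% larger rather than emitting a tiny remainder):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // The formula FileInputFormat uses to pick a split size
    public static long splitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    // Compute (offset, length) pairs covering a file of the given length
    public static List<long[]> splits(long fileLength, long splitSize) {
        List<long[]> result = new ArrayList<>();
        long offset = 0;
        while (offset < fileLength) {
            long length = Math.min(splitSize, fileLength - offset);
            result.add(new long[]{offset, length});
            offset += length;
        }
        return result;
    }

    public static void main(String[] args) {
        // With the defaults, the split size is simply the block size
        long size = splitSize(128L * 1024 * 1024, 1L, Long.MAX_VALUE);
        List<long[]> s = splits(300L * 1024 * 1024, size);
        System.out.println(s.size());  // 3 splits: 128 MB, 128 MB, 44 MB
    }
}
```

Each split becomes the input of one map task, which is why the number of map tasks tracks the number of blocks by default.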
Section 17
> In-depth MapReduce (2)
> Reduce-side partitioning
> Worked example
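Which reduce task receives a key is decided by the partitioner. Hadoop's default `HashPartitioner` computes `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`; mirrored here in plain Java as a sketch:

```java
public class PartitionSketch {
    // Mirrors Hadoop's default HashPartitioner: mask off the sign bit,
    // then take the hash modulo the number of reduce tasks.
    public static int partition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // Every occurrence of the same key lands in the same partition,
        // so a single reducer sees all the values for that key.
        System.out.println(partition("hadoop", 4) == partition("hadoop", 4));  // true
        System.out.println(partition("hadoop", 4));  // some value in [0, 4)
    }
}
```

A custom `Partitioner` replaces this function when the grouping needs to follow application logic, for example sending all records for one user to one reducer.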
Section 18
> Getting started with Hive
> Install Hive
> Use Hive to store structured data in HDFS
> Basic use of Hive
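Storing structured data through Hive amounts to creating a table backed by files in HDFS and loading data into it. A minimal HiveQL sketch; the table and file names are hypothetical:

```sql
-- Table backed by delimited text files in the Hive warehouse on HDFS
CREATE TABLE scores (student STRING, score INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Load a local file into the table (copies it into HDFS)
LOAD DATA LOCAL INPATH '/tmp/scores.tsv' INTO TABLE scores;

-- Queries like this are compiled into MapReduce jobs
SELECT student, AVG(score) FROM scores GROUP BY student;
```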
Section 19
> Use MySQL as the Hive metastore
> Combine Hive with MapReduce
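Pointing Hive's metastore at MySQL comes down to four JDBC properties in `hive-site.xml`; the host, database name, and credentials below are placeholders:

```xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_password</value>
  </property>
</configuration>
```

The MySQL JDBC driver jar must also be placed on Hive's classpath (typically in Hive's `lib` directory).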
Section 20
> Hive application practice: data statistics (1)
> Application design and table structure design
Section 21
> Hive application practice: data statistics (2)
> Data import and statistics implementation
