Convert Data into Gold: Hadoop Video Notes 03

Source: Internet
Author: User

Video address: http://pan.baidu.com/s/1dDEgKwD

This session focuses on HDFS.

I ran the wordcount sample program again myself (on a pseudo-distributed setup):

1. Prepare the input data (my steps differ from the instructor's, but I trust my own approach)

2. Run the wordcount program

3. View the results

(As you can see, any run of characters that contains no space is counted as a single word.)
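That tokenization behavior can be mimicked outside Hadoop (a sketch, not the real wordcount; the input text is made up):

```shell
# Simulate wordcount's whitespace-only tokenization:
# split on spaces/tabs, then count each distinct token.
# "hello" appears twice; "hello,world" and "world" once each.
printf 'hello world\nhello,world hello\n' \
  | tr -s ' \t' '\n' \
  | sort | uniq -c
```

Note that hello,world comes out as a single "word": the comma is not a separator, only whitespace is.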

 

Next, how to view job status and HDFS status through the web UIs on ports 50030 (JobTracker) and 50070 (NameNode).

......

If you want to view the logs, you can visit

http://localhost:50070/logs/

http://localhost:50030/logs/

To view stack information:

http://localhost:50030/stacks

 

A file in HDFS can no longer be modified once written, and that is by design: any subsequent "modification" is really a delete followed by a rewrite.

 

 

The rack awareness policy looks amazing. I know the result of rack awareness, but I do not know what strategy it uses to detect racks. Maybe it is based on network topology and bandwidth; the lecturer did not know either. I will keep this for future exploration.

As on Windows, deleted files are not removed immediately; they go to a trash directory and are purged automatically later.
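If I remember correctly, the purge is actually driven by a time interval, fs.trash.interval in core-site.xml (a sketch; the value is in minutes, and 0 disables the trash entirely):

```xml
<!-- core-site.xml: keep deleted files in the trash for 60 minutes -->
<property>
  <name>fs.trash.interval</name>
  <value>60</value>
</property>
```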

Snapshot mechanism: the lecturer said it is not implemented in 0.20.2 and will be added in a future release.

 

HDFS files can be operated on via the command line or via the API (the API is for use from Java code, for example).

A few of the command-line operations:

......

View cluster statistics: hadoop dfsadmin -report

 

Next, how to add a new node. I thought the lecturer's steps were wrong, or at least incomplete, so I checked online and wrote up the experiment in detail.

Load balancing of HDFS storage

The lecturer's explanation left me somewhat confused.

The balancer script is run by hand rather than by Hadoop itself; the actual situation is as follows.

 

When starting out, you can lower the log4j log level to INFO or DEBUG so that more information is printed.
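For instance, in conf/log4j.properties (a sketch of the stock file; hadoop.root.logger is the setting the startup scripts read):

```properties
# conf/log4j.properties: raise verbosity by lowering the threshold to DEBUG
hadoop.root.logger=DEBUG,console
log4j.rootLogger=${hadoop.root.logger}
```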

 

Why have logs become the most common kind of data in Hadoop projects?

They are written once and never changed, so they can only be used for analysis.

 

How do you count the number of files in a directory on Linux?

ls | wc -l
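A quick demonstration in a scratch directory (the directory and file names are made up):

```shell
# Create a scratch directory with three files, then count them
mkdir -p /tmp/count_demo
touch /tmp/count_demo/a.txt /tmp/count_demo/b.txt /tmp/count_demo/c.txt
ls /tmp/count_demo | wc -l
```

Keep in mind that ls counts one entry per line, so subdirectories are counted too, not just regular files.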

Functions of the shuffle process:

1. Compress the intermediate files to improve transfer efficiency;

2. Take over part of the reduce-side work.

 

Many MapReduce programs need to do work like split and sort, so Hadoop separates these out and encapsulates them into a component.

You do not have to write them in every MR program.

 

An MR job can be submitted from any machine in the cluster, not only from the namenode.

In other words, the client can be a datanode or the namenode.

Starting a JVM wastes time and resources, so Hadoop supports JVM reuse.
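In 0.20.x, reuse is controlled per job by the mapred.job.reuse.jvm.num.tasks property (a sketch for mapred-site.xml; -1 means a JVM may be reused without limit):

```xml
<!-- mapred-site.xml: let each task JVM run up to 10 tasks before exiting -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>10</value>
</property>
```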

Why does the namenode need a format?

Formatting here is different from formatting a disk file system: it initializes the file system's metadata and creates directories such as current under the configured storage directory.

Why is an in_use.lock file needed if the Hadoop data is never modified?

It locks the directory to prevent conflicting concurrent writes to it.
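The same idea can be sketched with a plain lock file in the shell (a hypothetical path, not Hadoop's actual implementation):

```shell
# A plain-shell sketch of an in_use.lock style directory lock.
# set -C (noclobber) makes the redirect fail if the file already exists.
mkdir -p /tmp/data_dir
if (set -C; echo $$ > /tmp/data_dir/in_use.lock) 2>/dev/null; then
  echo "lock acquired"
else
  echo "directory already in use"
fi
```

Running it a second time takes the else branch, much as a second daemon pointed at the same storage directory would refuse to start.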

 
