Video address: http://pan.baidu.com/s/1dDEgKwD
Focuses on HDFS
I ran the wordcount sample program again myself (in pseudo-distributed mode):
1. Prepare the input data (my steps differ from the instructor's, but I trust my own approach)
2. Run the wordcount program
3. View the results
(As you can see, anything not separated by whitespace is treated as a single word.)
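The whitespace-only tokenization can be checked locally without Hadoop; this plain-shell sketch mimics what wordcount does (the sample text is my own):

```shell
# Mimic wordcount's tokenization: split on whitespace only, so the
# token "hello,world" (no space around the comma) counts as one word.
printf 'hello world\nhello,world hello\n' \
  | tr -s ' \t' '\n' \
  | sort | uniq -c | sort -rn
```

Here `hello` is counted twice while `hello,world` remains a single word, because only whitespace separates tokens.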
Next, how to view job and HDFS status through the web UIs on ports 50030 (JobTracker) and 50070 (NameNode).
......
To view the logs, open
http://localhost:50070/logs/
http://localhost:50030/logs/
To view stack information:
http://localhost:50030/stacks
A file can no longer be modified once written, and that is by design: any later "modification" is really a delete followed by a rewrite.
The rack awareness policy looks impressive. I know what rack awareness produces, but not what strategy HDFS uses to perceive rack placement; perhaps it involves network topology and bandwidth. The lecturer did not know either, so I am leaving this for future exploration.
As in Windows, deleted files are not removed immediately; they are purged automatically once a certain capacity is reached.
Snapshot mechanism: the lecturer said it is not implemented in 0.20.2 and will be added in a future release.
HDFS can be operated through the shell command line or through an API (the Java API, for example, is used from code).
A few of the commands:
......
View cluster statistics: hadoop dfsadmin -report
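For reference, the basic file commands from this part of the series look like this in 0.20-era syntax (the paths are placeholders of my own; these need a running cluster, so treat this as a command sheet rather than a runnable script):

```shell
hadoop fs -ls /               # list the HDFS root directory
hadoop fs -mkdir /in          # create a directory
hadoop fs -put local.txt /in  # upload a local file
hadoop fs -cat /in/local.txt  # print a file's contents
hadoop fs -rmr /in            # recursively delete (0.20.x syntax)
hadoop dfsadmin -report       # capacity and per-datanode statistics
```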
Next we discuss how to add a new node. I suspected the lecturer's explanation was wrong, or at least incomplete, so I looked it up online and wrote up the experiment in detail.
Load balancing (of HDFS storage)
The lecturer's explanation left me somewhat confused.
The balancer is run as its own script, not through the hadoop command; the actual invocation looks like this.
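Based on what I found online, a sketch of the invocation (the threshold value shown is the default, not something from the video; this needs a running cluster):

```shell
# The balancer is started by its own script, not via `hadoop <cmd>`.
# -threshold is the allowed deviation, in percent, of a datanode's disk
# usage from the cluster average before blocks are moved.
start-balancer.sh -threshold 10

# Stop a running balancer:
stop-balancer.sh
```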
When getting started, you can lower the log4j log level to INFO or DEBUG so that more information is printed.
Why have logs become the most popular kind of data in Hadoop projects?
They are written once, never modified, and used only for analysis, which matches HDFS exactly.
How do you count the number of files in a directory on Linux?
ls | wc -l
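A quick check in a scratch directory (the file names are made up):

```shell
# Count the entries in a directory: ls prints one name per line when
# its output goes to a pipe, and wc -l counts the lines.
d=$(mktemp -d)
touch "$d/a.txt" "$d/b.txt" "$d/c.txt"
ls "$d" | wc -l   # prints 3
rm -rf "$d"
```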
Functions of the shuffle process:
1. Compress files to improve transfer efficiency;
2. Take over part of the reduce work.
Many MapReduce programs need the same split and sort steps, so Hadoop factors them out and encapsulates them as a component;
you do not have to write them in every MR program.
An MR program (job) can be submitted from any machine in the cluster, not only from the namenode.
That is, the client can be a datanode or the namenode itself.
Starting a JVM for every task wastes time and resources, so Hadoop supports JVM reuse.
Why does the namenode need to be formatted?
This formatting is different from formatting a disk file system: it initializes the file system metadata and creates directories such as current under the configured metadata directory.
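The command and its typical result (the directory contents listed are from my memory of 0.20.x; your dfs.name.dir layout may differ slightly):

```shell
# One-time initialization of the namenode's metadata directory.
# WARNING: reformatting destroys any existing HDFS metadata.
hadoop namenode -format

# Afterwards the configured dfs.name.dir contains, roughly:
#   current/VERSION   current/fsimage   current/edits   current/fstime
```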
What is the in_use.lock file in the Hadoop data directory for?
It locks the directory to prevent concurrent write conflicts in that directory.
"Turning data into gold" Hadoop video, part 03