HDFS file upload: port 8020 connection refused problem solved!

Tags: zookeeper, hadoop, fs


copyFromLocal: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException

The error indicates that port 8020 on the local machine cannot be connected to.

An article found online suggests changing the port configured in core-site.xml to 8020. Since I am still using the default port 9000, though, all that is needed is to set the port to 9000 when configuring Eclipse.

That solved my problem, but the original author wrote up a few other issues further down, so I have pasted them here to keep them searchable for future reference.

Source: http://www.csdn123.com/html/itweb/20130801/34361_34373_34414.htm

Key configuration files for Hadoop
After successfully uploading a file to HDFS, I got curious and started modifying the configuration files. Since my earlier experience with Hadoop had been a matter of trial and error, without a systematic understanding, I looked up some information on the Hadoop configuration files online and started making changes, and promptly ran into a new error. Let's start by going over Hadoop's configuration files:
1. hadoop-env.sh
This configures Hadoop's runtime environment. It mainly sets two environment variables, HADOOP_HOME and JAVA_HOME, pointing them at the corresponding installation paths.
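For reference, a minimal hadoop-env.sh fragment along these lines might look as follows; the paths are examples only and will differ on your machine:

    # hadoop-env.sh (sketch) -- point Hadoop at the JDK and its own install directory
    # Example paths only; adjust them to your installation
    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
    export HADOOP_HOME=/home/hadoop/hadoop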
2. core-site.xml
Make sure the fs.default.name property is configured correctly. This property identifies the NameNode, and as we all know a Hadoop system normally has only one NameNode managing all the DataNodes, so it must be set correctly, for example hdfs://localhost:8020. The default is port 9000, but that did not work properly on my own Ubuntu machine, so I changed it to 8020. Port 8020 is the NameNode's RPC port in Hadoop.
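As a rough sketch (not the author's exact file), the relevant part of core-site.xml would look something like this:

    <!-- core-site.xml (sketch): fs.default.name points clients at the NameNode's RPC address -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <!-- the default port is 9000; this setup uses 8020 -->
        <value>hdfs://localhost:8020</value>
      </property>
    </configuration>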
3. hdfs-site.xml
The dfs.replication property, as the name implies, specifies the number of replicas kept for each HDFS block; the default is 3 copies, and it can be set to 1.
The dfs.name.dir property sets the directory where the NameNode stores its data. This is important: if that directory cannot be accessed, the NameNode will fail.
The dfs.data.dir property specifies the local directory where each DataNode stores its data; it is independent of the NameNode setting.
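A sketch of hdfs-site.xml with these three properties might look like the following; the local paths are examples only:

    <!-- hdfs-site.xml (sketch) -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <!-- replicas per block; default 3, 1 is enough on a single node -->
        <value>1</value>
      </property>
      <property>
        <name>dfs.name.dir</name>
        <!-- where the NameNode keeps its data (example path) -->
        <value>/home/hadoop/hdfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <!-- where each DataNode stores block data locally (example path) -->
        <value>/home/hadoop/hdfs/data</value>
      </property>
    </configuration>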
4. mapred-site.xml
The mapred.job.tracker property sets the host (hostname or IP address) and port of the JobTracker; it can be set to localhost:9001.
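For example, a minimal mapred-site.xml sketch:

    <!-- mapred-site.xml (sketch) -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <!-- host and port of the JobTracker -->
        <value>localhost:9001</value>
      </property>
    </configuration>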
For the HBase system, the main points to note in the configuration files are the following:
1. hbase-env.sh
This sets the environment variables; you need to export the Java path. On the last line there is a setting, HBASE_MANAGES_ZK, which should be set to true to use the bundled ZooKeeper; otherwise HBase will report an error at runtime about being unable to start ZooKeeper. Alternatively, you can install ZooKeeper separately with apt-get and run it yourself.
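A minimal hbase-env.sh sketch; the JDK path is only an example:

    # hbase-env.sh (sketch)
    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk   # JDK used by HBase (example path)
    export HBASE_MANAGES_ZK=true                   # let HBase manage its bundled ZooKeeper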
2. hbase-site.xml
The hbase.rootdir property sets the shared directory for the region servers. By default it is written to /tmp, so the data is lost after a reboot; I set it to hdfs://localhost:8020/hbase.
The zookeeper.znode.parent property specifies the znode used by HBase, which generally defaults to /hbase.
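Put together, a sketch of hbase-site.xml with these two properties:

    <!-- hbase-site.xml (sketch) -->
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <!-- keep HBase data in HDFS rather than the default /tmp location -->
        <value>hdfs://localhost:8020/hbase</value>
      </property>
      <property>
        <name>zookeeper.znode.parent</name>
        <!-- root znode used by HBase in ZooKeeper -->
        <value>/hbase</value>
      </property>
    </configuration>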

After finishing the changes to the configuration files, I found that the NameNode would not start when launching Hadoop, and running hadoop namenode -format did not succeed either:

It reported "Cannot create directory /hdfs/name/current". There are generally two causes for this error: either the path is set incorrectly, or, more likely, it is a permissions problem. I had used "~" to refer to the home directory in the NameNode storage path, but it turns out that does not work; you must use an absolute path, so I changed it to /home/hadoop/hdfs/. Then make sure the permissions of the home directory are 775 and the permissions of the hadoop directory are 775 as well.
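The fix boils down to something like the following commands (a sketch only; the hadoop user name and paths are assumptions based on the setup described above):

    # Create the NameNode directory with an absolute path (example path)
    mkdir -p /home/hadoop/hdfs/name
    # Make sure the hadoop user owns it and the permissions are 775
    chown -R hadoop:hadoop /home/hadoop/hdfs
    chmod -R 775 /home/hadoop/hdfs
    # Re-format the NameNode once the directory is accessible
    hadoop namenode -format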
650) this.width=650; "src=" Http://www.csdn123.com/attachment/201308/1/26275986_1375337683KEv8.png "width=" 700 " height= "548"/>
Here, hadoop fs -ls [directory] lists the contents of a directory, and hadoop fs -lsr lists its subdirectories recursively. -mkdir and -rmr create and delete directories, respectively. We can then use hadoop fs -put src hdfs://localhost:8020/user/hadoop/img to copy our own img and JSON files into HDFS.
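Put together, with hypothetical paths and file names, those commands look roughly like this:

    hadoop fs -ls /user/hadoop                    # list a directory
    hadoop fs -lsr /user/hadoop                   # list it recursively
    hadoop fs -mkdir /user/hadoop/img             # create a directory
    hadoop fs -rmr /user/hadoop/tmp               # remove a directory recursively
    hadoop fs -put img.json hdfs://localhost:8020/user/hadoop/img   # copy a local file into HDFS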
650) this.width=650; "src=" Http://www.csdn123.com/attachment/201308/1/26275986_13753379240T5p.png "width=" 700 " height= "312"/>
A small side note: while resolving the NameNode directory-creation failure, I discovered where Hadoop stores the NameNode and data directories by default, namely under /tmp/hadoop-hadoop; of course, once dfs.name.dir is set, the data is stored in the specified directory instead. HDFS files live on the local file systems of the actual nodes, but in a specially encoded form with HDFS's own file-system tree structure, so you generally cannot just cd into them to look around. Uploading a file simply adds your own file to the HDFS tree.
650) this.width=650; "src=" Http://www.csdn123.com/attachment/201308/1/26275986_13753380539XUw.png "width=" 700 " height= "555"/>

Because the configuration files need to be modified frequently, and you never know when a careless change will break the configuration, regular backups are a good habit. You can use Ubuntu's tar command to create the backup, generating backup.tgz and placing it under /:
tar -cvpzf backup.tgz --exclude=/proc --exclude=/backup.tgz --exclude=/lost+found --exclude=/mnt --exclude=/sys /



This article is from the "Silent Chen Gui" blog; please be sure to keep this source: http://snaile.blog.51cto.com/8061810/1563875
