HDFS file upload: solving the port 8020 "connection refused" problem. copyFromLocal: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException. This error indicates that port 8020 on the local machine cannot be connected to. An article found online suggests changing the port configured in core-site.xml.
Tonight I was reading Hadoop: The Definitive Guide. Chapter 3 covers HDFS, and I followed its example, getting the code set up in Eclipse. Execution kept failing with an error saying it could not connect to localhost:8020. I checked the configuration files, and this port is indeed not configured anywhere, but core-site.xml does contain hdfs://localhost:9000. It appears the client falls back to port 8020 by default.
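For reference, the relevant core-site.xml fragment looks like the following. Whether you should use port 9000 or 8020 depends on which port your NameNode actually listens on; the property name is fs.defaultFS in Hadoop 2.x (fs.default.name in older releases). The values here are illustrative, not taken from the original article.

```xml
<!-- core-site.xml: make the client and the NameNode agree on one URI/port -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

If the client code hard-codes hdfs://localhost:8020, either change the code to match this value or change this value to port 8020 and restart the NameNode.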
80% of software defects are often concentrated in 20% of the software. This principle tells us that if you want software testing to be effective, remember to visit its high-risk areas frequently. There are many possibilities for discovering software
A case creating a directory on the HDFS file system:

package org.zero01.hadoop.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.net.URI;

/**
 * @program: hadoop-train
 * @description: Hadoop HDFS Java API operations
 * @author:
 * @create: 2018-03-25 13:59
 **/
public class HdfsApp {
    // HDFS file system server address and port
    public static final String
the key dfs.ha.fencing.ssh.private-key-files used for SSH communication, and the connection timeout.
hdfs-site.xml:
dfs.nameservices = myhdfs
dfs.ha.namenodes.myhdfs = nn1,nn2
dfs.namenode.rpc-address.myhdfs.nn1 = debugo01:8020
dfs.namenode.rpc-address.myhdfs.nn2 = debugo02:8020
dfs.namenode.http-address.myhdfs.nn1 = debugo01:50070
dfs.namenode.http-address.myhdfs.nn2 = debugo02:50070
dfs.namenode.sh
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Compressed files cannot be uploaded; it appears to be a file-name problem, and video files are presumably even worse.
16/06/26 18:18:59 INFO ipc.NettyServer: [id: 0x6fef6466, /192.168.184.188:40594 => /192.168.184.188:44444] CONNECTED: /192.168.184.188:40594
16/06/26 18:19:05 INFO hdfs.HDFSDataStream: serializer = TEXT, Use
About the java.io.IOException: Call to master/10.200.187.77:8020 failed on local exception: java.nio.channels.ClosedByInterruptException problem:
Recently I switched the Hadoop logs to DEBUG mode for careful analysis. Below is an excerpt of the error information from the logs:
Note the highlighted section: this message shows the connection to 77:40956 being closed, which causes the subsequent write to throw an exception.
2012-06-28 14:12:20,433 DEBUG org.apache.hadoop.
using Hive 1.x releases.
Query ID = root_20161226235345_5b3fcc2b-7f90-4b10-861f-31cbaed8eb73
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
Querying data in an index table:
hive> select * from index_table_student;
OK
1 hdfs://liuyazhuang121:8020/opt/hive/warehouse/student/sutdent.txt
/mahout0.9/mahout-core-0.9-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/12/06 00:10:32 INFO common.AbstractJob: Command line arguments: {--booleanData=[true], --endPhase=[2147483647], --input=[hdfs://192.168.1.170:8020/user/root/userCF], --maxPrefsInItemSimilarity=[500], --maxPrefsPerUser=[10], --maxSimi
: displays help information
Appendix 6: Upload to the destination:
hadoop fs -put /home/wx/Desktop/winevt hdfs://master:8020/
hadoop fs -ls hdfs://master:8020/winevt
To delete the object first, run:
hadoop fs -rm -r hdfs://master:8020/winevt
To have the command executed automatically, add it to the /etc/crontab file in the proper format so that it runs automatically on
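For reference, an /etc/crontab entry might look like the line below. The schedule (daily at 01:00) is an example of my own, not from the original article; note that the system-wide /etc/crontab format includes a user field before the command.

```
# /etc/crontab -- minute hour day-of-month month day-of-week user command
0 1 * * * root hadoop fs -put /home/wx/Desktop/winevt hdfs://master:8020/
```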
, you can use stop-hbase.sh to stop HBase. 7. Enter the HBase shell to view and insert data:
]# hbase shell
Ways to use the HBase shell from the command line:
Echo "Scan ' test123 '" |hbase shell >123.txt
This command does not require starting the interactive HBase shell; it scans the table and writes the result into 123.txt.
HBase issue highlights: 1. Unable to connect to port 8020
After running start-hbase.sh, the HMaster on the NameNode host started and then quickly exited
MapReduce implements matrix multiplication: implementation code
Previously I wrote an article on the algorithmic idea behind implementing matrix multiplication in MapReduce: "MapReduce implements the algorithm idea of matrix multiplication".
To give you a more intuitive understanding of program execution, I have compiled the implementation code for your reference.
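To make the map/reduce idea concrete before the full program, here is a minimal single-process sketch in plain Java. This is not actual Hadoop MapReduce code, and the class and method names are my own: the "map" phase emits each partial product A[i][j]*B[j][k] keyed by the output cell (i,k), and the "reduce" phase sums the partial products for each cell.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MatrixMultiplySim {

    // "Map" phase: for C = A x B, each A[i][j] pairs with each B[j][k],
    // emitting the partial product under the key of the output cell "i,k".
    // In real MapReduce these emissions would be shuffled to reducers by key.
    static Map<String, List<Integer>> mapPhase(int[][] a, int[][] b) {
        Map<String, List<Integer>> emitted = new HashMap<>();
        int n = a.length, m = b.length, p = b[0].length;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int k = 0; k < p; k++)
                    emitted.computeIfAbsent(i + "," + k, key -> new ArrayList<>())
                           .add(a[i][j] * b[j][k]);
        return emitted;
    }

    // "Reduce" phase: sum all partial products that share one output cell.
    static int[][] reducePhase(Map<String, List<Integer>> emitted, int n, int p) {
        int[][] c = new int[n][p];
        for (Map.Entry<String, List<Integer>> e : emitted.entrySet()) {
            String[] cell = e.getKey().split(",");
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            c[Integer.parseInt(cell[0])][Integer.parseInt(cell[1])] = sum;
        }
        return c;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 2}, {3, 4}};
        int[][] b = {{5, 6}, {7, 8}};
        int[][] c = reducePhase(mapPhase(a, b), 2, 2);
        // Expected result: [[19, 22], [43, 50]]
        for (int[] row : c) {
            StringBuilder sb = new StringBuilder();
            for (int v : row) sb.append(v).append(' ');
            System.out.println(sb.toString().trim());
        }
    }
}
```

In the real Hadoop job, the map emissions are written as key/value pairs and the framework's shuffle replaces the in-memory HashMap.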
Programming environment:
Java version "1.7.0_40"
Eclipse Kepler
Windows 7 x64
Ubuntu 12.04 LTS
Hadoop 2.2.0
VMware 9.0.0 build-812388
Input d
, $routeParams, $filter) {
    console.log($routeParams);
});
If you use routing alone, the above code will suffice. It guarantees:
1. When the page is on the homepage or some other strange place, it automatically jumps to /all. The URL becomes http://127.0.0.1:8020/finishangularjs-mark2/index.html#/all; just note the #/all part after index.html.
2. When the page navigates to /other, it jumps to http://127.0.0.1:8020/finishAngularJS-mark2/iother.html