Troubleshooting problems when starting spark-shell
Today, while starting spark-shell, the following problems appeared one after another.
1) Port occupancy: port 4040 was already taken. Port 4040 is the default port of the Spark application web UI, but here it was occupied by another process, as shown in the output below:
Check who holds the port with the following command: netstat -ap | grep 4040
tcp6       0      0 [::]:4040       [::]:*       LISTEN      2456/java
The port turned out to be occupied by process ID 2456, so kill that process:
kill -9 2456
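If you prefer to find and kill the holder of port 4040 in one step, or simply avoid the conflict, the following sketch may help. It assumes lsof is installed, and the port value 4050 is only an example:

# Find the PID listening on port 4040 (assumes lsof is available)
lsof -i :4040 -sTCP:LISTEN

# Kill it by PID in one step (double-check the PID before killing)
kill -9 "$(lsof -t -i :4040 -sTCP:LISTEN)"

# Alternatively, avoid the conflict by starting spark-shell on a different UI port
# (4050 is just an example value)
spark-shell --conf spark.ui.port=4050

Note that if 4040 is busy, Spark will normally try the next ports (4041, 4042, ...) by itself, so killing the old process is only necessary if you specifically want port 4040 back.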
2) Due to insufficient disk space, low memory, an unexpected power-off, or similar causes, the DataNode lost data blocks and HDFS entered safe mode. Leave safe mode with the following command: bin/hadoop dfsadmin -safemode leave. Then scan the filesystem and remove the corrupt blocks with bin/hdfs fsck / -delete. The fsck report afterwards looked like this (a verification sketch follows the report):
 Minimally replicated blocks:   0 (0.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     0.0
 Corrupt blocks:                43
 Missing replicas:              0
 Number of data-nodes:          0
 Number of racks:               0
FSCK ended at Fri 20:51:22 PDT 2016 in milliseconds
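To confirm that the cleanup worked, a couple of standard HDFS commands can be used. This is a minimal sketch; the paths assume you run them from the Hadoop installation directory:

# Check whether the NameNode is still in safe mode ("Safe mode is OFF" is expected)
bin/hdfs dfsadmin -safemode get

# Re-run fsck without -delete; a clean filesystem reports
# "The filesystem under path '/' is HEALTHY" and 0 corrupt blocks
bin/hdfs fsck /

Keep in mind that fsck -delete permanently removes the files whose blocks are corrupt, so it only makes the filesystem consistent again; the lost data itself is not recovered.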