First, the original cluster configuration:
Master machine: sparkmaster, ubuntu12.04-32, username root, memory 4g (used only for task scheduling and assignment, not as a compute node)
Slave machine: sparkslave1, ubuntu12.04-32, username root, memory 4g (compute node)
Slave machine: sparkslave2, ubuntu12.04-32, username root, memory 1.7g (compute node)
Second, reason for the expansion: the data volume grew and the original two worker nodes could no longer meet the real-time requirements. Since the laboratory's computing resources are limited, the original scheduling node was also turned into a compute node; after the expansion, sparkmaster serves as both the scheduling node and a compute node.
Third, the configuration changes: cd /usr/local/spark/spark-1.0.2-bin-hadoop2/conf
vim slaves: edit the slaves file, which originally listed only sparkslave1 and sparkslave2, add sparkmaster, then save and exit; a sketch of the resulting file is shown below.
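For reference, the conf/slaves file simply lists one worker hostname per line; after the edit it should look roughly like this (hostnames taken from the cluster description above, a sketch rather than the author's exact file):

    sparkmaster
    sparkslave1
    sparkslave2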
(It would have been just that one step, but spark-env.sh also sets SPARK_WORKER_MEMORY=1g; with so little memory the capacity for in-memory computation is directly limited, so large files are read from and written to disk frequently, which wastes a lot of time. I therefore wanted to raise it to sparkslave2's maximum usable memory, 1.6g (SPARK_WORKER_MEMORY is bounded by the smallest node memory in the cluster, the "barrel principle": the shortest stave determines the capacity). So spark-env.sh was edited as well, then saved; see the sketch after this paragraph.)
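The relevant line in conf/spark-env.sh would look roughly like the following (a sketch of the intended change; as the error analysis below explains, this particular value turns out to be invalid):

    export SPARK_WORKER_MEMORY=1.6g   # intended setting; rejected by Spark, see Fifth below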
Fourth, the error symptom: the master starts normally, but the workers report an error on startup:
Fifth, error analysis: examining the worker's log file (cat /usr/local/spark/spark-1.0.2-bin-hadoop2/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-zhangbo.out) shows that SPARK_WORKER_MEMORY=1.6g caused a data format error: the value cannot be a floating-point number, only an integer. After the value was corrected, the Spark cluster started normally:
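If the 1.6 GB target is still wanted, one option (my suggestion here, not necessarily what was done in the original post) is to express the same amount as an integer number of megabytes, which Spark's memory-string parser accepts, and then restart the standalone cluster from the master:

    export SPARK_WORKER_MEMORY=1600m   # integer value in MB, equivalent to 1.6g
    cd /usr/local/spark/spark-1.0.2-bin-hadoop2
    sbin/stop-all.sh
    sbin/start-all.sh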
Three worker nodes can now be seen in the console, which indicates that the cluster expansion succeeded:
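A quick way to confirm this (assuming the default master web UI port 8080) is to open the master's web console at http://sparkmaster:8080 and check the Workers table, which should now list sparkmaster, sparkslave1 and sparkslave2; from a shell the page can also be fetched directly, for example:

    curl -s http://sparkmaster:8080 | grep -i worker   # the Workers table is rendered server-side in Spark 1.x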
Dragon Walker - Spark Study Notes (3): Experience with Worker Node Expansion in a Spark Cluster