Our company recently migrated its Spark cluster from the original standalone mode to Spark on YARN. While porting the related jobs we found that quite a few adjustments were needed. Below are the two versions of the submit shell command; the differences, visible in the commands themselves, stem mainly from the fact that Spark on YARN works differently, which changes how jobs are submitted.
The script for the standalone mode is:
<textarea readonly="" name="code" class="java">spark-submit --class com.bg.tools.WapJoinUA --driver-memory 80g --executor-memory 60g --total-executor-cores ${common_path}/jars/${app_jar} ${hdfs_uatag_input} ${hdfs_instance_input} ${TAGS_FILTER_FILE} ${hdfs_output}</textarea>
The script for the yarn-cluster mode is:
<textarea readonly="" name="code" class="java">spark-submit --queue DC --class bg.tools.Instance --master yarn-cluster --executor-memory 30g --driver-memory 10g --num-executors --executor-cores ${common_path}/jars/${app_jar} ${hdfs_instance_input} ${hdfs_output}</textarea>
One of the problems we ran into is that file reads and writes behave differently. In standalone mode the driver runs on a fixed machine, so reading a file is essentially a local read. In yarn-cluster mode, however, the driver is placed on whatever node YARN allocates, so side files must be uploaded with --files, and when reading such a file you should use only the bare file name rather than its full path, otherwise a file-not-found exception is thrown. Another useful option is --conf "spark.hadoop.mapreduce.input.fileinputformat.split.minsize=1073741824": Hadoop's default block size is 64M, and this setting raises the minimum split size so the input is not cut into too many small splits.
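Putting the two points above together, a yarn-cluster submission that ships a side file with --files and raises the split size might look like the sketch below. The memory sizes, queue name, and the file name tags_filter.txt are illustrative assumptions, not values from the original scripts:

```shell
# Sketch of a yarn-cluster submit (values are illustrative assumptions).
spark-submit \
  --queue DC \
  --class bg.tools.Instance \
  --master yarn-cluster \
  --driver-memory 10g \
  --executor-memory 30g \
  --files ${TAGS_FILTER_FILE} \
  --conf "spark.hadoop.mapreduce.input.fileinputformat.split.minsize=1073741824" \
  ${common_path}/jars/${app_jar} ${hdfs_instance_input} ${hdfs_output}

# Inside the job, open the uploaded file by its bare name only, e.g. in Scala:
#   scala.io.Source.fromFile("tags_filter.txt")   // not the full local path
```

The key detail is that files passed via --files are copied into the working directory of the YARN containers, which is why the bare file name resolves but the original full path does not.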
Spark Notes (i): Some differences between standalone and yarn-cluster