Spark Data Locality

The essence of a distributed computing system is to move computation rather than data. In practice, however, some data movement is unavoidable unless every node in the cluster holds a copy of the data. Moving data from one node to another for computation consumes not only network I/O but also disk I/O, reducing the overall efficiency of the computation. To improve data locality, besides optimizing the algorithm (for example, tuning Spark memory, which is somewhat harder), you can set a reasonable number of data replicas. Choosing the replica count requires an empirical value, obtained by adjusting the parameter and observing the running state over a long period.
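Setting the replica count is an HDFS-side knob: raising the replication factor of a frequently read data set means more nodes hold a local copy, so more tasks get a chance to run locally. A minimal sketch from a spark-shell session, using the Hadoop FileSystem API (the path and the factor of 3 are placeholders to be tuned empirically):

    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs = FileSystem.get(sc.hadoopConfiguration)

    // Hypothetical data set; replace with your own path.
    val path = new Path("/data/input.txt")

    // Raise the replication factor so more nodes hold a local copy.
    fs.setReplication(path, 3.toShort)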

There are three data locality levels in Spark:

    • PROCESS_LOCAL refers to reading data cached in memory on the local node (within the same executor process)
    • NODE_LOCAL refers to reading data from the local node's disk
    • ANY refers to reading data from a non-local node

In terms of read speed, PROCESS_LOCAL > NODE_LOCAL > ANY, so try to have data read in PROCESS_LOCAL or NODE_LOCAL mode. PROCESS_LOCAL is also related to the cache: if an RDD is used frequently, cache it into memory. Note that cache is lazy, so it must be triggered by an action before the RDD is actually cached in memory.
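A minimal sketch of this lazy behavior from a spark-shell session (sc is the shell's SparkContext; the HDFS path is hypothetical):

    // Hypothetical input; replace with your own data set.
    val lines = sc.textFile("hdfs://master1:9000/data/input.txt")

    // cache() only marks the RDD for caching; nothing is computed yet.
    val cached = lines.cache()

    // The first action actually materializes the RDD into executor memory.
    println(cached.count())

    // Subsequent actions read the in-memory copy, so their tasks can be
    // scheduled PROCESS_LOCAL.
    println(cached.filter(_.contains("spark")).count())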

Recently, in a text-matching experiment, it was found that the locality level of the processed data was ANY, meaning the data was transmitted over the network, which was inefficient. The cause turned out to be the following:

Spark identifies its Workers by IP address (both in the worker Id and in the address field), while an HDFS cluster generally identifies its slaves by hostname. As a result, the block locations Spark obtains from HDFS are hostnames, but Spark knows its own Workers by IP address. Because the two never match, the tasks' Locality Level is marked not as NODE_LOCAL but as ANY.
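To confirm such a mismatch, one way is to print the hosts HDFS reports for each block of a file and compare them with the Worker addresses shown in the Spark UI. A minimal sketch from a spark-shell session (the path is hypothetical):

    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs = FileSystem.get(sc.hadoopConfiguration)
    val status = fs.getFileStatus(new Path("/data/input.txt"))

    // For each block, print the hosts holding a replica. If these are
    // hostnames (e.g. "slave1") while the Workers registered with IP
    // addresses, the scheduler cannot match them and schedules ANY.
    val blocks = fs.getFileBlockLocations(status, 0, status.getLen)
    blocks.foreach(b => println(b.getHosts.mkString(", ")))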

Workaround: in Standalone mode, start each Worker node separately, passing its hostname explicitly, as in the following command:

$SPARK_HOME/sbin/start-slave.sh -h <hostname> <masterUrl>

Example: start-slave.sh -h slave1 spark://master1:7077

Suppose the Worker node is started on slave1 and master1 is the master node. The hostname passed with -h is the Worker's own hostname, slave1, and the master URL is "spark://master1:7077". Started this way, the Worker registers with the Master under its hostname, which matches the block locations reported by HDFS, so tasks can be scheduled NODE_LOCAL.
