DistributedCache in Hadoop
1. DistributedCache in Hadoop
This article is a follow-up to the previous one, covering the principle and use of Hadoop's distributed cache mechanism.
In MapReduce the distributed cache is called DistributedCache. It makes it easy to share data among map tasks or reduce tasks, and to add third-party packages to their classpath. Hadoop distributes the cached data to every node on which a task is about to start, replicating it to the directory configured by mapred.temp.dir.
2. Using DistributedCache
Using DistributedCache essentially amounts to setting the mapred.cache.{files|archives} properties in the Configuration. If you find that more convenient, you can instead use the static methods of the DistributedCache class.
The less convenient way:

```java
conf.set("mapred.cache.files", "/data/data");
conf.set("mapred.cache.archives", "/data/data.zip");
```
The convenient way:

```java
DistributedCache.addCacheFile(URI, Configuration)
DistributedCache.addArchiveToClassPath(Path, Configuration, FileSystem)
```
It is important to note that the code above must run before the Job object is created; otherwise the files cannot be found at run time, because the Job copies the Configuration object it is given into the JobContext at construction time.
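The pitfall is the copy itself: properties set on the original Configuration after the Job has cloned it are never seen by the job. The same copy-then-mutate trap can be sketched in plain Java, with java.util.Properties standing in for Configuration (this is an analogy only; ConfigSnapshot and its method are hypothetical names, not part of the Hadoop API):

```java
import java.util.Properties;

public class ConfigSnapshot {
    // Mimics what Job does: it clones the Configuration it is given,
    // so properties set on the original afterwards are never seen.
    public static String lateSetIsInvisible() {
        Properties original = new Properties();

        // The "Job constructor": take a copy of the configuration.
        Properties snapshot = new Properties();
        snapshot.putAll(original);

        // Setting the cache property after the copy was taken...
        original.setProperty("mapred.cache.files", "/data/data");

        // ...does not reach the snapshot the job actually runs with.
        return snapshot.getProperty("mapred.cache.files");
    }
}
```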
Since version 0.21, the MapReduce API lives in org.apache.hadoop.mapreduce rather than the old org.apache.hadoop.mapred package. The configure method that the documentation describes was overridden in MapReduceBase; in the new API, map classes extend Mapper and reduce classes extend Reducer, and configure has been replaced by setup. So to obtain the cached data, fetch it in the setup method of the map/reduce task and proceed from there:
```java
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);
    URI[] uris = DistributedCache.getCacheFiles(context.getConfiguration());
    Path[] paths = DistributedCache.getLocalCacheFiles(context.getConfiguration());
    // TODO
}
```
Using a third-party library is even simpler: upload the library to HDFS, then add it to the classpath in code:
```java
DistributedCache.addArchiveToClassPath(new Path("/data/test.jar"), conf);
```
3. Using symlink
A symlink is effectively a shortcut to an HDFS file: append #linkname to the path name, and the task can then refer to the corresponding file by linkname, as follows:
```java
conf.set("mapred.cache.files", "/data/data#mdata");
conf.set("mapred.cache.archives", "/data/data.zip#mdatazip");
```
```java
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);
    FileReader reader = new FileReader(new File("mdata"));
    BufferedReader breader = new BufferedReader(reader);
    // TODO
}
```
Before using symlinks, you need to tell Hadoop to create them, in either of the following ways:
```java
conf.set("mapred.create.symlink", "yes"); // the value is "yes", not "true"
```

or, equivalently:

```java
DistributedCache.createSymlink(conf);
```
4. Precautions
1) Files to be cached (data or third-party libraries) must first be uploaded to HDFS before they can be used;
2) when the cache is small, it is recommended to read all of its data into memory on the node, to speed up access;
3) cached files are read-only and cannot be modified; to update them, write a new output file and feed it in as the cache for the next iteration.
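For note 2), reading a small cached file entirely into a map might look like the sketch below. It is plain Java so it runs on its own; in a real job the load would be called from setup() with the localized cache path, and CacheLoader is a hypothetical helper name, not a Hadoop class:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class CacheLoader {
    // Load a small tab-separated key/value cache file entirely into memory,
    // skipping lines that do not have exactly one tab separator.
    static Map<String, String> load(String path) throws IOException {
        Map<String, String> table = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t", 2);
                if (parts.length == 2) {
                    table.put(parts[0], parts[1]);
                }
            }
        }
        return table;
    }
}
```

Map or reduce tasks can then do in-memory lookups against the returned map instead of re-reading the file for every record.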