This time I worked on a MapReduce program for IP segment matching, and I want to record the problems I ran into. In total there were several attempts:
First attempt: load the entire login log into memory and handle the splitting in a custom MyInputFormat: first count the total amount of data, then divide it roughly evenly into several splits. Then, in the mapper's setup() method, read the small table of IP segments into a list, and in the map() function read each record and run the matching algorithm (see the sketch below). This works while the data volume is small, but once it grows large the job fails with a memory overflow.
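The post does not show the code itself, so below is a minimal sketch of the mapper shape just described, assuming a comma-separated log layout ("userId,loginIp,..."), a local "start-end" segment file, and a plain linear scan; the file name, field order, and helper are assumptions, not the author's actual implementation.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class IpSegmentMapper extends Mapper<LongWritable, Text, Text, Text> {

    // Each entry is one IP segment stored as a numeric [start, end] range.
    private final List<long[]> ranges = new ArrayList<>();

    @Override
    protected void setup(Context context) throws IOException {
        // Assumed: the small table is available locally (for example via the
        // distributed cache) with one "start-end" range per line.
        try (BufferedReader reader = new BufferedReader(new FileReader("ip_segments.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.trim().split("-");
                if (parts.length == 2) {
                    ranges.add(new long[]{ipToLong(parts[0]), ipToLong(parts[1])});
                }
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumed log layout: "userId,loginIp,...".
        String[] fields = value.toString().split(",");
        if (fields.length < 2) {
            return;
        }
        long ip = ipToLong(fields[1]);
        // Linear scan over every segment for every log record.
        for (long[] range : ranges) {
            if (ip >= range[0] && ip <= range[1]) {
                context.write(new Text(fields[0]), value);
                break;
            }
        }
    }

    // Convert a dotted-quad IPv4 address to a comparable long.
    private static long ipToLong(String ip) {
        long result = 0;
        for (String octet : ip.trim().split("\\.")) {
            result = (result << 8) | Long.parseLong(octet);
        }
        return result;
    }
}
```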
Second attempt: in the InputFormat, first query all the deduplicated UserIDs and distribute them across the different splits as map keys; then, in each mapper, read the small table in the setup() method, and in the map() method fetch all login records for the current UserID key and compare them with the IP segment data to produce the result (a rough sketch of the key distribution follows). This method shares the problem of the one above: it loads the data into memory, which easily causes a memory overflow.
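The InputFormat side is not shown either; the underlying idea, assuming the distinct UserIDs have already been collected somewhere, is roughly to spread them evenly over the splits, for example round-robin (the class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class UserIdDistributor {

    // Spread deduplicated UserIDs round-robin into numSplits groups, so each
    // mapper receives a roughly equal share of users as its input keys.
    public static List<List<String>> distribute(List<String> distinctUserIds, int numSplits) {
        List<List<String>> groups = new ArrayList<>();
        for (int i = 0; i < numSplits; i++) {
            groups.add(new ArrayList<>());
        }
        for (int i = 0; i < distinctUserIds.size(); i++) {
            groups.get(i % numSplits).add(distinctUserIds.get(i));
        }
        return groups;
    }
}
```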
Third attempt: to prevent memory overflow, keep the data in HDFS and load only the small table into memory; the login log is then read out record by record and matched against the small table (a loading sketch follows). Memory is no longer an issue, but the results are delayed: the matching loop runs far too many times, so processing is much too slow.
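A small sketch of the loading step under an assumed HDFS path: only the IP-segment table is pulled into memory, while the login log stays in HDFS and is streamed to the mappers as regular input. The per-record matching is still the same linear scan as in the first sketch, which is exactly what makes this version slow.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallTableLoader {

    // Read "start-end" IP segments from an HDFS file into memory; the path is
    // a made-up example, not the author's.
    public static List<String> loadRanges(Configuration conf) throws IOException {
        List<String> ranges = new ArrayList<>();
        FileSystem fs = FileSystem.get(conf);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/data/ip_segments.txt"))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                ranges.add(line.trim());
            }
        }
        return ranges;
    }
}
```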
Fourth attempt: the data stays in HDFS, and the first two octets of the login IP are used as the key, with the first two octets of the IP segment as the key on the other side. For example, if the actual login IP is 10.8.16.24 and the IP segment is 10.8.16.0 - 10.8.17.25, both tables share 10.8 as the key value, so the two sides can be joined on it (a reduce-side join sketch follows). The number of comparison loops is greatly reduced, and the job runs reasonably fast.
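Here is a sketch of the join under assumed conventions: the map side (not shown) would emit every record under its first-two-octet prefix, tagged with its source, and the reducer then only compares login IPs against the segments that share that prefix. The tag strings, field order, and class name are hypothetical.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class PrefixJoinReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text prefix, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Values arrive tagged by the map side: "R:start-end" for an IP
        // segment, "L:userId,loginIp" for a login record (assumed convention).
        List<long[]> ranges = new ArrayList<>();
        List<String> logins = new ArrayList<>();
        for (Text value : values) {
            String record = value.toString();
            if (record.startsWith("R:")) {
                String[] parts = record.substring(2).split("-");
                ranges.add(new long[]{ipToLong(parts[0]), ipToLong(parts[1])});
            } else if (record.startsWith("L:")) {
                logins.add(record.substring(2));
            }
        }
        // Only the segments sharing this prefix are scanned for each login,
        // instead of the whole segment table, so far fewer comparisons run.
        for (String login : logins) {
            String[] fields = login.split(",");
            if (fields.length < 2) {
                continue;
            }
            long ip = ipToLong(fields[1]);
            for (long[] range : ranges) {
                if (ip >= range[0] && ip <= range[1]) {
                    context.write(prefix, new Text(login));
                    break;
                }
            }
        }
    }

    // Convert a dotted-quad IPv4 address to a comparable long.
    private static long ipToLong(String ip) {
        long result = 0;
        for (String octet : ip.trim().split("\\.")) {
            result = (result << 8) | Long.parseLong(octet);
        }
        return result;
    }
}
```

One design point: an IP segment that crosses a first-two-octet boundary (say from 10.8.x.x into 10.9.x.x) would have to be emitted under every prefix it covers on the map side, otherwise some logins would never meet their matching segment in the reducer.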
I'm still just getting started with Hadoop.