One facet of Pig's philosophy is that "Pigs eat anything": Pig can load data from different data sources and handle data in different formats. Pig uses a Loader/Storer pair for loading and storing data, optionally with a schema that specifies column names and types. If no schema is specified at load time, the columns are unnamed, the type defaults to bytearray (ByteArray), and subsequent operations refer to columns by position ($0, $1, ...), with each column's type inferred from how it is used. For both performance and readability, it is a good idea to specify a schema when loading data.
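As a minimal sketch of the difference, the snippet below drives Pig Latin through the PigServer Java API; the input file users.txt, the comma delimiter, and the aliases are illustrative assumptions, not anything prescribed by Pig.

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class LoadSchemaExample {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.LOCAL);

        // With a schema: columns get names and types at load time.
        pig.registerQuery("a = LOAD 'users.txt' USING PigStorage(',') AS (name:chararray, age:int);");
        pig.registerQuery("adults = FILTER a BY age >= 18;");

        // Without a schema: columns are unnamed bytearrays referenced by position,
        // and the type follows from how the column is used (here an explicit int cast).
        pig.registerQuery("b = LOAD 'users.txt' USING PigStorage(',');");
        pig.registerQuery("older = FILTER b BY (int)$1 >= 18;");

        pig.store("adults", "adults_out");
        pig.store("older", "older_out");
    }
}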
Loader system
The base class for loaders is org.apache.pig.LoadFunc, which defines the interface a loader must implement and provides some default implementations. The figure below shows the loader inheritance hierarchy: Pig ships a large number of loaders for different data sources, including HBaseStorage and ParquetLoader for column-oriented stores. The default loader is PigStorage.
The three basic methods in org.apache.pig.LoadFunc determine the where/what/how:
public abstract void setLocation(String location, Job job) throws IOException;

public abstract InputFormat getInputFormat() throws IOException;

public LoadCaster getLoadCaster() throws IOException {
    return new Utf8StorageConverter();
}
setLocation: specifies where the data is loaded from.
getInputFormat: specifies what kind of data source it is; Pig reuses Hadoop's InputFormat implementations to read different sources.
getLoadCaster: specifies how fields are converted from byte arrays to their actual types; the default is Utf8StorageConverter. A minimal custom loader built on these methods is sketched below.
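The sketch below is a minimal custom loader built on these methods, assuming plain text input with one record per line; the class name SimpleTextLoader is hypothetical. LoadFunc also declares prepareToRead and getNext as abstract (they are discussed in the PigStorage walkthrough below), while getLoadCaster is left alone so the default Utf8StorageConverter applies.

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.pig.LoadFunc;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class SimpleTextLoader extends LoadFunc {
    private RecordReader reader;

    @Override
    public void setLocation(String location, Job job) throws IOException {
        // Where: register the input path with the underlying Hadoop job.
        FileInputFormat.setInputPaths(job, location);
    }

    @Override
    public InputFormat getInputFormat() throws IOException {
        // What: read the source as plain text, one record per line.
        return new TextInputFormat();
    }

    @Override
    public void prepareToRead(RecordReader reader, PigSplit split) {
        this.reader = reader;
    }

    @Override
    public Tuple getNext() throws IOException {
        // How: wrap each line in a single-field tuple.
        try {
            if (!reader.nextKeyValue()) {
                return null;
            }
            Text line = (Text) reader.getCurrentValue();
            return TupleFactory.getInstance().newTuple(line.toString());
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }
}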
PigStorage Analysis
1 Handling compression: the InputFormat is chosen according to the suffix of the file being loaded:
@Override
public InputFormat getInputFormat() {
    if (loadLocation.endsWith(".bz2") || loadLocation.endsWith(".bz")) {
        return new Bzip2TextInputFormat();
    } else {
        return new PigTextInputFormat();
    }
}
2 Reading data: before any data is read, the prepareToRead method is called to hand the loader the RecordReader that corresponds to the InputFormat. The RecordReader reads one line at a time, each line is split on the delimiter specified by the user, and the result is turned into a tuple.
public void prepareToRead(RecordReader reader, PigSplit split)

@Override
public Tuple getNext() throws IOException
3 Schema handling: inside getNext, if a schema is present, the applySchema method is applied to the tuple to give each field its declared name and type. A simplified sketch of steps 2 and 3 follows.
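The helper below is a simplified, hypothetical sketch of steps 2 and 3, not PigStorage's actual code: it splits one line on a delimiter, builds a tuple of bytearray fields, and notes where the declared schema would attach names and types.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.pig.data.DataByteArray;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class LineToTupleSketch {
    // Split one line of text on the delimiter and build a tuple of bytearray fields.
    public static Tuple lineToTuple(String line, String delimiterRegex) throws IOException {
        String[] fields = line.split(delimiterRegex, -1);
        Tuple tuple = TupleFactory.getInstance().newTuple(fields.length);
        for (int i = 0; i < fields.length; i++) {
            // Without a schema each field stays a DataByteArray; with a schema,
            // PigStorage's applySchema step attaches the declared names and types.
            tuple.set(i, new DataByteArray(fields[i].getBytes(StandardCharsets.UTF_8)));
        }
        return tuple;
    }
}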
Other important interfaces
By implementing some additional interfaces, a loader can provide extra functionality.
LoadMetadata
The getSchema method lets the schema be discovered automatically instead of being spelled out in the LOAD statement (a sketch follows after this list).
The getPartitionKeys method reports the partition keys of the data, and the partition conditions in the user's query are pushed down to the loader through setPartitionFilter, reducing the amount of data that has to be loaded. See the implementation in HCatLoader (org.apache.hcatalog.pig.HCatLoader).
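Below is a hypothetical sketch of a loader that also implements LoadMetadata, extending the SimpleTextLoader sketched earlier; the fixed single-column schema and the absence of partitions are assumptions made purely for illustration.

import java.io.IOException;
import org.apache.hadoop.mapreduce.Job;
import org.apache.pig.Expression;
import org.apache.pig.LoadMetadata;
import org.apache.pig.ResourceSchema;
import org.apache.pig.ResourceStatistics;
import org.apache.pig.impl.util.Utils;

public class MetadataAwareLoader extends SimpleTextLoader implements LoadMetadata {

    @Override
    public ResourceSchema getSchema(String location, Job job) throws IOException {
        // Report column names and types to Pig so no AS clause is needed in LOAD.
        return new ResourceSchema(Utils.getSchemaFromString("line:chararray"));
    }

    @Override
    public ResourceStatistics getStatistics(String location, Job job) {
        return null;   // no statistics available
    }

    @Override
    public String[] getPartitionKeys(String location, Job job) {
        return null;   // this data is not partitioned
    }

    @Override
    public void setPartitionFilter(Expression filter) {
        // Nothing to push down, since there are no partitions.
    }
}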