Using Kryo serialization in Spark

Spark Serialization
To optimize network performance and reduce memory consumption, it is important to store RDDs in a serialized format.
spark.serializer=org.apache.spark.serializer.JavaSerializer
By default, Spark serializes objects with Java's ObjectOutputStream framework, so any object that implements the java.io.Serializable interface can be serialized. You can also control serialization performance by implementing java.io.Externalizable. Java serialization is flexible, but it is slow, and the number of bytes produced per serialized object is high.
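For illustration only (this sketch is not from the original article), here is what implementing java.io.Externalizable looks like; the PageView type and its fields are hypothetical:

import java.io.{Externalizable, ObjectInput, ObjectOutput}

// Writing the fields by hand avoids the reflection overhead
// of default Java serialization.
class PageView(var url: String, var hits: Long) extends Externalizable {
  // Externalizable requires a public no-arg constructor.
  def this() = this("", 0L)

  override def writeExternal(out: ObjectOutput): Unit = {
    out.writeUTF(url)
    out.writeLong(hits)
  }

  override def readExternal(in: ObjectInput): Unit = {
    url = in.readUTF()
    hits = in.readLong()
  }
}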
spark.serializer=org.apache.spark.serializer.KryoSerializer
Kryo serialization is fast; spark.serializer can in fact be set to any subclass of org.apache.spark.serializer.Serializer. However, Kryo does not support all types that implement the java.io.Serializable interface, and it requires you to register the types you serialize in the program to get the best performance.
LZO support requires that the hadoop-lzo package be installed on each node first and placed on Spark's native library path. If the Debian package is installed, adding --driver-library-path /usr/lib/hadoop/lib/native/ and --driver-class-path /usr/lib/hadoop/lib/ when calling spark-submit will do it. Download LZO from http://cn.jarfire.org/hadoop.lzo.html
Call conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") when the SparkConf is initialized to use Kryo. This setting controls not only the format used to shuffle data between worker nodes but also the format used when serializing RDDs to disk. You need to register the types to be serialized; Kryo is recommended in network-intensive scenarios. If your custom types require Kryo serialization, register them first with the registerKryoClasses method:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster(...).setAppName(...)
conf.registerKryoClasses(Array(classOf[MyClass1], classOf[MyClass2]))
val sc = new SparkContext(conf)
Finally, if you do not register the custom types that need serialization, Kryo still works, but the serialized result of each object instance contains the full class name, which wastes space.
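As a side note not in the original article, Spark can also be told to fail fast when it meets an unregistered class instead of silently writing full class names; a minimal sketch:

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Throw an error on unregistered classes rather than falling back
  // to writing the full class name with every instance.
  .set("spark.kryo.registrationRequired", "true")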
Use the new API with LzoJsonInputFormat from the Twitter Elephant Bird package to read a JSON file compressed with the LZO algorithm in Scala (a self-contained sketch with imports follows the parameter list below):
val input = sc.newAPIHadoopFile(inputFile, classOf[LzoJsonInputFormat], classOf[LongWritable], classOf[MapWritable], conf)
inputFile: the input path
first class: the input format class
second class: the "key" class
third class: the "value" class
conf: additional Hadoop configuration options, such as compression settings
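Put together as a self-contained sketch (assuming the elephant-bird-core jar is on the classpath; com.twitter.elephantbird.mapreduce.input is the usual package for this class):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, MapWritable}
import com.twitter.elephantbird.mapreduce.input.LzoJsonInputFormat

val conf = new Configuration()
val input = sc.newAPIHadoopFile(inputFile, classOf[LzoJsonInputFormat],
  classOf[LongWritable], classOf[MapWritable], conf)
// Each record is (byte offset in the file, parsed JSON object as a MapWritable).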
Use the old API to read KeyValueTextInputFormat, one of the simplest Hadoop input formats, directly in Scala:
val input = sc.hadoopFile[Text, Text, KeyValueTextInputFormat](inputFile).map { case (x, y) => (x.toString, y.toString) }
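The same call with its imports spelled out (KeyValueTextInputFormat lives in the old org.apache.hadoop.mapred package, since hadoopFile uses the old API):

import org.apache.hadoop.io.Text
import org.apache.hadoop.mapred.KeyValueTextInputFormat

// Each line is split on the first tab into a (key, value) pair.
val input = sc.hadoopFile[Text, Text, KeyValueTextInputFormat](inputFile)
  .map { case (x, y) => (x.toString, y.toString) }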

Note: If you are reading a single compressed input, do not use Spark's wrappers (textFile/sequenceFile, etc.); instead, use newAPIHadoopFile or hadoopFile and specify the correct compression codec. Some input formats, such as SequenceFile, allow compressing only the values of the key-value pairs in the data, which is useful for lookups. Some other input formats have their own compression controls; for example, many of the formats in the Twitter Elephant Bird package can compress data with the LZO algorithm.



