Why analyze the performance of the gob serialization format
I wrote a one-way synchronization tool (https://gitee.com/rocket049/mysync) that mixes RPC and HTTP server functions, using RPC for the control functions and HTTP for data upload. Recently I wanted to simplify its structure by implementing the HTTP upload function over RPC as well. But I worried about degraded performance, because serializing objects usually inflates the data; for example, JSON serialization turns binary data into hexadecimal text, multiplying its size. So I tested how gob serialization changes the data volume.
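To make the size difference concrete, here is a minimal sketch (separate from the test program described below) that encodes the same byte slice with encoding/json and encoding/gob. Note that Go's encoding/json actually renders a []byte as base64 text rather than hexadecimal, but the point stands: the text encoding inflates the data, while gob adds only a few bytes of overhead.

package main

import (
    "bytes"
    "encoding/gob"
    "encoding/json"
    "fmt"
)

func main() {
    data := make([]byte, 1024) // 1 KiB of binary data

    var jsonBuf, gobBuf bytes.Buffer
    json.NewEncoder(&jsonBuf).Encode(data) // []byte becomes base64 text in JSON
    gob.NewEncoder(&gobBuf).Encode(data)   // gob keeps the bytes nearly raw

    fmt.Printf("raw: %d bytes  json: %d bytes  gob: %d bytes\n",
        len(data), jsonBuf.Len(), gobBuf.Len())
}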
Test method
I wrote a small program that takes an input file as its argument, reads that file into a struct containing the file name (string) and all of the file's data ([]byte), and then uses the Go standard library's encoding/gob package to serialize the struct to another file. Finally, I compare the sizes of the input file and the output file.
Test program
Here is the source code and usage of the test program:
import ( "encoding/gob" "io/ioutil" "os")type FileAll struct{ Name string Cxt []byte}func main(){ var fa1 FileAll var err error fa1.Name = os.Args[1] fa1.Cxt,err = ioutil.ReadFile( os.Args[1] ) if err != nil{ panic(err) } enc := gob.NewEncoder(os.Stdout) enc.Encode(fa1)}
Usage: gob1 input-file > output-file
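For completeness, the output file can be decoded back into the same struct. Here is a minimal decoding sketch (the gob2 program name is my own placeholder, not part of the original test):

package main

import (
    "encoding/gob"
    "fmt"
    "os"
)

// FileAll must match the struct used by the encoder.
type FileAll struct {
    Name string
    Cxt  []byte
}

func main() {
    var fa FileAll
    // Read the gob stream from stdin and decode it back into the struct.
    dec := gob.NewDecoder(os.Stdin)
    if err := dec.Decode(&fa); err != nil {
        panic(err)
    }
    fmt.Printf("name: %s, %d bytes of data\n", fa.Name, len(fa.Cxt))
}

Usage: gob2 < output-file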
Test results
Regardless of the input file's size, the output file is always only about 50 bytes larger than the input. Considering that this overhead is the cost of storing the struct's own type information, the amount of data barely increases at all. This shows that the gob serialization format is well suited for network transport.
Based on this conclusion, I modified my program so that uploading a file over RPC follows a pattern similar to operating-system file operations: CreateFile -> WriteBytes -> CloseFile. In addition, the file data is gzip-compressed before being placed into the struct and serialized, further reducing the bandwidth requirement.
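As a rough sketch of that compress-then-serialize step (the Chunk type and compressChunk function are illustrative assumptions, not the actual mysync code):

package main

import (
    "bytes"
    "compress/gzip"
    "encoding/gob"
    "fmt"
)

// Chunk carries one gzip-compressed piece of a file.
// The type name is an assumption for illustration only.
type Chunk struct {
    Name string // target file name on the server
    Gz   []byte // gzip-compressed file data
}

// compressChunk gzips the raw data into a Chunk and gob-encodes it,
// following the compress-before-serialize order described above.
func compressChunk(name string, raw []byte) ([]byte, error) {
    var zbuf bytes.Buffer
    zw := gzip.NewWriter(&zbuf)
    if _, err := zw.Write(raw); err != nil {
        return nil, err
    }
    if err := zw.Close(); err != nil {
        return nil, err
    }

    var out bytes.Buffer
    if err := gob.NewEncoder(&out).Encode(Chunk{Name: name, Gz: zbuf.Bytes()}); err != nil {
        return nil, err
    }
    return out.Bytes(), nil
}

func main() {
    payload, err := compressChunk("demo.txt", bytes.Repeat([]byte("hello "), 100))
    if err != nil {
        panic(err)
    }
    fmt.Printf("encoded payload: %d bytes\n", len(payload))
}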