CockroachDB GC Optimization Summary


A few weeks ago we published a post about why we chose the Go language to write CockroachDB, and we received some questions about how we deal with some of Go's known issues, particularly around performance, garbage collection, and deadlocks.

In this article we share several very useful optimization techniques for common GC performance problems (a later post will cover some interesting deadlock issues). We'll focus on how to embed structs, use sync.Pool, and reuse backing arrays to cut memory allocations and reduce GC overhead.

Reducing memory allocations and GC overhead

One thing that distinguishes Go from other garbage-collected languages such as Java is that Go gives you control over memory layout. With Go you can collocate pieces of memory that other garbage-collected languages would force into separate allocations.

Let's look at a small piece of CockroachDB code that reads data from disk and decodes it:

metaKey := mvccEncodeMetaKey(key)
var meta MVCCMetadata
if err := db.GetProto(metaKey, &meta); err != nil {
    // Handle err
}
...
valueKey := makeEncodeValueKey(meta)
var value MVCCValue
if err := db.GetProto(valueKey, &value); err != nil {
    // Handle err
}

To read the data, this code performs four memory allocations: the MVCCMetadata struct, the MVCCValue struct, metaKey, and valueKey. In Go we can reduce these four allocations to one by merging the structs and a preallocated key buffer into a single struct:

type getBuffer struct {
    meta  MVCCMetadata
    value MVCCValue
    key   [1024]byte
}

var buf getBuffer

metaKey := mvccEncodeKey(buf.key[:0], key)
if err := db.GetProto(metaKey, &buf.meta); err != nil {
    // Handle err
}
...
valueKey := makeEncodeValueKey(buf.key[:0], meta)
if err := db.GetProto(valueKey, &buf.value); err != nil {
    // Handle err
}

We declare a getBuffer type that contains two different structs, MVCCMetadata and MVCCValue (both protobuf objects); its third member is a fixed-size array, rather than the more commonly used slice.

A fixed-length array can be declared directly inside a struct without any additional memory allocation, which lets us place all three objects in the same getBuffer struct and reduce four allocations to one. Note that both keys reuse the same array; this works correctly because the two keys are never in use at the same time. We'll come back to the array below.
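As a rough standalone illustration of this collocation (using stand-in types, not CockroachDB's real MVCC protobufs), testing.AllocsPerRun can show the difference between allocating the four pieces separately and allocating one combined struct:

```go
package main

import (
	"fmt"
	"testing"
)

// Stand-in types; the real MVCCMetadata/MVCCValue are CockroachDB protobufs.
type MVCCMetadata struct{ Timestamp int64 }
type MVCCValue struct{ Raw []byte }

// getBuffer collocates everything one read needs in a single allocation.
type getBuffer struct {
	meta  MVCCMetadata
	value MVCCValue
	key   [1024]byte
}

// Package-level sinks force the allocations onto the heap.
var (
	sinkMeta  *MVCCMetadata
	sinkValue *MVCCValue
	sinkKey1  []byte
	sinkKey2  []byte
	sinkBuf   *getBuffer
)

func main() {
	separate := testing.AllocsPerRun(100, func() {
		sinkMeta = new(MVCCMetadata)   // allocation 1
		sinkValue = new(MVCCValue)     // allocation 2
		sinkKey1 = make([]byte, 0, 64) // allocation 3
		sinkKey2 = make([]byte, 0, 64) // allocation 4
	})
	combined := testing.AllocsPerRun(100, func() {
		sinkBuf = new(getBuffer) // one allocation holds all the pieces
	})
	fmt.Printf("separate: %v allocs, combined: %v allocs\n", separate, combined)
}
```

The sink variables matter: without them the compiler's escape analysis could keep the small objects on the stack and the comparison would show nothing.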

sync.Pool

var getBufferPool = sync.Pool{
    New: func() interface{} {
        return &getBuffer{}
    },
}

To tell the truth, it took us a while to figure out why sync.Pool is exactly what we want: it lets us reuse the same object many times without allocating on each use, while the GC remains responsible for reclamation. As an added bonus for memory pressure, the pool is emptied on every GC cycle, so pooled objects are not retained indefinitely.

Here's an example of how to use sync.Pool:

buf := getBufferPool.Get().(*getBuffer)
defer getBufferPool.Put(buf)
key := append(buf.key[0:0], ...)

First we declare a global sync.Pool object with a factory function that allocates and returns a new getBuffer. Instead of creating a new getBuffer each time, we get one from the pool. Pool.Get returns an empty interface, so we convert it with a type assertion. When we're finished, we put the buffer back in the pool. The end result is that we no longer have to allocate memory every time we need a getBuffer.
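A self-contained sketch of the whole pattern (the buffer type and the key contents here are placeholders, not CockroachDB's real encoding):

```go
package main

import (
	"fmt"
	"sync"
)

type getBuffer struct {
	key [1024]byte
}

// The pool's New function runs only when Get finds the pool empty.
var getBufferPool = sync.Pool{
	New: func() interface{} {
		return &getBuffer{}
	},
}

// encodeKey builds a key in a pooled buffer and copies the result out,
// so the buffer can safely go back to the pool when we return.
func encodeKey(parts ...[]byte) []byte {
	buf := getBufferPool.Get().(*getBuffer) // type assertion on interface{}
	defer getBufferPool.Put(buf)

	key := buf.key[:0] // zero-length slice over the pooled array
	for _, p := range parts {
		key = append(key, p...)
	}
	out := make([]byte, len(key))
	copy(out, key)
	return out
}

func main() {
	fmt.Println(string(encodeKey([]byte("/table/1/"), []byte("row42"))))
}
```

Note the copy before returning: once Put runs, another goroutine may Get the same buffer, so a slice aliasing buf.key must not escape the function.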

Arrays and slices

It may seem too minor to mention, but in Go arrays and slices are different types, even though the syntax for the two looks almost the same. You can obtain a slice backed by an array with a single bracket expression such as [:0].

key := append(buf.key[0:0], ...)

This creates a slice of length 0 on top of the array. Because the slice already has a backing store, appending to it actually writes into the array without allocating new memory. So when we decode a key, we can append into a slice created from this buffer: as long as the key is less than 1 KB long, we don't need any memory allocation at all, since we reuse the memory already allocated to the array.

It is possible, though uncommon, for a key to exceed 1 KB in length. In that case append transparently allocates a new backing array, and our code needs no special handling.
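Both cases can be demonstrated in a few lines, here with a small 8-byte array standing in for the 1 KB key buffer:

```go
package main

import "fmt"

func main() {
	var backing [8]byte

	// Zero-length slice over the array: len 0, cap 8, no allocation.
	s := backing[:0]
	s = append(s, 'a', 'b', 'c') // fits within cap: writes into the array
	fmt.Println(backing[0] == 'a', &s[0] == &backing[0]) // shares storage

	// Appending past the capacity transparently allocates a new
	// backing array; the original array is left untouched.
	big := append(s, make([]byte, 16)...)
	big[0] = 'z'
	fmt.Println(backing[0] == 'a', &big[0] != &backing[0])
}
```

The pointer comparisons make the aliasing visible: before the overflow, the slice and the array share element storage; after it, they no longer do.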

gogoprotobuf vs. Google protobuf

Finally, we use protobuf to store all data on disk. However, we don't use the official Google protobuf library; we strongly recommend a fork called gogoprotobuf.

gogoprotobuf follows many of the principles described above for avoiding unnecessary memory allocations. In particular, it allows marshaling data into a byte slice backed by a preallocated array, avoiding repeated allocations. In addition, its non-nullable annotation lets you embed messages directly by value, without the allocation overhead of a pointer, which is useful when a message is always present.
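To sketch what the non-nullable annotation buys, the two struct shapes below mimic the difference in generated Go code (TxnMeta is a made-up message type, and these are hand-written stand-ins, not actual gogoprotobuf output):

```go
package main

import (
	"fmt"
	"testing"
)

type TxnMeta struct{ ID uint64 }

// Default generated shape: embedded messages are pointers, so building
// (or decoding) a parent allocates the child separately.
type metaNullable struct{ Txn *TxnMeta }

// With the message embedded by value (gogoprotobuf's nullable = false),
// the child lives inside the parent's single allocation.
type metaNonNullable struct{ Txn TxnMeta }

// Package-level sinks force heap allocation.
var (
	sinkP *metaNullable
	sinkV *metaNonNullable
)

func main() {
	ptr := testing.AllocsPerRun(100, func() {
		sinkP = &metaNullable{Txn: &TxnMeta{ID: 1}} // parent + child: 2 allocs
	})
	val := testing.AllocsPerRun(100, func() {
		sinkV = &metaNonNullable{Txn: TxnMeta{ID: 1}} // 1 alloc
	})
	fmt.Println(ptr, val)
}
```

The by-value field also removes a pointer the GC would otherwise have to trace, which compounds the benefit across millions of messages.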

One last optimization: the standard Google protobuf library encodes and decodes via reflection, while gogoprotobuf generates the marshaling and unmarshaling code, which yields a substantial performance improvement.

Summary

By combining these techniques, we have been able to minimize GC overhead and improve performance. As we approach the testing phase and focus more on memory profiling, we will share our results in subsequent posts. And of course, if you know of other Go performance optimization techniques, we're all ears.

Original link: http://www.cockroachlabs.com/blog/how-to-optimize-garbage-collection-in-go/
Original Author: Jessica Edwards
Translation proofreading: Betty, Dragon Cat, grapefruit
