Memory Garbage Collection (GC)


Summary: There are three kinds of memory reclamation mechanisms: reference counting, tracing, and generational collection. Tracing collectors come in several variants, and the GC mechanism differs from platform to platform.

C++ does not have garbage collection; instead it provides explicit APIs (malloc()/free(), new/delete): the program requests memory from the operating system when needed and returns the memory region after use.


I recently gave a talk on garbage collection at my company, and I plan to organize its contents into a few articles for reference. Before we get started, let's briefly review the main ways memory is allocated. Most mainstream languages support three:

1. Static allocation: the allocation form used for static and global variables
2. Automatic allocation: memory allocated for local variables on the stack
3. Dynamic allocation: memory allocated dynamically on the heap to store data
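The three allocation kinds can be seen side by side in a short C++ sketch (the names g_greeting and stack_vs_heap are illustrative):

```cpp
#include <string>

std::string g_greeting = "hello";   // static allocation: lives for the whole program run

int stack_vs_heap() {
    int local = 42;                 // automatic allocation: on the stack, freed on return
    int* heap = new int(7);         // dynamic allocation: on the heap, must be freed by hand
    int result = local + *heap;
    delete heap;                    // without this line, the heap cell would leak
    return result;
}
```

Static and automatic storage are reclaimed for us; only the dynamically allocated cell needs explicit management, which is what the rest of this article is about.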

How to manage the life cycle of a heap object is the topic we are going to explore. From an object-oriented perspective, each object's life cycle should be managed by the object itself: it knows when it was created, so it should know when it can be destroyed. In practice this is not the case. Because objects refer to one another, an object often cannot tell when it is safe to declare itself dead. If an object is released too early, we get a "dangling reference" problem; if it is released too late or not at all, we get a memory leak.


C++ provides an explicit memory management scheme: use malloc/new to allocate a piece of memory, and when it is no longer needed, use free/delete to return it to the system, as in the following code:

```cpp
int main() {
    string *ptr = new string;
    // do something
    delete ptr;
}
```

This process is very natural and clear, isn't it? Allocate memory with new before use, then destroy it with delete. Unfortunately, real-world code is not that simple: developers can forget to release memory for a variety of reasons, or release it incorrectly, causing memory leaks. The following is a typical example of a memory leak:

```cpp
void service(int n, char** names) {
    for (int i = 0; i < n; i++) {
        char* buf = (char*) malloc(strlen(names[i]) + 1);  // +1 for the terminating '\0'
        strcpy(buf, names[i]);
        // buf is never freed
    }
}
```

Obviously the work of freeing memory here should be done by the developer, but the developer forgot to call free(). The next example makes the opposite mistake and releases memory too early:

```cpp
void service() {
    Node* x = new Node("Mid-Autumn");
    Node* ptr = x;
    delete x;
    cout << ptr->data << endl;   // error: ptr is now a dangling pointer
}
```

The problem with this code is that it calls delete incorrectly, releasing x's memory prematurely, so the last statement reads through a dangling pointer. The trouble does not end there; consider the following code:

```cpp
int main() {
    string *ptr = new string[100];
    // do something
    delete ptr;
}
```

This code looks fine: new allocates memory for 100 string objects, and delete destroys them at the end. Unfortunately it is still incorrect: of the 100 string objects allocated, the last 99 may never be destructed, because new and delete were not used in matching forms. The correct version is:

```cpp
int main() {
    string *ptr = new string[100];
    // do something
    delete [] ptr;
}
```

Notice the trailing []. Simply put, if [] was used when calling new, [] must also be used when calling delete. But the rule is not always that obvious: with a typedef, you might not write [] when calling new yet still need [] when calling delete, as in the following code:

```cpp
typedef string ADDRESS[4];

int main() {
    string *ptr = new ADDRESS;   // actually allocates an array of 4 strings
    // do something
    delete [] ptr;               // so the array form of delete is required
}
```

The nightmare is not over yet. Suppose we have two types, Array and NamedArray, where NamedArray publicly inherits from Array, as shown in the following code:

```cpp
template<class T>
class Array {
public:
    Array(int lowBound, int highBound);
    ~Array();                    // note: not declared virtual
private:
    vector<T> data;
    size_t size;
    int lBound, hBound;
};

template<class T>
class NamedArray : public Array<T> {
public:
    NamedArray(int lowBound, int highBound, const string& name);
private:
    string* aName;
};
```

When using these two types, the developer wrote the following:

```cpp
int main() {
    NamedArray<int> *pna = new NamedArray<int>(10, 20, "Users");
    Array<int> *pa;
    pa = pna;
    // do something
    delete pa;
}
```

See where the problem lies? The final call to delete does not release the memory occupied by aName, because the Array destructor ~Array() is not declared virtual, so destruction through the base-class pointer is not polymorphic: only ~Array() runs, never ~NamedArray().

Through the above examples, everyone has presumably realized how difficult explicit memory management is, which is why C++ has the concept of smart pointers: they reduce the chance of manual memory-management errors. The most common early example is std::auto_ptr in the STL. Essentially it is still an ordinary pointer, except that std::auto_ptr calls the delete operator in its own destructor, automatically releasing the contained object.

```cpp
int main() {
    auto_ptr<int> ptr(new int);
    cout << *ptr << endl;
    // no delete required
}
```
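As a side note, std::auto_ptr was deprecated in C++11 and removed in C++17; its modern replacement is std::unique_ptr, which keeps the automatic delete-in-destructor behavior but forbids auto_ptr's surprising ownership-stealing copies. A minimal sketch (the function name is illustrative):

```cpp
#include <cstddef>
#include <memory>
#include <string>

std::size_t length_via_unique_ptr() {
    // the string is deleted automatically when ptr goes out of scope
    std::unique_ptr<std::string> ptr(new std::string("garbage collection"));
    return ptr->size();   // no explicit delete anywhere
}
```

Because unique_ptr is move-only, accidental double ownership of the same raw pointer is a compile error rather than a runtime crash.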

The well-known Boost C++ libraries contain many smart pointers suited to various situations, such as the scoped pointer boost::scoped_ptr and the shared pointer boost::shared_ptr, as shown in the following code:

```cpp
#include <boost/shared_ptr.hpp>
#include <vector>

int main() {
    std::vector<boost::shared_ptr<int> > v;
    v.push_back(boost::shared_ptr<int>(new int(1)));
    v.push_back(boost::shared_ptr<int>(new int(2)));
}
```

Besides explicit management, there is another mechanism for object lifecycle management: implicit management, that is, garbage collection (GC). It first appeared in Lisp, the world's second-oldest high-level language. Jean E. Sammet once said that one of Lisp's most enduring contributions is a non-linguistic feature: the term for the technique by which the system automatically manages memory, garbage collection. Many platforms and languages now support garbage collection, such as the JVM, the CLR, and Python.

This article focuses on garbage collection algorithms. Next we introduce several classic ones; although they appeared in the 1960s and 1970s, they are still used by modern garbage collectors such as those in the CLR and the JVM.

Reference Counting Algorithm

The reference counting algorithm has each object keep a count of the pointers that refer to it: when a new pointer is made to point at the object, the count is incremented by 1; when such a pointer is deleted, the count is decremented by 1. If the count drops to 0, no pointer refers to the object any more, so it can be safely destroyed. This can be illustrated intuitively with the following diagram:
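A minimal C++ sketch of this counting discipline, assuming an intrusive count stored in the object itself (RefCounted, retain, and release are illustrative names, not a standard API):

```cpp
// Each object carries its own count. A newly created object starts with one
// reference; retain() records an additional pointer, and release() removes
// one, destroying the object when the count reaches 0.
struct RefCounted {
    int refCount = 1;
    void retain() { ++refCount; }
    bool release() {              // returns true if this call destroyed the object
        if (--refCount == 0) {
            delete this;
            return true;
        }
        return false;
    }
};
```

Every copy of the pointer is paired with a retain(), every owner eventually calls release(), and the last release() frees the object.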

The reference counting algorithm has several advantages. First, the overhead of memory management is spread "smoothly" across the entire run of the application: there is no need to suspend the application for garbage collection. Second, its locality of reference is good: when an object's count reaches 0, the system does not need to touch cells on other pages of the heap, whereas the tracing algorithms we will see below must visit all surviving objects before reclaiming anything, which may cause paging. Third, reference counting reclaims memory in a manner similar to stack allocation: an object is recycled as soon as it is discarded, whereas under the tracing algorithms below a discarded object survives for some time before being reclaimed.

Reference counting has many advantages, but its drawbacks are equally obvious. The first is time overhead: every time an object is created or released, the reference count must be updated, which adds cost. The second is space overhead: since each object maintains its own reference count, extra space is needed to store it. The biggest drawback, however, is that reference counting cannot handle circular references, as shown in the following illustration:

The two objects shown in blue are unreachable and should be recyclable, but because they refer to each other, neither count ever drops to 0. Reference counting is powerless here, while the other garbage collection algorithms below handle circular references without difficulty.
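std::shared_ptr, the standardized descendant of boost::shared_ptr, is reference counted and exhibits exactly this weakness. In the sketch below (CycleNode and leak_via_cycle are illustrative names), two objects that point at each other are never destroyed even though both handles have gone out of scope:

```cpp
#include <memory>

struct CycleNode {
    std::shared_ptr<CycleNode> other;  // reference to the partner object
    static int alive;                  // how many CycleNode objects currently exist
    CycleNode()  { ++alive; }
    ~CycleNode() { --alive; }
};
int CycleNode::alive = 0;

int leak_via_cycle() {
    {
        auto a = std::make_shared<CycleNode>();
        auto b = std::make_shared<CycleNode>();
        a->other = b;   // a -> b
        b->other = a;   // b -> a: the circular reference
    }   // a and b go out of scope, but each object's count is still 1
    return CycleNode::alive;
}
```

The standard fix is to break the cycle with std::weak_ptr, which refers to an object without contributing to its count.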

The most famous use of reference counting is Microsoft's COM technology, with its well-known IUnknown interface:

```cpp
interface IUnknown {
    virtual HRESULT __stdcall QueryInterface(const IID& iid, void** ppv) = 0;
    virtual ULONG   __stdcall AddRef() = 0;
    virtual ULONG   __stdcall Release() = 0;
};
```


AddRef and Release let the component manage its own lifetime; the client program cares only about interfaces and never about the component's life cycle. A simple usage example:

```cpp
int main() {
    IUnknown* pI = CreateInstance();
    IX* pIX = NULL;
    HRESULT hr = pI->QueryInterface(IID_IX, (void**)&pIX);
    if (SUCCEEDED(hr)) {
        pIX->DoSomething();
        pIX->Release();
    }
    pI->Release();
}
```

The client above does not call AddRef because CreateInstance has already done so, and it calls Release when finished with each interface so that the count the component maintains is updated. The following code gives a simple implementation of AddRef and Release:

```cpp
ULONG __stdcall AddRef() {
    return ++m_cRef;
}

ULONG __stdcall Release() {
    if (--m_cRef == 0) {
        delete this;
        return 0;
    }
    return m_cRef;
}
```

The Python language also uses reference counting: when an object's count reaches 0, its __del__ method is called. As for why Python chose reference counting, according to an article I read, Python as a scripting language frequently needs to interoperate with other languages, and reference counting avoids changing the location of objects in memory. Python additionally introduced the gc module to solve the circular-reference problem, so in essence Python's GC scheme is a hybrid of reference counting and tracing (the algorithms discussed below).

Mark-Sweep Algorithm

The mark-sweep algorithm relies on a global traversal of all live objects to determine which can be recycled: the traversal starts from the roots and finds every reachable object; all other, unreachable objects are garbage and can be reclaimed. The whole process has two phases: the mark phase finds all live objects, and the sweep phase clears away all garbage objects.

Marking phase:

Sweep phase:

Compared with reference counting, mark-sweep handles circular references naturally, and it removes the cost of maintaining counts on every object creation and destruction. Its disadvantage is that it is a stop-the-world algorithm: the application must pause while the collector runs, so much of the research on mark-sweep aims at reducing its pause time, and the generational collectors discussed later exist precisely to shorten it. In addition, the mark phase must traverse all live objects, which carries a certain cost, and the sweep phase leaves behind a large amount of memory fragmentation.
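The two phases can be sketched as a toy collector over an index-based heap (Obj, mark, and collect are illustrative names; a real collector works on actual addresses):

```cpp
#include <cstddef>
#include <vector>

struct Obj {
    std::vector<std::size_t> refs;  // indices of the objects this one refers to
    bool marked = false;
    bool alive  = true;
};

// Mark phase: depth-first traversal of everything reachable from a root.
void mark(std::vector<Obj>& heap, std::size_t i) {
    if (!heap[i].alive || heap[i].marked) return;
    heap[i].marked = true;
    for (std::size_t r : heap[i].refs) mark(heap, r);
}

// Sweep phase: every live-but-unmarked object is garbage; marks are cleared
// again so the next cycle starts fresh. Returns the number of objects freed.
std::size_t collect(std::vector<Obj>& heap, const std::vector<std::size_t>& roots) {
    for (std::size_t r : roots) mark(heap, r);
    std::size_t freed = 0;
    for (Obj& o : heap) {
        if (o.alive && !o.marked) { o.alive = false; ++freed; }
        o.marked = false;
    }
    return freed;
}
```

Note that a pair of objects referencing only each other is still freed, because neither is reachable from a root.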

Mark-Compact Algorithm

The mark-compact algorithm was created to solve the memory fragmentation problem. Its whole process can be described as: mark all live objects, compact the object graph by relocating the survivors, and update every pointer that refers to a moved object.

Marking phase:

Compaction phase:

The hardest part of mark-compact is choosing the compaction order; a poor choice can cause serious performance problems, for example a low cache hit rate. Depending on where the compacted objects end up, compaction orders fall into three kinds:

1. Arbitrary: objects are moved without regard to their original order or to the references between them.
2. Linearizing: objects are placed adjacent to the objects they refer to as far as possible, achieving better spatial locality.
3. Sliding: objects are "slid" to one end of the heap, "squeezing out" the free cells between survivors, thus preserving the original allocation order.
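A toy version of the sliding order, with integers standing in for objects and 0 marking a free cell (compact is an illustrative name):

```cpp
#include <vector>

// Slide the survivors to the front of the heap, squeezing out the free
// cells between them; their original order is preserved, and the free
// space becomes one contiguous block at the end.
std::vector<int> compact(const std::vector<int>& heap) {
    std::vector<int> out;
    for (int cell : heap)
        if (cell != 0) out.push_back(cell);
    out.resize(heap.size(), 0);
    return out;
}
```

After compaction, allocation can again proceed by simply bumping a pointer at the start of the free block.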

Copying Algorithm

The copying algorithm divides the heap into two halves (from-space and to-space), and a GC is simply the process of copying the live objects from one half to the other; at the next collection, the two halves swap roles. After the move, the pointers that refer to moved objects are updated. Before the GC starts:

When the GC ends:
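The from/to process can be reduced to a toy semispace sketch, again with integers standing in for objects (Semispace and collect are illustrative names):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Semispace {
    std::vector<int> from, to;   // the two halves of the heap

    // Copy the survivors (given by their indices in from-space) into
    // to-space, then swap the roles of the two halves. Whatever was not
    // copied is garbage by definition; returns how many cells it occupied.
    std::size_t collect(const std::vector<std::size_t>& live) {
        to.clear();
        for (std::size_t i : live) to.push_back(from[i]);
        std::size_t reclaimed = from.size() - to.size();
        std::swap(from, to);     // allocation continues in the new from-space
        return reclaimed;
    }
};
```

Garbage is never touched at all; the cost of a collection is proportional to the number of survivors, not to the size of the heap.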

Because the copying algorithm compacts memory as a side effect of copying, it has no fragmentation problem and needs no separate compaction pass. Its biggest drawback is that it requires double the space.

Summary

This article introduced four classic garbage collection algorithms, three of which are often grouped together as tracing garbage collection. Reference counting collects garbage smoothly, without a stop-the-world phase, so it often appears in real-time systems, but it cannot solve the circular-reference problem. Tracing collectors, for their part, must traverse or copy all live objects on every collection, which is time-consuming. A good mitigation is to partition the heap and apply different algorithms to different regions; the generational collector is one such scheme. Both the CLR and the JVM use generational garbage collection, though they differ somewhat in how they apply it, and the next part examines those differences.

This part compares Microsoft's CLR garbage collector with the JVM's. Both use generational collectors, which rest on the following assumptions:

1. The newer the object, the shorter its lifetime
2. The older the object, the longer it will survive
3. Collecting part of the heap is faster than collecting the whole heap
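Assumption 3 is what generational collection exploits: a minor collection scans only the young generation, and survivors are promoted so they are not rescanned on every cycle. A toy sketch (GenHeap and minorCollect are illustrative names):

```cpp
#include <cstddef>
#include <vector>

struct GenHeap {
    std::vector<int> young, old;

    // Minor collection: live[i] says whether young[i] survived. Survivors
    // are promoted to the old generation; the rest are freed, and the
    // whole nursery becomes reusable. Returns the number of objects freed.
    std::size_t minorCollect(const std::vector<bool>& live) {
        std::size_t freed = 0;
        for (std::size_t i = 0; i < young.size(); ++i) {
            if (live[i]) old.push_back(young[i]);   // promotion
            else ++freed;
        }
        young.clear();
        return freed;
    }
};
```

Under assumption 1, most young objects are dead at collection time, so the frequent minor collections touch very few survivors; the expensive full-heap collections become rare.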

Although both the CLR and the JVM use generational collectors, they differ in several respects: the generation scheme, the large object heap, collection modes, collection algorithms, and how efficiently live objects are found.

Generational mechanism

In the CLR, objects are divided by age into three generations: generation 0, generation 1, and generation 2, as shown in the following illustration:

For how objects are promoted through the three generations, refer to "CLR via C#", which gives a detailed introduction.

The young generation and old generation of objects in the JVM:

Collection modes

Before CLR 4.0, three garbage collection modes were available: workstation concurrent GC, workstation non-concurrent GC, and server GC, as shown in the following figure:

In workstation non-concurrent GC mode there is no dedicated GC thread; a worker thread performs the collection. The application must be suspended during the collection and resumes afterwards, so the application pauses while collection is in progress:

Workstation concurrent GC mode exists to solve the pause problem caused by garbage collection: a dedicated GC thread performs the work, and most of the time collection runs concurrently with the application. This applies only to full collections; generation 0 and generation 1 objects are still collected non-concurrently. Concurrent collection essentially trades more CPU time and memory for shorter application pauses:

Server GC mode runs on multi-CPU servers; if server GC is configured on a single-CPU machine it has no effect, and collection falls back to workstation non-concurrent mode. Server GC allocates a dedicated garbage collection thread and a managed heap for each CPU; the GC threads run at high priority, and the application's worker threads are suspended while collection executes:

CLR 4.0 introduced a background garbage collection mechanism to replace concurrent GC.

The garbage collectors used in the JVM (taking HotSpot as the example) are more complex, with different collection modes for the young and old generations on workstations and servers, as shown in the following illustration:

The default on the client side is the serial GC, while on the server side the defaults for the young and old generations are the parallel scavenge GC and the parallel GC:

The following illustration shows the difference between the default serial GC and the parallel GC: the parallel GC divides the heap into zones and marks and collects them in parallel, but both methods pause the application:
