[Knowledge] Why doesn't the GC in languages like Java release memory in real time?

Source: Internet
Author: User

Why doesn't the GC in languages such as Java release memory in real time?

Here is Rednaxelafx's answer:

1. The most basic form of automatic memory management, pure reference counting, can indeed release dead objects in real time, but it cannot release object graphs that contain circular references.

This problem can be mitigated to some extent by introducing weak references, but a general-purpose reference-counting collector that can handle object graphs with circular references has to be backed by some other management method, usually a tracing GC such as mark-sweep. (There is also the "trial deletion" cycle-detection approach, but its performance is usually worse than tracing.) For example, CPython uses reference counting as its primary mechanism with mark-sweep as a supplement, and Adobe Flash's ActionScript VM 2 (AVM2) likewise combines deferred reference counting (DRC) with an incremental, conservative mark-sweep. Conversely, C++'s std::shared_ptr is pure reference counting: it cannot handle object graphs with circular references by itself, so the programmer has to use it carefully and apply std::weak_ptr where necessary to break the cycles. CPython before 2.0 also used pure reference counting; it could not handle circular references and could only let the memory leak. Since a general-purpose reference-counting collector has to be backed by a tracing GC anyway, implementing this kind of automatic memory management means implementing two collectors; if you want to save yourself the trouble, you might as well just implement some kind of tracing GC, such as mark-sweep, in the first place.
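As a small illustration of this point, here is a minimal Java sketch (the class names and object sizes are made up for the demo): two objects that reference each other would keep each other's reference count above zero forever under pure reference counting, yet the JVM's tracing GC reclaims them once they are unreachable from the roots. Note that System.gc() is only a hint, so the exact timing of the output can vary.

```java
import java.lang.ref.WeakReference;

public class CycleCollectionDemo {
    static class Node {
        Node other;                         // back-reference that closes the cycle
        byte[] payload = new byte[1 << 16]; // give each object some weight
    }

    public static void main(String[] args) throws InterruptedException {
        Node a = new Node();
        Node b = new Node();
        a.other = b;
        b.other = a;                        // a <-> b: a circular reference

        WeakReference<Node> probe = new WeakReference<>(a);
        a = null;
        b = null;                           // the cycle is now unreachable from any root

        System.gc();                        // request (not force) a collection
        Thread.sleep(100);

        // Under pure reference counting both counts would still be 1 and the
        // pair would leak; a tracing GC clears the weak reference instead.
        System.out.println("cycle collected: " + (probe.get() == null));
    }
}
```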

2. The most basic form of pure reference counting updates the counters very frequently, and that is extra overhead; whether it is serious enough to be a problem depends on what a particular application can tolerate. With ample memory, a basic tracing GC performs better than basic reference counting (especially in terms of throughput) because it needs none of those counter updates. Moreover, in a multi-threaded environment the reference counters may become data shared between threads and therefore need synchronization (atomic updates are one form of such protection), which is another source of overhead; a tracing GC maintains no reference counters, so it pays no such synchronization cost. These performance drawbacks of reference counting can be mitigated by more advanced variants. The deferred reference counting (DRC) used in AVM2, mentioned above, only records references between objects on the heap and does not count references from the stack (mainly expression temporaries), which reduces the number of counter updates and improves performance; for details see the documentation: MMgc | MDN. However, these advanced variants of reference counting usually imply some amount of delayed release, which conflicts with the original poster's wish for real-time release. On the other hand, although the most basic tracing GCs have long pauses, they too have advanced variants that run in parallel, concurrently, or incrementally to reduce latency, and there are approaches such as thread-local GC to serve "request-response"-style web applications that need to free the objects a thread allocated temporarily.
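To make the counter-update cost concrete, here is a hypothetical hand-rolled reference counter in Java; this is not how the JVM manages memory, only a sketch of the bookkeeping a reference-counting runtime has to do. Every retain/release is an atomic read-modify-write on state shared between threads, which is exactly the per-reference overhead a tracing GC avoids by keeping no counters at all.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy reference-counted wrapper: every time a reference is handed to another
// owner the shared counter must be bumped atomically, and every time an owner
// drops its reference the counter must be decremented atomically.
final class RefCounted<T> {
    private final T value;
    private final AtomicInteger refs = new AtomicInteger(1);

    RefCounted(T value) { this.value = value; }

    T get() { return value; }

    RefCounted<T> retain() {
        refs.incrementAndGet();             // contended atomic update
        return this;
    }

    void release() {
        if (refs.decrementAndGet() == 0) {  // contended atomic update
            // With reference counting the object could be freed right here,
            // "in real time" -- the immediacy the question asks about.
            System.out.println("count hit 0, releasing " + value);
        }
    }
}

public class RefCountedDemo {
    public static void main(String[] args) throws InterruptedException {
        RefCounted<String> shared = new RefCounted<>("buffer");
        Thread worker = new Thread(() -> {
            RefCounted<String> mine = shared.retain();   // +1 from another thread
            mine.release();                              // -1 when done
        });
        worker.start();
        worker.join();
        shared.release();   // last owner: the count reaches 0 deterministically
    }
}
```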

3. If automatic memory management is done with a tracing GC, the collector does not explicitly maintain reference counts for objects, so there is no notion of a "reference count dropping to 0" at all. Consequently, the JVM and the runtimes of other languages based on tracing GC naturally do not "release an object when its reference count reaches 0".
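The following Java sketch makes this observable (the ReferenceQueue machinery is only a way to watch the collector; the object size is arbitrary): dropping the last strong reference frees nothing by itself, because nothing is counting references; the memory is reclaimed only when a tracing collection actually runs. Since System.gc() is merely a request, the exact timing can vary.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class NoImmediateFreeDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        byte[] data = new byte[1 << 20];                       // ~1 MB object
        WeakReference<byte[]> ref = new WeakReference<>(data, queue);

        data = null;   // the last strong reference is gone, but nothing is counted,
                       // so nothing is released at this exact moment
        System.out.println("reclaimed immediately? " + (queue.poll() != null));

        System.gc();   // the object goes away only when a GC cycle runs
        boolean reclaimed = queue.remove(1000) == ref;         // wait up to 1 s
        System.out.println("reclaimed after a GC cycle? " + reclaimed);
    }
}
```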

4. Reference counting also has a classic lag scenario of its own. One example: if a large object graph with a long reference chain is kept alive by a single reference, then the death of that reference triggers a cascading release of a great many objects (which is not the same as a "batch release"; the cost is different), and that likewise causes a pause. So, speaking purely of worst cases, reference counting has this bad side too. Purely manual malloc()/free() or new/delete lets programmers group objects with the same lifetime together and allocate memory for them with techniques such as an arena, so that when they die they really can be released in one batch, which is efficient; pure reference counting cannot do that. Whether you run into this kind of lag with reference counting depends entirely on the reference relationships of the object graph in your program.
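Below is a toy Java sketch of that cascading release (again a hand-rolled reference count, invented for illustration, not anything the JVM does): when the single reference keeping a long chain alive dies, every node's count reaches zero in turn, and the whole chain is torn down at that one release site.

```java
// Toy reference-counted node: each node owns exactly one reference to the next.
class ChainNode {
    int refs = 1;          // single-threaded toy, so a plain int will do
    ChainNode next;

    // Releasing one reference can cascade down the entire chain. The loop is
    // iterative because a naive recursive release on a chain this long would
    // overflow the stack -- one reason real implementations defer such work.
    void release() {
        ChainNode n = this;
        while (n != null && --n.refs == 0) {
            ChainNode dead = n;
            n = dead.next;       // the dead node's outgoing reference dies too,
                                 // so the cascade continues with the next node
            dead.next = null;    // "free" the dead node's resources here
        }
    }
}

public class CascadeDemo {
    public static void main(String[] args) {
        ChainNode head = new ChainNode();
        ChainNode tail = head;
        for (int i = 0; i < 1_000_000; i++) {   // a long, singly linked chain
            tail.next = new ChainNode();
            tail = tail.next;
        }
        long start = System.nanoTime();
        head.release();          // one release tears down a million nodes in a row
        System.out.printf("cascading release took %.2f ms%n",
                (System.nanoTime() - start) / 1e6);
    }
}
```

The cascade happens at a single, unpredictable point in the program, which is why dropping one reference can stall just like a GC pause; an arena, by contrast, frees the whole group in one constant-cost operation.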
