A simple explanation of shared_ptr, weak_ptr, and unique_ptr in the GCC 4.7.0 library (throwing out a brick to attract jade; pictures as proof)


I have been interning for a month and a half, and the time has flown by. Liang Ge took great care of me and gave all kinds of guidance, not only on technology; he really is a good teacher and a helpful friend. After this stretch of internship I feel a bit more open-minded. Before, I was just amusing myself like a kid playing with toys; looking back now it feels quite comfortable -____-.

Now all my basic coding is done in vim, and IDEs feel like passing clouds. My GDB debugging is getting better. However, there is still no debugger on Linux as convenient as the ones on Windows; gdbtui is a bit better, but it still cannot match the level of OD, where debugging is much easier. On Linux I feel I have just gotten through the door. (Okay, newbie level.)

I had just read the smart pointer section in the second edition of the standard library book, so over the past two days I took a look at the implementation of shared_ptr and weak_ptr in the GCC 4.7.0 library.

First, shared_ptr:

[Figure: structure of shared_ptr and its reference counting block]

Next, weak_ptr:

[Figure: structure of weak_ptr]

Because smart pointers introduce reference counting, one thing to consider is the atomicity of the reference count updates under concurrency. C++11 provides native support for concurrency, including atomic types. The GCC 4.7.0 library implementation takes concurrency into account (I have not checked earlier versions): for example, both the use count and the weak count of the pointer use the _Atomic_word type (which should be an atomic type), atomic operations such as __exchange_and_add_dispatch are used when the counts are manipulated, and the reference counting block is parameterized by a _Lock_policy, which keeps the smart pointer robust under concurrency.
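To make this concrete, here is a minimal sketch of an atomically updated counting block. It is not the libstdc++ code: ControlBlock is a made-up name, and it uses std::atomic instead of _Atomic_word/__exchange_and_add_dispatch, purely to illustrate the kind of atomic count updates involved.

```cpp
#include <atomic>

// Hypothetical counting block; the real one in libstdc++ is _Sp_counted_base.
struct ControlBlock {
    std::atomic<long> use_count{1};   // strong references (shared_ptr)
    std::atomic<long> weak_count{1};  // weak references (+1 while use_count > 0)

    void add_ref() {
        // A relaxed increment is enough: taking another reference does not
        // need to order any other memory operations.
        use_count.fetch_add(1, std::memory_order_relaxed);
    }

    bool release() {
        // The decrement must synchronize between owners so that the last one
        // sees all writes made through the others; returns true for the last.
        return use_count.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
};
```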

As shown in the figure above, shared_ptr has the following members:

1. A pointer to the managed memory, _M_ptr. Note that array types are not supported here: you can pass int as the template parameter, but a template parameter such as int[] is not supported (unique_ptr does support it). If you want to manage an array, you have to pass in a deleter yourself, that is, the function that releases the memory; the default behavior is delete _M_ptr rather than delete[] _M_ptr, so the default deleter cannot be used for arrays.

2. A __shared_count object, _M_refcount, which encapsulates the related counting operations. Its main member (inside the rounded rectangle) is a pointer to the reference counting block, the _Sp_counted_base* _M_pi in the figure. _Sp_counted_base is the base class; _Sp_counted_ptr inherits from it and is the simplest case: besides the two reference counts, the counting block holds a pointer to the managed memory. _Sp_counted_deleter is used when a deleter (and possibly an allocator) is passed in, and _Sp_counted_ptr_inplace is used when an allocator is provided; these last two are the more advanced usages. The allocator mainly controls how the counting block object itself is allocated, while the deleter specifies how the managed resource is released (it does not have to be memory: you can use shared_ptr to manage a file handle by passing a deleter that closes the handle).
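A small usage sketch of the points above (standard shared_ptr API, not the internals): the default deleter calls delete, so managing an array or a non-memory resource such as a FILE* means supplying a deleter yourself.

```cpp
#include <cstdio>
#include <memory>

int main() {
    // Default deleter: `delete p` runs when the last owner goes away.
    std::shared_ptr<int> single(new int(42));

    // Array: the default deleter would be wrong, so pass delete[] explicitly.
    std::shared_ptr<int> many(new int[8], [](int* p) { delete[] p; });

    // Non-memory resource: a FILE* whose deleter closes the handle.
    std::shared_ptr<std::FILE> file(std::fopen("log.txt", "w"),
                                    [](std::FILE* f) { if (f) std::fclose(f); });
    if (file) std::fputs("managed by shared_ptr\n", file.get());

    return 0;  // single -> delete, many -> delete[], file -> fclose
}
```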

Personally, I think the smart pointer is a product of combining RAII with the way C++ compiles classes. Simply put, the compiler inserts the constructor and destructor calls at the boundaries of a local variable's lifetime, so by controlling the constructor and destructor of a local class variable we can manage the resources that this variable owns. If you are interested, look up the handle class mentioned by BS, which is an example of RAII. Back to shared_ptr: suppose we have a local variable shared_ptr<T> sp; it contains a member _M_refcount that holds a pointer to the reference counting block. At the start of sp's lifetime, the class member _M_refcount is constructed inside sp; the _M_refcount constructor either creates the reference counting block (for example, when there is none yet) or increments the reference count. Similarly, at the end of sp's lifetime _M_refcount is destructed, and its destructor decrements the reference count; when the count reaches 0, the managed memory is released. That is the idea of RAII. Note that there are two distinct allocations in this process: one for the memory managed through _M_ptr, and one for the reference counting block _Sp_counted_base.
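Here is a stripped-down RAII sketch of that idea with made-up names; it is not the libstdc++ implementation (no weak count, no atomics, copy assignment left out), only the constructor/destructor bookkeeping described above.

```cpp
#include <cstddef>

template <typename T>
class TinySharedPtr {
    T* ptr_ = nullptr;
    std::size_t* count_ = nullptr;  // stands in for the reference counting block

public:
    explicit TinySharedPtr(T* p) : ptr_(p), count_(new std::size_t(1)) {}

    TinySharedPtr(const TinySharedPtr& other) : ptr_(other.ptr_), count_(other.count_) {
        ++*count_;                  // another owner: bump the count
    }

    ~TinySharedPtr() {
        if (count_ && --*count_ == 0) {  // last owner: release both allocations
            delete ptr_;
            delete count_;
        }
    }

    TinySharedPtr& operator=(const TinySharedPtr&) = delete;  // kept minimal
    T& operator*() const { return *ptr_; }
};

int main() {
    TinySharedPtr<int> a(new int(5));
    {
        TinySharedPtr<int> b(a);    // copy: count goes to 2
        *b += 1;
    }                               // b destructed: count back to 1
    return 0;                       // a destructed: count hits 0, the int is released
}
```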

From the two figures we can see that both shared_ptr and weak_ptr contain a pointer to the managed memory. For shared_ptr this is required because of the dereference operation, but for weak_ptr I do not think this pointer is necessary, because weak_ptr has no dereference or similar operations. This is a design question: perhaps it is to keep the structures consistent, but more likely, I think, it is to improve efficiency. In shared_ptr, the raw pointer does not have to be fetched through the reference counting block, but keeping it adds some maintenance cost; for example, reset() has to update it, and after the reference count drops to 0 the resource has to be released and the pointer set to nullptr.

weak_ptr is used to solve the circular reference problem that arises when shared_ptrs reference each other. If two shared_ptrs reference each other, the reference counts of the two pointers can never drop to 0 and the resources will never be released. weak_ptr is effectively a smart pointer that manages the reference counting block. It has to be initialized from a shared_ptr, and while _M_weak_count is not 0 the reference counting block is not released. From a weak_ptr you can check whether the shared_ptr it is associated with has expired, and, if it has not expired, obtain the corresponding shared_ptr; this is a weak reference relationship.
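A usage sketch of that cycle problem and how weak_ptr breaks it (the Parent/Child classes are made up for illustration):

```cpp
#include <iostream>
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;
    ~Parent() { std::cout << "Parent destroyed\n"; }
};

struct Child {
    // A shared_ptr here would form a cycle and neither object could ever be
    // freed; a weak_ptr does not contribute to the use count.
    std::weak_ptr<Parent> parent;
    ~Child() { std::cout << "Child destroyed\n"; }
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;

    std::weak_ptr<Child> wc = p->child;
    if (auto c = wc.lock())                      // not expired: get a shared_ptr
        std::cout << "child use_count = " << c.use_count() << '\n';

    p.reset();                                   // both objects are released here
    std::cout << std::boolalpha << "expired: " << wc.expired() << '\n';
    return 0;
}
```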

The implementation in the GCC library makes heavy use of constructor overloading, and sometimes the overload resolution for constructors is annoying. Wei Ge said: try it and you will know. So whenever I have questions I go chat with GDB. A few days ago I read the type_traits implementation and kept wondering how humans could write something like that; it is invincible--it should be borrowed from boost, I think. The implementation of the STL containers in the GCC library also differs a bit from the analysis in the STL source-code analysis book. From information I found a few days ago, the GCC STL allocator does not use a memory caching policy; the main reason, according to the GCC documentation, is that a caching allocation policy would interfere with debugging memory allocation problems, so it uses the default new-based allocation. Well, I still think SGI's STL allocator is very handsome; it is really an application of the caching idea. Wei Ge also said: the caching idea is universally applicable. Respect to Wei Ge.

A rant at the end: in the evening jakin treated us to Yunnan cuisine. It was so spicy that I hardly dared eat it--I picked up chronic pharyngitis a while ago and it keeps getting worse. Annoying. Now I deeply understand that health is king!

-------------------------------- I am a split line ----------------------------------

Update:

unique_ptr is the replacement for C++98's auto_ptr. C++98 has no support for move semantics, so auto_ptr's implementation of ownership transfer lacks that core element; it still tries to imitate move semantics, but through the copy constructor and the copy assignment operator, which is imperfect. Concretely, when an auto_ptr is passed to a function as a parameter, ownership is transferred to the parameter, and it is not transferred back when the function returns, so after the call the original auto_ptr has already expired. With move semantics in C++11, you have to use std::move() to pass a unique_ptr into a function, so you know explicitly that the original unique_ptr has expired. Move semantics itself makes this clear: the standard says that after a move the source object is left in an unspecified state, rather than having it throw an exception, so we still have to follow the rules ourselves. In addition, auto_ptr does not support passing in a deleter, so it can only manage a single object (released with delete), while unique_ptr has a specialization for array types with the corresponding support, such as using [] to access the elements.
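A short usage sketch of these points: ownership is handed over explicitly with std::move, and the unique_ptr<T[]> specialization provides [] access and releases with delete[].

```cpp
#include <iostream>
#include <memory>
#include <utility>

void consume(std::unique_ptr<int> p) {
    std::cout << "consumed: " << *p << '\n';
}   // the int is deleted here, when the parameter goes out of scope

int main() {
    std::unique_ptr<int> single(new int(7));
    consume(std::move(single));                       // explicit transfer of ownership
    std::cout << std::boolalpha << "empty after move: " << !single << '\n';

    std::unique_ptr<int[]> arr(new int[3]{1, 2, 3});  // array specialization
    arr[1] = 20;                                      // operator[] instead of operator*
    std::cout << arr[0] + arr[1] + arr[2] << '\n';    // delete[] runs at scope exit
    return 0;
}
```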

Compared with shared_ptr, unique_ptr has lower performance overhead; the two are simply suited to different scenarios. One is reference counted, that is, shared ownership; the other is unique ownership. Introducing reference counting brings some memory overhead and maintenance cost, so when you are sure that ownership is unique, using unique_ptr avoids that overhead.
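A tiny check of that overhead claim, assuming a typical implementation: a unique_ptr with the default deleter is usually the size of a raw pointer, while a shared_ptr also carries the pointer to its reference counting block.

```cpp
#include <iostream>
#include <memory>

int main() {
    std::cout << "raw pointer: " << sizeof(int*) << " bytes\n";
    std::cout << "unique_ptr : " << sizeof(std::unique_ptr<int>) << " bytes\n";
    std::cout << "shared_ptr : " << sizeof(std::shared_ptr<int>) << " bytes\n";
    return 0;
}
```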

Finally, a note: smart pointers are not thread safe; the standard only provides certain guarantees, which I have not looked at yet, so I will add them later when I get the chance. For now I understand it as the same issue as multi-threaded reads and writes on STL containers. The GCC library does contain some concurrency synchronization and atomic operations, but I have not studied them in depth and do not know exactly what they guarantee. If there is really high concurrency, I think you still need to lock things yourself, to be safe.
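A hedged sketch of the "lock it yourself" advice: the counts in the counting block are updated atomically, but several threads modifying the same shared_ptr object still need external synchronization, here a plain mutex.

```cpp
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::shared_ptr<int> shared = std::make_shared<int>(0);
    std::mutex m;

    auto worker = [&](int value) {
        // Assigning to `shared` is a write to the same shared_ptr object,
        // so it is guarded by the mutex.
        std::lock_guard<std::mutex> lock(m);
        shared = std::make_shared<int>(value);
    };

    std::vector<std::thread> threads;
    for (int i = 1; i <= 4; ++i)
        threads.emplace_back(worker, i);
    for (auto& t : threads)
        t.join();

    std::cout << "final value: " << *shared << '\n';
    return 0;
}
```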
