C++ memory and process/thread learning supplement (memory leaks, semaphores)

Source: Internet
Author: User

First, I will discuss some knowledge about memory. When I mentioned memory leaks and the cost of frequent new/delete in the earlier post, I discussed it with my colleagues, and in practice there is a way to avoid the problem. I first studied the lookaside list my colleagues use. A lookaside list is a set of pre-allocated memory blocks of the same size. When an allocation request arrives, the system walks this list looking for a free block; if one is found, the request is satisfied quickly. Otherwise, the system must allocate from the paged or non-paged memory pool. Based on how frequently allocations hit the list, the system automatically adjusts the number of free blocks kept on it: the higher the allocation frequency, the more blocks the lookaside list retains. Allocating from the lookaside list simply hands out a pointer to one of the pre-allocated blocks.

You can think of the lookaside list as a memory container: at the beginning it requests one large chunk of memory from Windows, and from then on you request memory from the lookaside object instead of from Windows. The lookaside list also intelligently avoids hoarding memory: when it holds a large amount of unused memory, it automatically releases it back to Windows. It is best suited to two situations: 1. every request is for the same fixed size; 2. allocation and release happen very frequently.

My understanding of the mechanism: when memory is obtained with new, its address is saved on a list of unused blocks. When we need memory, we first search this list for an unused address; if a free block exists, we take it, remove it from the unused list, and add it to the in-use list. If the unused list is empty, a new chunk of memory is allocated and added.
In this way, one linked list tracks blocks in use and the other tracks blocks that are free. At the same time, we need to track where each block flows and where it is eventually deleted. In particular, when you hand out a list element, you must move it from the free list to the in-use list; if you do not change its position, the same block can be handed out twice and wild (dangling) pointers appear. As for memory leaks, most cases are simply a new without a matching delete; another easy leak is allocating an array with new[] but freeing it with plain delete instead of delete[].

Next, back to the semaphores I mentioned in my first blog post. Semaphores are rarely used in my work; most problems can be solved with a mutex. A mutex is used for mutual exclusion between threads, while a semaphore is used for thread synchronization. This is the fundamental difference between mutex and semaphore, namely the difference between mutual exclusion and synchronization.

Mutual exclusion: a resource allows only one visitor at a time; access is unique and exclusive. However, mutual exclusion cannot constrain the order in which visitors access the resource; access is unordered.

Synchronization: on the basis of mutual exclusion (in most cases), visitors access the resource in an orderly manner through some additional mechanism. In most cases synchronization already implies mutual exclusion, especially when the resource is written; only in rare cases are multiple visitors allowed to access the resource at the same time.

A mutex value can only be 0 or 1, while a semaphore value can be any non-negative integer. That is to say, a mutex can only arbitrate access to a single resource; it cannot solve mutual exclusion over multiple instances of a resource. A semaphore can coordinate both mutual exclusion and synchronization over multiple identical resources, and a single-value (binary) semaphore can also serve as a mutex for one resource. Finally, the lock and unlock of a mutex must be performed by the same thread, whereas a semaphore can be acquired by one thread and released by another.
I don't know about other lines of work, but when writing performance test scripts there are often hundreds of threads doing the same thing, plus a few management threads. Apart from the management threads, each thread is independent; only writing to a file, modifying a shared variable, or accessing the database can cause simultaneous access to the same resource, and a mutex prevents it from being modified by two threads at the same time. After a deeper understanding of the difference between semaphores and mutexes, I used semaphores for the 'rendezvous' concept in performance testing. Because I need many distributed clients to run and apply pressure to a resource at the same moment, the server thread managing each distributed client should issue the release command with as little skew as possible; this way I can know precisely how much pressure a given number of concurrent connections puts on the server at each rendezvous point.
