How to Detect Memory Leaks in Linux

This article discusses how to detect memory leaks in C++ programs on Linux and how to implement such detection. It covers the basic principles of new and delete in C++, the implementation principles and concrete techniques of a memory detection subsystem, and advanced topics in memory leak detection. As part of the subsystem's implementation, a Mutex class with better features is also provided.
1. Development Background

When programming with Visual C++ on Windows, we usually run the program in DEBUG mode. When the program exits under the debugger, the debugger prints information about memory that was allocated on the heap while the program ran but was never released, including the source file name, line number, and size of each allocation. This facility is a built-in mechanism of the MFC framework, encapsulated in its class hierarchy.

On Linux or Unix, our C++ programs lack comparable means of detection. We can only watch the total dynamic memory of a process with the top command, and when the program exits we learn nothing about its leaks. To better support program development on Linux, we designed and implemented a memory detection subsystem in our class library project. The following sections describe the basic principles of new and delete in C++, explain the implementation principles and techniques of the memory detection subsystem, and discuss advanced topics in memory leak detection.


2. Principles of new and delete

When we write new and delete in a program, we are actually invoking the new operator and delete operator built into the C++ language. "Built into the language" means we cannot change their meaning; their behavior is always the same. The new operator always allocates enough memory and then calls the appropriate constructor to initialize that memory; the delete operator always calls the type's destructor and then releases the memory. What we can influence is the method used to allocate and release memory during the execution of the new operator and delete operator.

The function the new operator calls to allocate memory is named operator new, and usually has the form void* operator new(size_t size). Its return type is void*: the function returns a raw pointer to uninitialized memory. The size parameter determines how much memory is allocated. You can overload operator new by adding extra parameters, but the first parameter must remain of type size_t.

The function the delete operator calls to release memory is named operator delete, usually of the form void operator delete(void* memoryToBeDeallocated). It releases the memory region the parameter points to.

A question arises here: when we call the new operator, a size parameter indicates how much memory to allocate, but when the delete operator is called there is no such parameter. How does the delete operator know the size of the memory block the pointer refers to? The answer: for the language's built-in types, the language itself can determine the block size; for custom data types (such as our own classes), operator new and operator delete must pass this information between themselves.

When we use operator new to allocate memory for an object of a custom type, the memory actually obtained is larger than the object itself: besides storing the object's data, it must also record the size of the memory block. This technique is called a cookie. Where the cookie is placed depends on the compiler. (For example, MFC stores the actual object data at the head of the allocated block, with boundary markers and size information in the latter part, while g++ stores the bookkeeping information in the first few bytes of the block and the object data after it.) When the delete operator releases the memory, operator delete can correctly release the block pointed to based on this information.

The discussion above concerns allocating and releasing memory for a single object. When we allocate or release an array, we still write new and delete, but the internal behavior differs: the new operator calls operator new[], the array counterpart of operator new, and then calls the constructor for each array element; the delete operator first calls the destructor for each element and then calls operator delete[] to release the memory. Note that when creating or destroying an array of a custom type, the compiler must identify in operator delete[] the size of the block to be released, so the compiler-specific cookie technique is used here as well.

To sum up, if we want to detect memory leaks, we must record and analyze every memory allocation and release in the program. That means overloading the four global functions operator new, operator new[], operator delete, and operator delete[] in order to intercept the memory operation information we need to examine.


3. Basic Implementation Principles of Memory Detection

As noted above, detecting memory leaks requires recording every allocation and release in the program, and the best way to do that is to overload all forms of operator new and operator delete, intercepting the memory operations performed during the execution of the new operator and delete operator. The overloaded forms are listed below:

void* operator new(size_t nSize, char* pszFileName, int nLineNum);
void* operator new[](size_t nSize, char* pszFileName, int nLineNum);
void operator delete(void* ptr);
void operator delete[](void* ptr);

For operator new we define a new version that, in addition to the required size_t nSize parameter, takes a file name and line number: those of the call site of the new operator. This information is printed when a leak is detected, helping the user locate the leak precisely. For operator delete, since no new version can be defined for it, we directly override the two global operator delete versions.

In our overloaded operator new, we first call the corresponding version of the global operator new, passing along the size_t argument. We then record the pointer value returned by the global operator new together with the file name and line number of the allocation. The data structure used is an STL map keyed by the pointer value. When operator delete is called, if the call is well formed (what that means is detailed later), we look up the passed-in pointer in the map, delete the corresponding entry, and call free to release the block the pointer refers to. When the program exits, the entries remaining in the map are exactly the leak information we are after: allocations made on the heap but never released.

That is the basic principle of the memory detection implementation. Two basic problems remain unsolved:

1) How to obtain the file name and line number of the allocating code, and have the new operator pass them to our overloaded operator new.

2) When to create the map that stores the allocation records, how to manage it, and when to print the leak information.

Consider problem 1 first. We can use the predefined C macros __FILE__ and __LINE__, which expand at compile time to the file name and line number of the location where they appear. We then need to replace the default global new operator with a custom version that passes in the file name and line number. In the subsystem header file MemRecord.h we define:

#define DEBUG_NEW new(__FILE__, __LINE__)

Then, by adding

#include "MemRecord.h"
#define new DEBUG_NEW

to a client source file, every call to the global default new operator in that file is replaced by a call of the form new(__FILE__, __LINE__). This form of the new operator calls our operator new(size_t nSize, char* pszFileName, int nLineNum), where nSize is computed and passed in by the new operator, and the file name and line number of the call site are supplied by our customized new operator. We recommend adding these macros to every source file in your project: if some files use the memory detection subsystem and some do not, the subsystem cannot monitor the whole program and may emit spurious leak warnings.

Now for the second question. The map that manages client allocation records must be created before the client program first calls the new or delete operator, and the leak information must be printed after their last call; in other words, the map must be born before the client code runs and analyzed after it exits. One thing can indeed span the client program's whole lifetime: a global object (call it appMemory). We can design a class that encapsulates the map together with its insert and erase operations, and construct a global object appMemory of that class. Its constructor creates and initializes the data structure, and its destructor analyzes and prints whatever remains in it. In operator new, the insert interface of appMemory records the pointer, file name, line number, block size and other information into the map, keyed by the pointer value; in operator delete, the erase interface removes the map entry corresponding to the pointer value. Do not forget to protect access to the map with a mutex, since several threads may be performing heap operations at the same time.

With that, the basic memory detection features are in place. But remember that to detect leaks we have added a layer of indirection inside the global operator new, plus a mutex to make access to the data structure safe, and both reduce the program's running efficiency. We should therefore let users conveniently enable and disable the detection; after all, leak detection belongs in the debugging and testing phases. We can use conditional compilation, placing the following macros in each file to be checked:

#include "MemRecord.h"
#if defined(MEM_DEBUG)
#define new DEBUG_NEW
#endif

When memory detection is needed, compile the files to be checked with:

g++ -c -DMEM_DEBUG xxxxxx.cpp

This enables the memory detection function. When your program is officially released, remove the -DMEM_DEBUG compilation switch to disable detection and eliminate its efficiency cost.

(Figure: an example run showing the execution and detection results of leaking code with the memory detection function enabled.)


4. Problems Caused by Mismatched Forms of delete

We now have a subsystem with basic leak detection. Let us look at some advanced topics in memory leak detection.

First, when writing a C++ application you sometimes create a single object on the heap and sometimes an array of objects. From the principles of new and delete we know that for a single object and for an array, the allocation and release actions differ greatly, so we should always use correctly matched forms of new and delete. In some situations, however, mistakes are easy to make, as in the following code:

class Test {};
......
Test* pAry = new Test[10]; // creates an array of 10 Test objects
Test* pObj = new Test;     // creates a single object
......
delete[] pObj; // wrong: the array form is used to release a single object
delete pAry;   // wrong: the single-object form is used to release an array

What happens when the forms of new and delete do not match? The C++ standard's answer is "undefined": nobody can guarantee what will occur, but one thing is certain: it is rarely good. With the code some compilers generate, the program may crash; with code generated by other compilers, the program may run without visible problems yet leak memory.

Since we know that mismatched new and delete cause trouble, we should expose the phenomenon without mercy. After all, we have overloaded all four memory operations: operator new, operator new[], operator delete, and operator delete[].

The first idea that comes to mind: when operator new is called in a particular form (single object or array), add a field to the pointer's record in the map describing its allocation form. When one of the forms of operator delete is called, find the pointer's record in the map and compare the allocation form with the release form. If they match, erase the record normally; if not, move the record to a list called ErrorDelete and print it together with the leak information when the program finally exits.

That approach is the most logical, but it works poorly in practice, for two reasons. The first was mentioned above: when the forms of new and delete do not match, the result is undefined. If we are unlucky and the program crashes while executing the mismatched delete, the data held by our global object appMemory no longer exists and nothing is printed. The second reason is compiler-related. As described earlier, when the compiler handles new and delete for custom data types or arrays of custom data types, it usually applies its cookie technique. One possible implementation of that technique: the new operator first computes the memory needed to hold all the objects, adds the memory needed to record the cookie, and passes the total to operator new. When operator new returns the block, the new operator records the cookie information, calls the constructors to initialize the valid data, and then returns a pointer to the valid data to the user. In other words, the pointer requested and recorded by our overloaded operator new is not necessarily the pointer the new operator hands back to the caller. When the caller later passes the pointer returned by the new operator to the delete operator, a matching call form makes the corresponding delete operator reverse the process: call the destructor the appropriate number of times, derive the address of the whole block (including the cookie) from the pointer to the valid data, and pass that address to operator delete to release the memory. If the call form does not match, the delete operator performs none of this and passes the pointer to the valid data, rather than the pointer to the whole block, straight into operator delete.
Because we recorded the pointer to the whole allocated block in operator new, while the pointer passed into operator delete is a different one, the corresponding allocation record cannot be found in the data kept by the global object appMemory.

To sum up, when the call forms of new and delete do not match, the program may crash or the memory subsystem may fail to find the matching allocation record, so the ErrorDelete list printed at exit can expose only the "lucky" mismatches. Still, we must do something: we cannot let such a dangerous error slip past our eyes. Since we cannot settle the account at the end, we will output a warning in real time to remind the user. When should the warning be raised? Very simple: whenever operator delete or operator delete[] cannot find, in the map, the allocation record for the pointer passed in, we should alert the user.

Having decided to emit warnings, the next question is how to describe them so that the user can easily locate the mismatched delete. The answer: print the file name and line number of the delete call in the warning. That is somewhat difficult, because unlike operator new we cannot create an overloaded operator delete with extra parameters; we can only keep its original interface and redefine its implementation, so only the pointer value is available inside operator delete. And if the new/delete call forms do not match, we may be unable to find the allocation record in appMemory's map. What then? We have to resort to global variables. In the implementation file of the detection subsystem we define two globals (DELETE_FILE, DELETE_LINE) to record the file name and line number at the moment operator delete is called, and, to synchronize concurrent delete operations' access to these two variables, a mutex as well. (Why it is a CCommonMutex rather than a pthread_mutex_t is discussed in detail in the "Implementation Issues" section; here it simply acts as a mutex.)

char DELETE_FILE[FILENAME_LENGTH] = {0};
int DELETE_LINE = 0;
CCommonMutex globalLock;

Then DEBUG_DELETE is defined in the header file of our detection subsystem as follows:

extern char DELETE_FILE[FILENAME_LENGTH];
extern int DELETE_LINE;
extern CCommonMutex globalLock; // explained later

#define DEBUG_DELETE globalLock.Lock(); \
    if (DELETE_LINE != 0) BuildStack(); /* see section 6 */ \
    strncpy(DELETE_FILE, __FILE__, FILENAME_LENGTH - 1); \
    DELETE_FILE[FILENAME_LENGTH - 1] = '\0'; \
    DELETE_LINE = __LINE__; \
    delete

And the macro block in each detected file gains one more line:

#include "MemRecord.h"
#if defined(MEM_DEBUG)
#define new DEBUG_NEW
#define delete DEBUG_DELETE
#endif

Now, before a detected file calls the delete operator, the user code acquires the mutex, assigns the call site's file name and line number to the globals (DELETE_FILE, DELETE_LINE), and then calls the delete operator. When the delete operator eventually calls our operator delete, we read out the caller's file name and line number, reinitialize the globals, and release the mutex, so that the next delete operator waiting on it can proceed.

With this change to the delete operator, whenever we cannot find the allocation record for the pointer passed into operator delete, we print a warning containing the file name and line number of the delete call.

Nothing in the world is perfect. Since we provide warnings about mismatched deletes, we must consider the following exceptional cases:

1. A third-party library used by the program allocates and releases memory internally, or some implementation file in the detected process allocates and releases memory without using our macros. Because we replaced the global operator delete, such delete calls are intercepted too. Since the allocations did not go through our DEBUG_NEW macro, no record exists in appMemory's data structure; but because DEBUG_DELETE was not used either, the globals DELETE_FILE and DELETE_LINE carry no values, and we can simply refrain from printing a warning.

2. One of the user's implementation files calls new without using our DEBUG_NEW macro, while code in another implementation file deletes that memory and, unfortunately, does use the DEBUG_DELETE macro. In this case the memory detection subsystem reports a warning and prints the file name and line number of the delete call.

3. The converse of case 2: one implementation file allocates with new through our DEBUG_NEW macro, while code in another file deletes that memory without using DEBUG_DELETE. In this case no warning is printed, because the original allocation record can be found.

4. When nested delete occurs (defined in the "Implementation Issues" section), cases 1 and 3 above may produce incorrect warnings; see the detailed analysis in that section.

You may feel such warnings are too loose and potentially misleading. What to do? As a detection subsystem, our principle toward possible errors is: better a false alarm than a missed one. If a warning applies to your code, fix the bug; if not, treat it as a reminder.


5. Dynamic Memory Leak Detection

The detection described so far prints, at the end of the program's life cycle, the allocations made on the heap but never released while it ran, letting the programmer find and correct "explicit" memory leaks. But if a program releases all the memory it allocated before exiting, can we say it has no leaks? No! In programming practice we have found two other, more harmful kinds of "implicit" leak: no leak appears when the program exits, yet while it runs its memory usage keeps growing until the whole system collapses.

1. One thread of the program continuously allocates memory and saves the pointers into a data store (such as a list), but no thread releases them while the program runs. Only when the program exits are the blocks referenced by the stored pointers released one by one.

2. N threads of the program allocate memory and hand the pointers to a data store, while M threads take data from the store, process it, and release the memory. Because N is much larger than M, or because the M threads take too long to process each item, memory is allocated much faster than it is released. Yet when the program exits, the blocks referenced by the pointers in the store are duly released.

These leaks are more harmful precisely because they are hard to discover. The program may run for dozens of hours without trouble and therefore pass rigorous system testing. But in the production environment, after running long enough, the system crashes from time to time, and neither the logs nor the program's behavior reveal the cause.

To address this, we added a dynamic detection module, MemSnapShot, for running programs: at fixed intervals it measures the program's total memory usage and the breakdown of its allocations, so the user can monitor the program's dynamic memory behavior.

When the user monitors a running process with the MemSnapShot process, the memory subsystem inside the monitored process sends allocation and release information to MemSnapShot in real time. At fixed intervals, MemSnapShot computes the process's total memory usage and, indexed by the file name and line number of each allocating new call, the total memory allocated but not yet released at each allocation site. If, across several consecutive intervals, the total attributed to one file and line keeps growing and never reaches a plateau or falls back, it must be one of the two problems described above.

In terms of implementation, the constructor of the subsystem's global object appMemory creates a message queue keyed on the current PID; whenever operator new or operator delete is called, the corresponding information is written to the queue. When the MemSnapShot process starts, the user enters the PID of the process to be monitored; MemSnapShot assembles the key from that PID, finds the message queue the monitored process created, and starts reading its data for analysis and statistics. On an operator new message it records the allocation; on an operator delete message it deletes the corresponding record. Meanwhile, an analysis thread computes, at fixed intervals, the memory currently allocated but not yet released, aggregates it by allocation site, and reports the total memory allocated at each site (same file name and line number) and its percentage of the process's memory.

(Figure: a running MemSnapShot session monitoring the dynamic memory allocation of a process.)

The only tricky part of implementing MemSnapShot is handling abnormal exits of the monitored process. The message queue created by the memory detection subsystem for inter-process data transfer is a kernel resource: its lifetime is that of the kernel, and it is not released unless explicitly deleted or the system is rebooted.

True, we could delete the message queue in the destructor of the global object appMemory, but if the monitored process exits abnormally (Ctrl+C, a segmentation fault and crash, and so on), the queue is left orphaned. Could we instead register signal handlers for SIGINT, SIGSEGV and the like in appMemory's constructor and delete the queue in the handler? Still no, because the monitored process may register its own signal handlers, which would replace ours. The method we finally adopted is to fork an orphan process and use it to watch the liveness of the monitored process: once the monitored process has exited (normally or abnormally), it tries to delete the message queue that process created. The implementation principle is briefly as follows:

In appMemory's constructor, after the message queue is created successfully, we call fork to create a child process; the child calls fork again to create a grandchild and then exits, making the grandchild an orphan. (An orphan is used because we must sever the signal relationship between the monitored process and the process we create.) Through the parent's global object appMemory, the grandchild obtains the monitored process's PID and the ID of the created message queue, and passes them to MemCleaner, a new program image started via an exec call.

The MemCleaner program merely calls kill(pid, 0) to probe the liveness of the monitored process. If that process no longer exists (it exited normally or abnormally), kill returns a nonzero value, and MemCleaner then cleans up any message queue left behind.


6. Implementation Issues: Nested delete

In the section on mismatched deletes, we performed minor surgery on the delete operator: we added two global variables (DELETE_FILE, DELETE_LINE) to record the file name and line number of the delete call, and a global mutex to synchronize access to them. At first we used pthread_mutex_t, but testing revealed its limitations in this application environment.

Consider the following code:

class B { ... };
class A {
public:
    A() { m_pB = NULL; }
    A(B* pb) { m_pB = pb; }
    ~A() {
        if (m_pB != NULL)
            delete m_pB;   // line number 1: the troublesome statement
    }
private:
    class B* m_pB;
    ......
};

int main() {
    A* pA = new A(new B);
    ......
    delete pA;             // line number 2
}

In the code above, the statement delete pA in main is called a "nested delete": while object A is being deleted, another delete of object B is executed inside A's destructor. With our memory detection subsystem in place, the delete pA action should translate into the following sequence:

1. Acquire the global lock; assign the file name and line number 2 to the globals (DELETE_FILE, DELETE_LINE); the delete operator for A calls ~A().
2. Inside ~A(), delete m_pB runs: acquire the global lock again; assign the file name and line number 1 to (DELETE_FILE, DELETE_LINE); the delete operator for B calls ~B().
3. ~B() returns; operator delete for B records the current value of (DELETE_FILE, DELETE_LINE), i.e. line number 1, clears the globals, and releases the global lock.
4. operator delete for B returns; the delete operator for B returns; ~A() returns.
5. operator delete for A tries to record the value of (DELETE_FILE, DELETE_LINE), clears the globals, and releases the global lock.
6. operator delete for A returns; the delete operator for A returns.

Two technical issues arise in this process: mutex re-entrancy, and protecting the contents of the globals (DELETE_FILE, DELETE_LINE) during nested deletion.

The mutex re-entrancy problem is this: within the same thread's context, lock is called several times in a row on the same mutex, followed by the same number of unlock calls. This usage requires the mutex to have the following features:

1. The same thread context can acquire the same mutex multiple times, and only the same number of unlock calls from that context relinquishes the mutex.

2. Among attempts to hold the mutex from different thread contexts, only one thread can hold it at a time; the other threads can acquire it only after the holder releases it.

A pthread_mutex_t (with default attributes) does not have these features: even within the same context, a second call to pthread_mutex_lock blocks. So we must implement our own mutex. Here we use semaphores to build a mutex class, CCommonMutex, with the required features (see the attachment for the source code).

To support feature 2, CCommonMutex encapsulates a semaphore whose resource count is initialized to 1 in the constructor. When CCommonMutex::Lock is called, sem_wait acquires the semaphore, dropping the count to 0 so that other threads calling Lock are suspended; when CCommonMutex::Unlock is called, sem_post restores the count to 1, letting one of the suspended threads acquire the semaphore.

In addition, to support feature 1, CCommonMutex checks the calling thread's pid and keeps an access count for the current holder. The first time a thread calls Lock, we call sem_wait, record the current pid in the member variable m_pid, and set the access count to 1; subsequent calls from the same thread (m_pid == getpid()) only increment the count without blocking. When Unlock is called, we decrement the count; only at the outermost Unlock do we clear the pid and call sem_post. (See the attachment for the specific code.)

Protecting the global variables (DELETE_FILE, DELETE_LINE) during nested deletion refers to the following problem: when delete m_pB runs inside A's destructor (itself triggered by delete pA in the main program), the file name and line number it writes into the globals overwrite the values stored there by delete pA. By the time operator delete for pA executes, the information about the delete pA call site has been lost.

To preserve this global information, a stack is the natural tool; we use the stack container provided by the STL. In the definition of the DEBUG_DELETE macro, before assigning to the globals (DELETE_FILE, DELETE_LINE), we first check whether they already hold a value by testing whether the line-number variable is non-zero. If it is non-zero, the existing information is saved: a global function BuildStack() pushes the current file name and line number onto a global stack, globalStack. Only then do we assign the new values and invoke delete. Correspondingly, in the erase interface of the memory subsystem's global object (appMemory), if the file name and line number passed in are 0, the data we need may have been overwritten by a nested delete, so we pop the corresponding entry from the stack and process that instead.
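A hypothetical sketch of this macro and the stack-based protection follows. The names (DEBUG_DELETE, BuildStack, globalStack, DELETE_FILE, DELETE_LINE) mirror the article, but the details of the real attachment code may differ, and the reset of the globals inside operator delete / erase is omitted here.

```cpp
#include <stack>
#include <utility>
#include <cstddef>

static const char* DELETE_FILE = 0;   // call site of the pending delete
static int         DELETE_LINE = 0;   // 0 means "no value recorded"
static std::stack<std::pair<const char*, int> > globalStack;

// Save the currently recorded call site before a nested delete overwrites it.
static void BuildStack() {
    globalStack.push(std::make_pair(DELETE_FILE, DELETE_LINE));
}

// Multi-statement macro: if the globals are already in use, push them first,
// then record this call site and perform the actual delete. (As a sketch it
// is not wrapped for use in unbraced if/else bodies.)
#define DEBUG_DELETE \
    if (DELETE_LINE != 0) BuildStack(); \
    DELETE_FILE = __FILE__;             \
    DELETE_LINE = __LINE__;             \
    delete
```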

This largely solves the nested-deletion problem. However, when the first and third exceptional cases described at the end of the section "Nested deletion and problems caused by incorrect deletion" occur together, the mechanism can still fail, because the user's delete call does not go through the DEBUG_DELETE macro. The root cause is that the stack only preserves delete information recorded by the macro, for later use in operator delete and the erase interface of the global object (appMemory). A delete issued without the macro performs no stack operation yet still reaches operator delete, which may pop information belonging to a different delete, corrupting the order and validity of the stack contents. As a result, when the memory allocation records for this and subsequent delete operations cannot be matched, spurious warnings may be printed.



The above describes the principles and design of the memory leak detection subsystem we implemented. The source code of the first version, included in the attachment, has been through rigorous system testing. Given the limits of our C++ knowledge and programming skill, there are surely oversights or even defects in the implementation; corrections from readers are welcome.

Building on the memory detection subsystem, we can go on to implement a memory allocation optimization subsystem, forming a complete memory subsystem. One implementation approach is to allocate a large block of memory up front and manage it with a dedicated data structure; when an allocation request arrives, a specific algorithm carves the required portion out of the large block for the user, and when the user is finished with it, the portion is returned to the free pool. This turns memory allocation and release into simple data-structure operations, greatly reducing the time spent acquiring and releasing memory.
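The idea can be illustrated with a minimal fixed-size free-list pool: one large block is obtained once, chunks are handed out from it, and allocate/free become O(1) pointer manipulation on a free list. This is an illustrative sketch under the assumptions above, not the article's attachment code; it ignores fallback growth and thread safety.

```cpp
#include <cstddef>
#include <new>

class FixedPool {
public:
    FixedPool(std::size_t chunkSize, std::size_t chunkCount)
        : m_chunk(chunkSize < sizeof(void*) ? sizeof(void*) : chunkSize) {
        m_raw = ::operator new(m_chunk * chunkCount);  // one large allocation
        // Thread every chunk onto the free list.
        m_free = 0;
        char* p = static_cast<char*>(m_raw);
        for (std::size_t i = 0; i < chunkCount; ++i, p += m_chunk) {
            *reinterpret_cast<void**>(p) = m_free;
            m_free = p;
        }
    }
    ~FixedPool() { ::operator delete(m_raw); }

    void* allocate() {                 // O(1): pop the free-list head
        if (!m_free) return 0;         // pool exhausted (a real pool might grow)
        void* p = m_free;
        m_free = *reinterpret_cast<void**>(m_free);
        return p;
    }

    void deallocate(void* p) {         // O(1): push back onto the free list
        *reinterpret_cast<void**>(p) = m_free;
        m_free = p;
    }

private:
    std::size_t m_chunk;   // size of each chunk, at least one pointer wide
    void*       m_raw;     // the single large block
    void*       m_free;    // head of the intrusive free list
};
```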


1. More Effective C++, Scott Meyers, translated by Hou Jie

2. Effective C++, Scott Meyers, translated by Hou Jie

3. Inside the C++ Object Model, Stanley B. Lippman, translated by Hou Jie

4. Advanced Programming in the UNIX Environment, W. Richard Stevens

5. Source code (attachment): detection subsystem, dynamic memory monitoring, custom mutex class, and a simple demo program