User-defined operator new and operator delete can be used to implement a custom memory allocation policy, which in some cases improves program efficiency.
 
 
 
The specific workflow of a new expression in C++ is as follows:


 1. Call operator new to acquire raw, uninitialized memory.

 2. Use a placement new expression to run the class constructor in that memory.

 3. Return the address of the constructed object.
 
 
A delete expression performs the reverse steps:


 1. Call the destructor of the object.

 2. Call operator delete to release the memory.
 
 
For example:
 
#include <iostream>
using namespace std;

class Test {
public:
    Test()  { cout << "Test"  << endl; }
    ~Test() { cout << "~Test" << endl; }
};

int main(int argc, char const *argv[])
{
    // pt points to raw, uninitialized memory
    Test *pt = static_cast<Test *>(operator new[](5 * sizeof(Test)));

    for (int ix = 0; ix != 5; ++ix)
        new (pt + ix) Test();   // placement new: run the constructor in place

    for (int ix = 0; ix != 5; ++ix)
        pt[ix].~Test();         // run the destructor; the memory is not released

    operator delete[](pt);      // release the raw memory
}
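In the snippet above, operator new and the destructor are invoked explicitly. The same machinery can also be hooked per class: when a class declares operator new and operator delete as member functions, the compiler calls those overloads automatically for every new and delete expression involving that class. A minimal sketch of such overloads (the class name and messages are illustrative, not from the original example):

#include <cstddef>
#include <iostream>
#include <new>

class Tracked {
public:
    // member overloads, chosen automatically by "new Tracked" / "delete p"
    void *operator new(std::size_t sz)
    {
        std::cout << "Tracked::operator new" << std::endl;
        return ::operator new(sz);      // delegate the actual allocation
    }
    void operator delete(void *p)
    {
        std::cout << "Tracked::operator delete" << std::endl;
        ::operator delete(p);           // delegate the actual deallocation
    }
};

int main()
{
    Tracked *p = new Tracked;   // Tracked::operator new, then the constructor
    delete p;                   // the destructor, then Tracked::operator delete
}

The CachedObject base class below relies on exactly this lookup rule: member operator new and operator delete are inherited, so every class derived from it automatically routes its allocations through the cache.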
 
 
A simple memory allocator base class is given below. Any class that inherits from it gets a custom operator new and operator delete.

This example is adapted from C++ Primer, 4th edition.

The general idea is to use static members to maintain a free list of unused memory blocks.
 
#ifndef CACHED_OBJECT_HPP
#define CACHED_OBJECT_HPP

#include <memory>
#include <stdexcept>
#include <iostream> // debug

template <typename T>
class CachedObject {
public:
    void *operator new(std::size_t);
    void operator delete(void *, std::size_t);
    virtual ~CachedObject() {}
protected:
    T *next_;
private:
    static void addToFreeList(T *);   // add a block to the free list
    static std::allocator<T> alloc_;  // memory allocator
    static T *freeStore_;             // head of the free list
    static const std::size_t chunk_;  // number of blocks allocated at a time
};

template <typename T> std::allocator<T> CachedObject<T>::alloc_;
template <typename T> T *CachedObject<T>::freeStore_ = NULL;
template <typename T> const std::size_t CachedObject<T>::chunk_ = 24;

template <typename T>
void *CachedObject<T>::operator new(std::size_t sz)
{
    if (sz != sizeof(T))
        throw std::runtime_error("CachedObject: wrong size object in operator new");
    std::cout << "operator new" << std::endl; // debug

    // no free memory: allocate a chunk and thread every block onto the free list
    if (freeStore_ == NULL) {
        T *array = alloc_.allocate(chunk_);
        for (std::size_t ix = 0; ix != chunk_; ++ix)
            addToFreeList(&array[ix]);
    }

    // take the first block off the free list
    T *p = freeStore_;
    freeStore_ = freeStore_->CachedObject<T>::next_;
    return p;
}

template <typename T>
void CachedObject<T>::operator delete(void *p, std::size_t)
{
    std::cout << "operator delete" << std::endl; // debug
    if (p != NULL)
        addToFreeList(static_cast<T *>(p));
}

template <typename T>
void CachedObject<T>::addToFreeList(T *p)
{
    // push onto the head of the free list
    p->CachedObject<T>::next_ = freeStore_;
    freeStore_ = p;
}

#endif /* CACHED_OBJECT_HPP */
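The sz != sizeof(T) check guards against a subtle misuse: member operator new is inherited, so a class that derives further from a cached class and adds its own data members would otherwise be handed a block that is too small. A short illustration of this, assuming the header is saved as "cachedobject.hpp" (the class names Small and Big are hypothetical, not part of the original code):

#include "cachedobject.hpp"
#include <iostream>
#include <stdexcept>

class Small : public CachedObject<Small> { };

// Big inherits Small's operator new but, with typical layouts, is larger than Small
class Big : public Small {
    double extra_;
};

int main()
{
    try {
        Big *p = new Big;   // passes sizeof(Big) != sizeof(Small): operator new throws
        delete p;
    } catch (const std::runtime_error &e) {
        std::cout << e.what() << std::endl;
    }
}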
Each new expression calls our custom operator new, which takes a block from the free list; only when the list is empty is a real allocation performed to refill it.

Each delete returns the block to the free list instead of releasing it.

This avoids going back to the underlying allocator on every new, which reduces allocation overhead.
 
The test code is as follows:
 
# Include "cachedobject. HPP "# include <iostream> using namespace STD; // use the inherited policy to use this memory distributor class test: Public cachedobject <Test >{}; int main (INT argc, char const * argv []) {// call the custom new allocation memory test * PT = new test; Delete pt; // call the default new and delete PT values = :: new test;: delete pt; // do not call custom new and delete Pt = new test [10]; Delete [] PT ;} 
A simple memory allocator