The stack and the heap (the managed heap) both live in the virtual memory of the process (4 GB of virtual address space per process on 32-bit processors).
Stack
Value types are stored on the stack.
The stack actually grows downward: it fills from high memory addresses toward low memory addresses.
The stack works by allocating memory for variables and then releasing it again, following the first-in, last-out (LIFO) principle.
Variables on the stack are released in the reverse order of their allocation, which guarantees that the LIFO rule never conflicts with the variables' lifetimes.
Stack performance is very high, but it is not flexible enough for every variable: the lifetimes of the variables must be strictly nested.
Often we want a method to allocate memory for data that can still be used long after the method returns. That is where the heap (the managed heap) comes in.
Heap (managed heap)
The heap (managed heap) stores reference types.
This heap is not the same as the traditional C/C++ heap: the heap in .NET is managed automatically by the garbage collector.
Unlike the stack, the heap is allocated from the bottom up, so the free space sits above the space already in use.
For example, create an object:
Customer cus;
cus = new Customer();
Declaring a Customer reference cus allocates storage on the stack for the reference only; this is just a reference, not the actual Customer object.
cus occupies 4 bytes and will hold the address of the Customer object.
Memory on the heap is then allocated to store an instance of the Customer object. Assuming the instance takes 32 bytes, the
.NET runtime searches the heap for the first unused contiguous 32-byte block in which to store the Customer instance,
and then assigns the address of that block to the cus variable.
As this example shows, establishing an object reference is more involved than establishing a value-type variable, and some performance cost is unavoidable.
In fact, the .NET runtime must keep track of the state of the heap, and the reference variable on the stack has to be updated whenever new data is placed on the heap; all of this costs performance.
In return, this mechanism frees memory allocation from the constraints of the stack: if you assign the value of one reference variable to another variable of the same type, both variables refer to the same object on the heap (see the sketch below).
When a reference variable goes out of scope, it is removed from the stack; but the object it referred to remains on the heap until the program ends, or until no variable refers to it any longer and the garbage collector removes it.
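A minimal sketch of the mechanism just described, assuming a trivial stand-in for the Customer class (the real class is not shown in the text):
using System;

class Customer { }   // stand-in for the Customer class used in the text

class ReferenceDemo
{
    static void Main()
    {
        Customer cus = new Customer();   // cus (on the stack) holds the address of the object on the heap
        Customer cus2 = cus;             // copies the reference only: both variables point at the same object

        cus = null;                      // the first reference is gone...
        Console.WriteLine(cus2 != null); // True: the object is still alive because cus2 refers to it
        // Only when no variable refers to the object can the garbage collector reclaim it.
    }
}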
Boxing conversions
using System;
class Boxing
{
    public static void Main()
    {
        int i = 110;
        object obj = i;
        i = 220;
        Console.WriteLine("i={0},obj={1}", i, obj);
        obj = 330;
        Console.WriteLine("i={0},obj={1}", i, obj);
    }
}
When the integer variable i is defined, the memory it uses is allocated on the stack. The second statement (object obj = i;) is a boxing operation: it stores a copy of the value 110 in an object on the managed heap, while the object-type variable obj itself lives on the stack and points to that boxed int. Because the boxed value is only a copy of i, changing i afterwards does not affect obj.
So the running result is
i=220,obj=110
i=220,obj=330
Memory is typically divided into four areas (a rough C# mapping is sketched after this list):
Global data area: stores global variables, static data, and constants
Code area: stores all of the program's code
Stack area: holds local variables, parameters, return values, return addresses, and so on, allocated as the program runs
Heap area: the free storage area
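As a rough, hedged illustration of how typical C# program elements map onto these areas (the mapping is conceptual, and the class and member names are made up for the example):
using System;

class Zones
{
    static int counter = 0;          // static data: global data area
    const double Pi = 3.14159;       // constant: global data area

    static void Main()               // the compiled code of Main: code area
    {
        int local = 42;              // local value-type variable: stack area
        string text = new string('x', 3);   // the string object itself: heap area (its reference: stack)
        Console.WriteLine("{0} {1} {2} {3}", local, text, counter, Pi);
    }
}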
A value-type variable and a reference-type variable do not share the same memory-allocation model. To understand this, the reader
must first distinguish two different memory areas: the thread stack and the managed heap.
Each running program corresponds to a process, and within a process there can be one or more
threads. Each thread has its own private area, called the thread stack (1 MB in size by default), used to hold
its own data, such as local variables defined in a function and the argument values passed when a function is called. This part of memory
is allocated and reclaimed without any programmer intervention.
Variables of all value types are allocated on the thread stack.
Another area of memory, called the heap, is managed by the CLR in the .NET managed environment,
and is therefore also known as the managed heap.
When you create an object of a class with the new keyword, the memory cells assigned to the object are located in the managed heap.
In a program we can create any number of objects with the new keyword, so the memory resources in the managed heap
can be requested and used dynamically; of course, they must eventually be returned.
An analogy makes this easier to understand: the managed heap is like a hotel, and its rooms are the memory
units the managed heap owns. When a programmer creates an object with new, it is like a guest booking a room: the hotel manager first checks
whether there is a suitable empty room and, if there is, makes that room available to the guest. When the guest's trip is over
and they check out, the room becomes available to serve other guests.
As can be seen from table 1, there are four kinds of reference types: class types, interface types, array types, and delegate types.
The objects referred to by variables of all these reference types have their memory allocated in the managed heap.
Strictly speaking, the "object variable" we often speak of is actually a reference variable of a class type. In practice, however, people often
call any reference-type variable an "object variable" and use the term for all four kinds of reference variables. Where it causes no
confusion, this book follows the same practice.
Once the object memory model is understood, the meaning of assigning one object variable to another becomes clear. Consider the following
code (example project ReferenceVariableForCS):
01 class A
02 {
03     public int i;
04 }
05 class Program
06 {
07     static void Main(string[] args)
08     {
09         A a;
10         a = new A();
11         a.i = 100;
12         A b = null;
13         b = a;                              // assignment between object variables
14         Console.WriteLine("b.i=" + b.i);    // b.i = ?
15     }
16 }
Note statements 12 and 13.
The running result of the program is:
b.i=100
Let the reader ponder: what does assigning one object variable to another actually mean?
In fact, it means that after the assignment, the memory cells occupied by the two object variables hold
exactly the same content.
In detail:
After statement 10 creates the object, its starting address (say "1234 5678") is placed into the 4
bytes of memory belonging to variable a itself.
Statement 12 defines another object variable, b, whose value is initially null (that is, its 4-byte memory cell holds
"0000 0000").
After statement 13 executes, the value of variable a is copied into b's memory cell, so the value in b's memory cell is now
"1234 5678".
From the object memory model described earlier, we know that variables a and b now point to the same instance object.
If you modify the value of field i through b.i, a.i changes along with it, because a.i and b.i actually denote the same field
of the same object.
The whole process is illustrated in Figure 9.
Figure 9: Assignment between object variables
This leads to an important conclusion:
Assigning one object variable to another does not copy the object itself; the result is that the two object variables point to the same object.
In addition, because the object variables here are local variables, the object variables themselves live on the thread stack.
Strictly distinguishing objects from object variables is one of the keys to object-oriented programming.
Because an object variable behaves like a pointer to an object, the question naturally arises of how to judge whether two object variables
refer to the same object.
C# uses the "==" operator to test whether two object variables refer to the same object, and "!=" to test whether two object variables
refer to different objects. See the following code:
// a1 and a2 reference different objects
A a1 = new A();
A a2 = new A();
Console.WriteLine(a1 == a2);    // output: False
a2 = a1;                        // a1 and a2 now refer to the same object
Console.WriteLine(a1 == a2);    // output: True
Note that if "==" is used between variables of a value type, the contents of the variables are compared:
int i = 0;
int j = 100;
if (i == j)
{
    Console.WriteLine("i and j are equal");
}
Understanding the difference between a value type and a reference type is critical in object-oriented programming.
1. Types, objects, the stack, and the managed heap
When C# types and objects use computer memory, two kinds of memory are generally involved: one
is called the stack, and the other is called the managed heap. Below we use a sharp-cornered rectangle to represent the stack
and a rounded rectangle to represent the managed heap.
First, we discuss how a method's internal variables are stored.
For example, suppose there are two methods, Method_1 and Add, as follows:
public void Method_1()
{
    int value1 = 10;                    // 1
    int value2 = 20;                    // 2
    int value3 = Add(value1, value2);   // 3
}
public int Add(int n1, int n2)          // 4
{
    int sum = n1 + n2;                  // 5
    return sum;                         // 6
}
The execution of this code is illustrated by a series of pictures:
Each of the pictures above corresponds to a step in the program. When Method_1 starts executing, value1 is pushed onto the top of the stack, then value2. Next comes the call to the Add method. Because the method has two parameters, n1 and n2 are pushed onto the stack in turn; and because the called method has a return value, the return address of Add is also saved. Execution then enters the Add method, where sum is assigned its value and pushed onto the stack. When return executes, sum is popped off the top of the stack, the saved return address is found, and control returns to that address. Back in Method_1 we want to assign the return value of Add to value3, so the return address is popped off the stack as well and value3 is pushed onto the stack. Although the result of this example is not of much use in itself, it illustrates nicely how a method's variables interact with the stack, and it also shows why a method's local variables cannot be accessed after the method has returned.
Next, consider how classes and objects behave on the managed heap and the stack.
Look at the code first:
class Car
{
    public void Run()
    {
        Console.WriteLine("All normal");
    }
    public virtual double GetPrice()
    {
        return 0;
    }
    public static void Purpose()
    {
        Console.WriteLine("Manned");
    }
}
class Bmw : Car
{
    public override double GetPrice()
    {
        return 800000;
    }
}
Above are two classes, a parent and a child; Bmw inherits from Car.
Because the parent class has a virtual GetPrice method, the Bmw class can
override that method.
Next, look at the calling code.
public void Method_A()
{
    double carPrice;                 // 1
    Car car = new Bmw();             // 2
    carPrice = car.GetPrice();       // call the virtual method (the overridden method actually runs)
    car.Run();                       // call an instance method
    Car.Purpose();                   // call a static method
}
This method is also fairly simple: it defines a variable to receive the price, and it
defines a variable of the parent-class type that is instantiated with the subclass.
Next, we walk through the steps
and look at the run-time stack and the managed heap.
It should be explained here that the type itself lives on the managed heap, and each type object is divided into four parts:
a type object pointer (used to associate objects with the type), a sync block index (used for synchronization, for example between threads),
the static members (static members belong to the type, so they appear in the type object), and
a method table (each entry in the method table corresponds to a specific method).
When the first step of Method_A executes:
carPrice is on the stack but has no value yet.
When Method_A reaches the second step, that step can actually be split into two parts:
Car car;
car = new Bmw();
Look at Car car; first.
Here car is a variable local to the method, so it is pushed onto the stack.
Now look at car = new Bmw();.
This is the instantiation step, and car now refers to an object.
Here a subclass is used to instantiate a variable of the parent type: the object is actually of the subclass type, but
the static type of the variable is the parent class.
Next, Method_A calls car.GetPrice().
For car, this method is a virtual method (and the subclass overrides it). A virtual
method is not dispatched according to the variable's static type: the virtual method in the Car class is not the one executed;
instead, the method on the class corresponding to the actual object is executed, namely GetPrice in Bmw.
When Method_A executes car.Run(), Run is an ordinary instance method,
so the Run method defined in the Car class is executed.
When Method_A calls the Purpose method, the variable car is not used;
instead the method is invoked through the class name Car, because static members
are allocated with the type. Even if you create multiple instances of Car, there is only one copy of its static members,
and it lives with the type, not with the objects.
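To tie the three kinds of calls together, here is a small, hedged demonstration that assumes the Car and Bmw classes defined above are in the same project; the outputs shown follow from those definitions:
using System;

class CallDemo
{
    static void Main()
    {
        Car car = new Bmw();

        Console.WriteLine(car.GetPrice()); // 800000: virtual call, the Bmw override runs
        car.Run();                         // "All normal": ordinary instance method from Car
        Car.Purpose();                     // "Manned": static method, called through the type name,
                                           // because static members have one copy stored with the type,
                                           // not with any particular instance
    }
}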
In a 32-bit Windows operating system, each process can use 4 GB of memory thanks to virtual addressing. This 4 GB of virtual memory holds the executable code, the DLLs the code loads, and all the variables used while the program runs. In C#, two areas of this virtual memory are used to store variables: one is called the stack and the other the managed heap. The managed heap is where .NET differs from other languages: the stack stores value-type data, while the managed heap stores reference types such as classes and objects and is controlled and managed by the garbage collector. On the stack, once a variable goes out of use, the memory it occupied is reused by other variables and the value stored there is overwritten; but sometimes we want a value to keep existing, and that is what the managed heap is for. A few lines of code illustrate how this works. Assume a class Class1 has been defined:
Class1 object1;
object1 = new Class1();
The first line defines a Class1 reference, which essentially allocates a 4-byte slot on the stack; that slot will hold the address of an object in the managed heap (in 32-bit Windows a memory address takes 4 bytes). The second line instantiates the object: it carves out a block of memory in the managed heap to store a concrete object of class Class1. If that object needs 36 bytes, then object1 actually holds the starting address of a contiguous 36-byte block in the managed heap. This also shows why the C# compiler does not allow you to use an object that has not been instantiated: the object does not yet exist in the managed heap. When the object is no longer in use, the reference variable stored on the stack is removed; but, as the mechanism above makes clear, the object it pointed to still exists in the managed heap. Its space is freed later by the garbage collector, not at the moment the reference variable goes out of scope.
While using a computer you may have noticed that programs get slower and slower after the machine has been running for a long time. One important reason is that a lot of memory fragmentation accumulates in the system: because programs repeatedly create and release variables in memory, over time the available memory is no longer a contiguous space, and addressing those variables adds to the system's overhead. This situation is greatly improved in .NET. As part of its work, the garbage collector compacts the memory of the managed heap, ensuring that the live data occupies a contiguous block of memory, and it updates the addresses stored in the reference variables on the stack to the new locations. This brings some extra overhead, but the benefits outweigh it. Another benefit is that programmers no longer have to spend a lot of time worrying about memory leaks.
Of course, a C# program does not contain only reference-type variables; it also uses value types and resources that the managed heap cannot manage, such as file handles, network connections, and database connections. Releasing these resources is still the programmer's responsibility, done through a destructor or the IDisposable interface.
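A minimal sketch of deterministic cleanup through IDisposable, using the using statement (FileStream is a real BCL type; the file name is made up for the example):
using System.IO;

class UsingDemo
{
    static void Main()
    {
        using (FileStream fs = new FileStream("data.txt", FileMode.OpenOrCreate))
        {
            fs.WriteByte(42);
        } // Dispose() is called here, releasing the file handle without waiting for the garbage collector
    }
}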
On the other hand, in some cases a C# program also needs to pursue speed, for example when operating on an array with a large number of elements. If you still operate through the usual class mechanism, performance will not be very good, because an array in C# is actually an instance of System.Array and is stored in the managed heap, which adds a lot of extra work to every operation: besides compacting the managed heap and updating reference addresses, the garbage collector also maintains bookkeeping information about the heap. Fortunately, C# also allows you to write unsafe code in the style that C++ programmers typically like: inside a code block marked unsafe you can use pointers just as in C++, and the variables are stored on the stack. In this case an array can be declared with the stackalloc keyword, for example an array that stores 50 doubles:
double* pDouble = stackalloc double[50];
stackalloc allocates space on the stack for 50 values of type double and assigns the starting address to pDouble. You can then manipulate the array with pDouble[0], *(pDouble + 1), and so on. As in C++, when you use pointers you must know exactly what you are doing and make sure you only access memory you are entitled to, or unexpected errors will occur.
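A minimal sketch of how such a stackalloc block might look in context (assuming the project is compiled with /unsafe):
using System;

class StackAllocDemo
{
    static unsafe void Main()
    {
        // Allocate 50 doubles directly on the stack; the garbage collector is not involved.
        double* pDouble = stackalloc double[50];

        for (int i = 0; i < 50; i++)
            pDouble[i] = i * 0.5;          // index syntax, as with a normal array

        Console.WriteLine(pDouble[0]);     // 0
        Console.WriteLine(*(pDouble + 1)); // 0.5, pointer arithmetic as in C++
    }
}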
Mastering how the managed heap, the stack, the garbage collector, and unsafe code work will help you become a truly good C# programmer.
Each thread in a process has its own stack, an address range that is reserved when the thread is created; what we call "stack memory" lives there. As for "heap" memory, I personally think that before new is used, the heap is free address space that is neither "reserved" nor "committed"; what new does is reserve (and commit) an address range out of that free space.
A stack is a storage area that the operating system establishes for a process or a thread (a thread, on a multithreaded operating system). It works last-in, first-out (LIFO), and the size of the required stack is fixed at compile time. In programming languages such as C++, all local variables take their memory from the stack; in fact nothing is really "allocated" at all, the space at the top of the stack is simply used, and on exiting the function the stack pointer is merely adjusted so that the stack's contents are discarded. That is why the stack is the fastest.
The heap is memory that the application requests from the operating system while it is running, usually through a request/grant procedure. In C/C++, malloc/new request heap allocations and free/delete release them. Because the allocation goes through the operating system's memory management, allocating and destroying take time, so using the heap is less efficient. The benefit of the heap, however, is its flexibility; note that C++ does not initialize the heap memory it allocates.
In Java, everything except the simple types (int, char, and so on) is allocated on the heap, which is also a major reason why Java programs are slower. Unlike C++, however, heap memory in Java is automatically initialized. All objects in Java (including Integer, the wrapper of int) are allocated on the heap, but references to objects are allocated on the stack. This means that when an object is created, memory is allocated in two places: the memory allocated on the heap actually holds the object, while the memory allocated on the stack is just a pointer (reference) to the heap object.
Of all the technologies in .NET, perhaps the most controversial is garbage collection (GC). As part of the .NET Framework, the managed heap and the garbage collection mechanism are unfamiliar concepts to most of us. This article discusses the managed heap and how you can benefit from it.
Why a managed heap?
The .NET Framework includes a managed heap, which every .NET language uses when allocating reference-type objects. Lightweight items such as value types are always allocated on the stack, but all class instances and arrays are created in a pool of memory that is the managed heap.
The basic algorithm for the garbage collector is simple:
Mark all managed memory as garbage
Find the memory blocks that are being used and mark them as valid
Free all memory blocks that are not in use
Defragment the heap to reduce fragmentation
Managed heap optimization
It may seem simple, but the steps the garbage collector actually takes, together with the rest of the heap-management system, are far from trivial and often involve optimizations designed to improve performance. For example, traversing the entire memory pool during garbage collection is expensive. Research shows, however, that most objects allocated on the managed heap have a short lifetime, so the heap is divided into three segments called generations. Newly allocated objects are placed in generation 0. Generation 0 is collected first, and it is the place where memory that is no longer in use is most likely to be found; because it is small enough to fit into the processor's L2 cache, collecting it is the fastest and most efficient.
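A small, hedged illustration of generations using the public GC API; exact promotion behavior depends on the runtime and GC mode, but the output shown is typical:
using System;

class GenerationDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // 0: newly allocated objects start in generation 0

        GC.Collect();                           // force a collection; survivors are promoted
        Console.WriteLine(GC.GetGeneration(o)); // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // typically 2

        Console.WriteLine(GC.MaxGeneration);    // 2: the managed heap has three generations (0, 1, 2)
    }
}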
Another optimization of the managed heap is related to the locality-of-reference rule. This rule says that objects allocated together tend to be used together. If objects sit close together in the heap, cache performance improves. Because of the nature of the managed heap, objects are always allocated at consecutive addresses and the heap is kept compact, so objects stay close to one another and are never separated by large distances. This is in stark contrast to standard heap allocation in unmanaged code, where the heap easily becomes fragmented and objects that were allocated together often end up far apart.
There is also an optimization related to large objects. In general, large objects have long lifetimes. When a large object is allocated in the .NET managed heap, it is placed in a special part of the heap that is never compacted, because the cost of moving large objects outweighs whatever performance could be gained by compacting that part of the heap.
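A quick, hedged way to observe this special region (the large object heap): objects over roughly 85,000 bytes report generation 2 immediately, which is typical behavior of the .NET GC:
using System;

class LargeObjectDemo
{
    static void Main()
    {
        byte[] small = new byte[100];
        byte[] large = new byte[100000];   // above the ~85,000-byte threshold: goes to the large object heap

        Console.WriteLine(GC.GetGeneration(small)); // 0
        Console.WriteLine(GC.GetGeneration(large)); // 2: the large object heap is reported as generation 2
    }
}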
Questions about external resources (External Resource)
The garbage collector can effectively manage resources allocated from the managed heap, but it reclaims them only when memory is tight and a collection is triggered. So how should a class manage a limited resource such as a database connection or a window handle? Waiting until garbage collection is triggered to clean up a database connection or file handle is not a good approach and can severely degrade the performance of the system.
All classes that hold external resources should expose a Close or Dispose method to be called when those resources are no longer in use. As of Beta2 (every mention of Beta2 in this article means the .NET Framework Beta2; this will not be noted again), the Dispose pattern is implemented through the IDisposable interface. This is discussed in later sections of this article.
Classes that need to clean up external resources should also implement a finalizer. In C#, the preferred way to create a finalizer is to write a destructor, whereas at the framework level a finalizer is implemented by overriding the System.Object.Finalize method. The following two ways of writing a finalizer are equivalent:
~OverdueBookLocator()
{
    Dispose(false);
}
And:
public void Finalize()
{
    base.Finalize();
    Dispose(false);
}
In C#, defining both a Finalize method and a destructor in the same class results in an error.
Unless you have a good reason, you should not create destructors or Finalize methods. Finalizers degrade the performance of the system and increase memory overhead at run time. Moreover, because of the way finalizers are executed, you cannot guarantee when a finalizer will run.
Details of memory allocation and garbage collection
After forming a general impression of the GC, let's discuss the details of allocation and collection in the managed heap. The managed heap looks nothing like the traditional heap familiar from C++ programming. In a traditional heap, data structures are used to keep track of the free blocks of memory, and finding a free block of a specific size is time-consuming, especially when memory is heavily fragmented. In the managed heap, by contrast, memory is kept as a contiguous array, and a pointer always marks the boundary between the memory already in use and the memory not yet used. When memory is allocated, the pointer is simply incremented, and one benefit is that allocation becomes far more efficient.
When objects are allocated, they are initially placed in generation 0. When generation 0 is about to reach its size limit, a collection that examines only generation 0 is triggered. Since generation 0 is small, this is a very fast GC pass, and it leaves generation 0 completely refreshed: objects that are no longer in use are freed, and the objects still in use are compacted and moved into generation 1.
When generation 1 approaches its own limit as objects keep arriving from generation 0, a collection is triggered that covers both generation 0 and generation 1. As in generation 0, objects no longer in use are freed, and the objects still in use are compacted and moved into the next generation. Most GC passes target generation 0, because that is where large numbers of short-lived temporary objects are most likely to be found. Collecting generation 2 is expensive and is triggered only if collecting generations 0 and 1 does not free enough memory. If a generation 2 collection still does not free enough memory, the system throws an OutOfMemoryException.
The garbage collection process for objects with finalizers is slightly more complicated. When an object with a finalizer is marked as garbage, it is not released immediately. Instead, it is placed in the finalization queue, which holds a reference to the object and so prevents it from being collected. A background thread runs the finalizer of each object in the queue and then removes it from the finalization queue. Only objects whose finalizers have already run are removed from memory during the next garbage collection. One consequence is that an object waiting to be finalized may well be promoted into a higher generation before it is cleaned up, which increases the delay before its memory is reclaimed.
An object that needs finalization should implement the IDisposable interface so that client code can perform the cleanup promptly through this interface. The IDisposable interface contains a single method, Dispose. The interface, introduced in Beta2, formalizes a pattern that was already widely used before Beta2. In essence, an object that needs finalization exposes a Dispose method, which is used to release external resources and suppress finalization, as the following program fragment demonstrates:
public class OverdueBookLocator : IDisposable
{
    ~OverdueBookLocator()
    {
        InternalDispose(false);
    }

    public void Dispose()
    {
        InternalDispose(true);
    }

    protected void InternalDispose(bool disposing)
    {
        if (disposing)
        {
            GC.SuppressFinalize(this);
            // Dispose of managed objects here if disposing.
        }
        // Free external resources here.
    }
}
These are all concepts of the CLR in .NET; they are not really specific to C#.
Code developed with a language compiler that targets the CLR is called managed code.
The managed heap is the basis of automatic memory management in the CLR. When a new process is initialized, the runtime reserves a contiguous region of address space for it. This reserved address space is called the managed heap. The managed heap maintains a pointer to the address at which the next object will be allocated; initially, that pointer is set to the managed heap's base address.
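A purely conceptual sketch of the "bump the pointer" allocation described above; this is not the CLR's actual implementation, just an illustration of the idea:
using System;

class BumpAllocator
{
    private readonly byte[] _reserved = new byte[1024 * 1024]; // the contiguous reserved region
    private int _next = 0;                                     // points at the next free offset

    // Returns the offset at which the "object" of the given size now lives.
    public int Allocate(int size)
    {
        if (_next + size > _reserved.Length)
            throw new OutOfMemoryException();   // a real GC would collect and compact instead

        int address = _next;   // the object starts where the pointer currently points
        _next += size;         // allocation is just a pointer increment
        return address;
    }
}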
Take a look at the MSDN Library and you'll find out what these concepts are.
The following code illustrates this very vividly:
// Reference type (a 'class' type)
class SomeRef { public Int32 x; }
// Value type (a 'struct')
struct SomeVal { public Int32 x; }
static void ValueTypeDemo()
{
    SomeRef r1 = new SomeRef();   // allocated in the managed heap
    SomeVal v1 = new SomeVal();   // allocated on the stack
    r1.x = 5;                     // dereferences the pointer
    v1.x = 5;                     // modified on the stack
    SomeRef r2 = r1;              // copies the reference (pointer) only
    SomeVal v2 = v1;              // allocates on the stack, then copies the members
    r1.x = 8;                     // changes r1.x and r2.x
    v1.x = 9;                     // changes v1; v2 is unchanged
}
The stack is an efficient area of memory used entirely for storing local variables and other value-type data, but it is limited in size.
The managed heap can occupy far more memory than the stack, though access to it is slower. The managed heap is used only to allocate memory; releasing that memory is typically handled by the CLR (Common Language Runtime).
When value-type data is created, memory for it is allocated on the stack.
When a reference-type object is created, memory is allocated on the managed heap and a reference to the object is returned. Note that this reference, like other local variables, is stored on the stack; the value the reference points to lives in the managed heap.
If you create a reference type that contains value types, such as an array, the values of its elements are stored in the managed heap rather than on the stack. When you retrieve data from the array, a copy of the element value is obtained for local use, and that copy is stored on the stack (see the sketch below). It is therefore not strictly accurate to say simply that "value types are stored on the stack, and reference types are stored in the managed heap".
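A minimal sketch of that last point (the variable names are made up for the example):
using System;

class ArrayCopyDemo
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3 };     // the array object and its int elements live on the managed heap
        int first = numbers[0];          // a copy of the element value is placed in the local (stack) variable
        first = 99;                      // changing the copy...
        Console.WriteLine(numbers[0]);   // ...leaves the array element untouched: prints 1
    }
}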
The difference between a value type and a reference type: a reference-type object is stored in a single location in the managed heap, existing somewhere in the heap and referenced by every variable that uses it; a value type, by contrast, is stored wherever it is used, so several copies of the value can exist.
For a reference type, if you declare a variable without using the new operator, the runtime does not allocate space for it on the managed heap; it only gives it a reference on the stack that contains a null value. For a value type, the runtime allocates space for it on the stack and invokes the default constructor to initialize the object's state.
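A short, hedged sketch of this difference, using two made-up types:
using System;

class RefThing { public int X; }         // hypothetical reference type
struct ValThing { public int X; }        // hypothetical value type

class DeclarationDemo
{
    static void Main()
    {
        RefThing r = null;               // no new: only a null reference on the stack, no object on the heap
        ValThing v = new ValThing();     // value type: space on the stack, default constructor zeroes the fields

        Console.WriteLine(r == null);    // True
        Console.WriteLine(v.X);          // 0
    }
}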
1. The stack and the managed heap
The Common Type System (CTS) distinguishes two basic kinds of types: value types and reference types. The fundamental difference between them is how they are stored in memory. .NET uses two different blocks of physical memory to store data: the stack and the managed heap. As shown in the following:
2. The type hierarchy
The CTS defines a type hierarchy that not only describes the different predefined types but also shows where user-defined types fit in the hierarchy.
Article from: http://www.cnblogs.com/shenfengok/archive/2011/09/06/2169306.html
C# stack and heap (heap & stack)