Watch for memory spikes when using asynchronous sockets
This is an article I read online. Original address: http://morganchengmo.spaces.live.com/blog/cns!9950ce918939932e!3022.entry
In .NET, memory is managed by the runtime, and programmers generally do not need to worry about memory leaks. With asynchronous sockets, however, this is not entirely reliable: although there is no memory leak in the strict sense, a memory spike with similar symptoms can appear.
According to KB947862 (http://support.microsoft.com/kb/947862), the asynchronous APIs of Socket and NetworkStream are unreliable, which is a serious problem!
Let's take a look at what the Stream asynchronous read interface looks like:
public virtual IAsyncResult BeginRead(
    byte[] buffer, int offset, int count,
    AsyncCallback callback, object state)
To use this API, we must first allocate a byte array to pass as the buffer parameter. After BeginRead is called, the callback should in theory be invoked when data arrives on the stream or when the operation times out. However, according to the KB, if the remote side stops performing I/O, this callback may never be invoked. To prevent the byte array passed as the buffer parameter from being garbage collected during the operation, .NET references it through an internal System.Threading.OverlappedData instance; one OverlappedData instance is created per call and released when the asynchronous operation completes. Now, because the callback may never be invoked, the asynchronous operation never ends, so the corresponding OverlappedData is never released. Likewise, the allocated byte array is never reclaimed by the GC; in effect, that byte array has leaked.
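A minimal sketch of the pattern being discussed (class name and buffer size are illustrative): each posted receive hands a buffer to the runtime, which stays referenced via OverlappedData until the matching EndReceive completes the operation in the callback.

```csharp
using System;
using System.Net.Sockets;

class Receiver
{
    private readonly Socket socket;
    // Referenced by an internal OverlappedData instance while a receive is outstanding.
    private readonly byte[] buffer = new byte[4096];

    public Receiver(Socket connectedSocket)
    {
        socket = connectedSocket;
    }

    public void Start()
    {
        // Posts an asynchronous receive; the buffer cannot be collected
        // until the matching EndReceive completes the operation.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        try
        {
            // Completes the Begin/End pair, allowing the OverlappedData to be released.
            int bytesRead = socket.EndReceive(ar);
            if (bytesRead > 0)
            {
                // ... process buffer[0..bytesRead) ...
                Start(); // post the next receive
            }
        }
        catch (SocketException)
        {
            socket.Close();
        }
        catch (ObjectDisposedException)
        {
            // Socket was already closed elsewhere.
        }
    }
}
```

If the remote side goes silent and the callback never fires, the receive never completes and the buffer above stays rooted, which is exactly the behavior described in the KB.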
Recently, I found that one of my programs was using an unusually large amount of memory. Using WinDbg, I could see that the heap contained an abnormal number of OverlappedData and byte[] instances, because NetworkStream.BeginRead was being used.
The workaround given in the KB is as follows:
The .NET library that allows asynchronous IO with .NET sockets (Socket.BeginSend / Socket.BeginReceive / NetworkStream.BeginRead / NetworkStream.BeginWrite) must have an upper bound on the amount of buffers outstanding (either send or receive) with their asynchronous IO. The network application should have an upper bound on the number of *outstanding* asynchronous IO that it posts.
This "solution" says nothing at all. What is the bound? Is there a specific number? Should every programmer who uses this API try one number after another? I don't think any such "upper bound" can avoid this problem.
The author believes that the "memory spike" mentioned in MSDN is actually a memory leak. Let's see what is going on:
Using this asynchronous model, I create a socket and call BeginReceive on it, supplying a callback (which must call EndReceive on that socket). If the callback is invoked within a given time (the timeout), EndReceive is called inside it, completing a Begin/End pair; this is the normal case. In the abnormal case, the callback is not invoked within the timeout, and our program cannot wait forever, so I should abort the operation, for example by closing the socket. In theory, .NET should release all related resources at that point, but as I said in the original article, it may not. The .NET implementation has problems: in some cases, even though we have released our resources at the application level, .NET does not consider the operation finished, so it still holds references to the related resources, such as the byte array. My application no longer references that byte array, and the socket is closed. In other words, .NET should release these resources, but it still retains a reference to the byte array, so the GC cannot reclaim it.
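The abort-on-timeout scenario described above can be sketched as follows (a sketch under assumptions: the timer wiring and class names are illustrative, not from the original). Closing the socket forces the pending receive to complete with an error, which is the application-level cleanup the author performs:

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

class TimedReceiver
{
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[4096];
    private Timer timeoutTimer;

    public TimedReceiver(Socket connectedSocket)
    {
        socket = connectedSocket;
    }

    public void ReceiveWithTimeout(TimeSpan timeout)
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
        // If the callback never fires within the timeout, close the socket
        // to force the pending operation to complete (with an error).
        timeoutTimer = new Timer(_ => socket.Close(), null, timeout, Timeout.InfiniteTimeSpan);
    }

    private void OnReceive(IAsyncResult ar)
    {
        timeoutTimer?.Dispose();
        try
        {
            // Call EndReceive even after Close: it throws, but it completes
            // the Begin/End pair so the runtime can release its bookkeeping.
            int bytesRead = socket.EndReceive(ar);
            // ... process buffer[0..bytesRead) ...
        }
        catch (ObjectDisposedException)
        {
            // Socket was closed by the timeout; the operation was aborted.
        }
        catch (SocketException)
        {
            socket.Close();
        }
    }
}
```

The author's complaint is precisely that even after this cleanup, .NET may still hold the byte array through its internal OverlappedData in some cases.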
The network application should have an upper bound on the number of *outstanding* asynchronous IO that it posts.
What this sentence means is that a program should not keep too many asynchronous I/O operations outstanding. But a program may have many outstanding asynchronous I/O operations at once, and no concrete limit is given, so the sentence offers no practical guidance; it merely papers over a .NET bug.
However, some readers agree with Microsoft's statement:
The network application should have an upper bound on the number of *outstanding* asynchronous IO that it posts.
This is not unreasonable; think about it carefully. Setting the bug aside, for a given connection, isn't 1 the upper limit? If you are expecting a response, you call BeginReceive once and wait; why would you post another receive before the first completes (assuming your protocol is sound)?
To use asynchronous I/O, you must maintain a clear state machine for each connection; otherwise, the memory usage you see will become a messy spike.
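The per-connection state machine suggested above can be sketched like this (a minimal illustration; the state names and buffer size are assumptions, not from the original). It guarantees at most one outstanding receive per connection, so at most one buffer per connection is ever held by the runtime:

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

// A minimal per-connection state machine: at most one receive is ever
// outstanding, so at most one buffer per connection is held by the runtime.
class Connection
{
    private const int Idle = 0, Receiving = 1, Closed = 2;

    private readonly Socket socket;
    private readonly byte[] buffer = new byte[4096];
    private int state = Idle;

    public Connection(Socket connectedSocket)
    {
        socket = connectedSocket;
    }

    public bool TryBeginReceive()
    {
        // Atomically move Idle -> Receiving; refuse to post a second receive.
        if (Interlocked.CompareExchange(ref state, Receiving, Idle) != Idle)
            return false;
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
        return true;
    }

    private void OnReceive(IAsyncResult ar)
    {
        try
        {
            int bytesRead = socket.EndReceive(ar);
            Interlocked.Exchange(ref state, Idle);
            if (bytesRead == 0)
                CloseConnection(); // remote side shut down gracefully
            // else: process buffer[0..bytesRead), then call TryBeginReceive() again
        }
        catch (SocketException)
        {
            CloseConnection();
        }
        catch (ObjectDisposedException)
        {
            // Already closed.
        }
    }

    private void CloseConnection()
    {
        Interlocked.Exchange(ref state, Closed);
        socket.Close();
    }
}
```

With this discipline, the number of outstanding operations is bounded by the number of live connections, which is the kind of upper bound the KB is asking for.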
A memory spike is not the same as a memory leak. If you tear down the connection and release the auxiliary variables, .NET will release the memory automatically. This is a caller issue, not a Microsoft issue. The reason this isn't considered a problem in C++ is that there you must free memory manually, so the caller naturally assumes it is his responsibility.
In short, memory spikes are something to watch for when using asynchronous sockets. In such a system, you still need to manually manage the resources that cannot otherwise be released promptly.