Synchronous Socket Implementation in .NET Socket Development

Source: Internet
Author: User

Reprinted from Http://dotnet.chinaitlab.com/ASPNET/731870.html

Many people think that the server side of a network application should never use synchronous sockets. In most cases that is true, but there are also scenarios where a synchronous socket can serve us better. We can consider using synchronous sockets in the following two scenarios.

One, the number of clients is small:

A small number here means the number of clients that will connect to the server at the same time, typically fewer than 50. In this case, we can consider using a synchronous socket plus threads to implement our server. This lets us write logically clearer code, and performance will not degrade too much.

Second, a large number of clients, but they are short connections:

A short connection is a scenario in which a client's connection is disconnected as soon as its request has been processed; the HTTP protocol, for example, uses short connections. HTTP creates a socket connection when the client makes a request and sends a URL request through that socket, and the server disconnects the connection after processing the request and sending back the corresponding page. In this scenario we can also use synchronous sockets to meet our needs.
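To make the short-connection pattern concrete, here is a minimal client-side sketch (my own illustration, not from the original article): the client connects, sends one request, reads one reply, and closes the socket immediately. The loopback address and port 8000 are only example values.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ShortConnectionClient
{
    static void Main()
    {
        // Connect, send one request, read one reply, then disconnect.
        // 127.0.0.1:8000 is only an example endpoint.
        using (Socket client = new Socket(AddressFamily.InterNetwork,
                                          SocketType.Stream, ProtocolType.Tcp))
        {
            client.Connect(new IPEndPoint(IPAddress.Loopback, 8000));

            byte[] request = Encoding.Default.GetBytes("hello");
            client.Send(request);

            byte[] buffer = new byte[1024];
            int count = client.Receive(buffer);
            Console.WriteLine(Encoding.Default.GetString(buffer, 0, count));

            // Short connection: shut down and close right after the exchange.
            client.Shutdown(SocketShutdown.Both);
        }
    }
}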

So how do we fulfill the two requirements mentioned above? I will implement each of them in a different way.

First, let's look at the first requirement, which I implement with a socket plus threads. The basic process is as follows:

First create a socket, bind it to an endpoint, and start listening. Next we create a thread in which an infinite loop accepts connection requests from clients. Each time a request is accepted, a new thread is created for that client, and in this thread another infinite loop receives data from that client. Let's look at the code:

First we create a socket to listen for client connections:

// Create a TCP socket, bind it to a local endpoint, and start listening.
Socket listener = new Socket(AddressFamily.InterNetwork,
    SocketType.Stream, ProtocolType.Tcp);
IPEndPoint locEP = new IPEndPoint(IPAddress.Any, port);   // "port" is the listening port number
listener.Bind(locEP);
listener.Listen(100);

Then create a thread to handle the client's connection request:

Thread acceptThread = new Thread(new ThreadStart(AcceptWorkThread));
acceptThread.Start();

private void AcceptWorkThread()
{
    // Mark as a background thread so it does not keep the process alive.
    Thread.CurrentThread.IsBackground = true;
    while (true)
    {
        // Block until a client connects.
        Socket accept = listener.Accept();
        IPEndPoint remoteEP = (IPEndPoint)accept.RemoteEndPoint;
        string recString = "Received a connection from " + remoteEP.Address.ToString() + ".";
        this.Invoke(new AddListItemHandler(this.addListItem), new string[] { recString });
        // Hand the accepted socket to a dedicated receive thread.
        Thread receiveThread = new Thread(new ParameterizedThreadStart(ReceiveWorkThread));
        receiveThread.Start(accept);
    }
}

Finally, let's take a look at how to receive data:

private void ReceiveWorkThread(object obj)
{
    Thread.CurrentThread.IsBackground = true;
    Socket socket = (Socket)obj;
    byte[] buffer = new byte[1024];
    while (true)
    {
        int receiveCount = socket.Receive(buffer);
        if (receiveCount > 0)
        {
            IPEndPoint remoteEP = (IPEndPoint)socket.RemoteEndPoint;
            string recString = "Message from client " + remoteEP.Address.ToString() + ": "
                + Encoding.Default.GetString(buffer, 0, receiveCount);
            this.Invoke(new AddListItemHandler(this.addListItem), new string[] { recString });
            // Echo the received data back to the client.
            socket.Send(buffer, receiveCount, SocketFlags.None);
        }
        else
        {
            // A zero-byte receive means the client has disconnected.
            socket.Close();
            break;
        }
    }
}

Well, the whole implementation is done.
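For readers who want to run the thread-per-client version outside a WinForms project, the following is a self-contained console sketch of the same design (my own adaptation: the this.Invoke/addListItem calls are replaced with Console.WriteLine, and port 8000 is only an example). The accept loop and the per-client receive loop follow the code above.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class ThreadPerClientServer
{
    static Socket listener;

    static void Main()
    {
        // Bind and listen (port 8000 is only an example).
        listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 8000));
        listener.Listen(100);

        // One thread accepts connections...
        Thread acceptThread = new Thread(AcceptWorkThread);
        acceptThread.IsBackground = true;
        acceptThread.Start();

        Console.ReadLine();   // keep the process alive
    }

    static void AcceptWorkThread()
    {
        while (true)
        {
            Socket accept = listener.Accept();
            Console.WriteLine("Received a connection from " +
                ((IPEndPoint)accept.RemoteEndPoint).Address);

            // ...and each client gets its own receive thread.
            Thread receiveThread = new Thread(ReceiveWorkThread);
            receiveThread.IsBackground = true;
            receiveThread.Start(accept);
        }
    }

    static void ReceiveWorkThread(object obj)
    {
        Socket socket = (Socket)obj;
        byte[] buffer = new byte[1024];
        while (true)
        {
            int receiveCount = socket.Receive(buffer);
            if (receiveCount > 0)
            {
                Console.WriteLine("Message from client: " +
                    Encoding.Default.GetString(buffer, 0, receiveCount));
                socket.Send(buffer, receiveCount, SocketFlags.None);   // echo back
            }
            else
            {
                socket.Close();   // a zero-byte receive means the client disconnected
                break;
            }
        }
    }
}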

Now let's take a look at the second requirement:

This scenario is implemented in a different way. Why not use the previous method? Let's analyze it. In the previous implementation, every client that connects gets its own thread, so a large number of clients would create far too many threads. With too many threads, Windows spends more CPU time switching thread contexts, which is why the previous implementation cannot handle many clients.

We also know that in this scenario every connection is a short connection, and the order of operations is fixed: accept -> receive -> send. That means we can complete the whole process in one method, which lets us use a pool of threads to achieve what we need. Okay, let's talk in code:

First we create a socket to listen for client connections, just as before:

Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
    ProtocolType.Tcp);
IPEndPoint locEP = new IPEndPoint(IPAddress.Any, port);   // "port" is the listening port number
listener.Bind(locEP);
listener.Listen(100);

Next we're going to create a thread pool:

// Create a pool of 30 worker threads; each one runs ClientWorkThread.
Thread[] clientThreadList = new Thread[30];
for (int i = 0; i < clientThreadList.Length; i++)
{
    clientThreadList[i] = new Thread(new ThreadStart(ClientWorkThread));
    clientThreadList[i].Start();
}

Finally, let's look at what the threads are going to do:

private void ClientWorkThread()
{
    byte[] buffer = new byte[1024];
    while (true)
    {
        // Block until a client connects; only one of the pooled threads gets each connection.
        Socket socket = listener.Accept();
        IPEndPoint remoteEP = (IPEndPoint)socket.RemoteEndPoint;
        string recString = "Received a connection from " + remoteEP.Address.ToString() + ".";
        this.Invoke(new AddListItemHandler(this.addListItem), new string[] { recString });
        int receiveCount = socket.Receive(buffer);
        if (receiveCount > 0)
        {
            recString = "Message from client " + remoteEP.Address.ToString() + ": "
                + Encoding.Default.GetString(buffer, 0, receiveCount);
            this.Invoke(new AddListItemHandler(this.addListItem), new string[] { recString });
            // Echo the received data back to the client.
            socket.Send(buffer, receiveCount, SocketFlags.None);
        }
        // Short connection: shut down and close as soon as the exchange is done.
        socket.Shutdown(SocketShutdown.Both);
        socket.Close();
    }
}

Why do we do it this way?

First we created a socket to listen for client connection requests, and then we created a pool of 30 threads. Each thread performs Accept, Receive, Send, and Close in turn, completing the connection, reception, sending, and shutdown operations.

Now assume a client connects to the server. One thread accepts the request, receives the data the client sends, processes it, sends the response back, closes the connection, and then returns to waiting for the next connection. The other 29 threads are still blocked waiting to accept, because they did not accept this request.
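As a side note, the same short-connection handling can also be built on the CLR's built-in System.Threading.ThreadPool instead of a hand-made array of 30 accept threads. The sketch below (my own variation, not the original article's code) uses a single accept loop and queues each accepted socket to a pooled worker that performs the same receive, send, and close sequence.

using System.Net.Sockets;
using System.Threading;

class PooledShortConnectionServer
{
    // One accept loop; each accepted socket is handled on a pooled worker thread.
    public static void AcceptLoop(Socket listener)
    {
        while (true)
        {
            Socket socket = listener.Accept();
            // The CLR thread pool reuses a small set of threads,
            // so no thread is created per connection.
            ThreadPool.QueueUserWorkItem(HandleClient, socket);
        }
    }

    static void HandleClient(object state)
    {
        Socket socket = (Socket)state;
        byte[] buffer = new byte[1024];
        int count = socket.Receive(buffer);
        if (count > 0)
        {
            socket.Send(buffer, count, SocketFlags.None);   // echo the request back
        }
        // Short connection: close as soon as the exchange is done.
        socket.Shutdown(SocketShutdown.Both);
        socket.Close();
    }
}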
