C# Network Programming, Illustrated with Code

Source: Internet
Author: User
Network programming is a very important part of today's software development. This article briefly introduces its concepts and practice; readers who need it can refer to the material below.

Contents:

Basics
Socket programming
Multithreading concurrency
Blocking synchronous IO
Asynchronous IO
Non-blocking synchronous IO
Callback-based asynchronous IO

Basics
Network programming is a very important part of today's software development; this article briefly introduces its concepts and practice.
A socket is a network programming interface: a layer of abstraction over the transport-layer TCP and UDP protocols that exposes a friendly API for communication between processes, whether on one machine or across machines.

Socket programming

Network programming involves client and server roles. Take opening a web page in a browser as an example: from a programming point of view, the client (the browser) initiates a socket request to the server, and the server returns content that the browser parses and displays. Before the client and server exchange data, three confirmations formally establish the connection; this is the three-way handshake.

    1. The client sends a message asking whether the server is ready.

    2. The server responds: I'm ready, are you?

    3. The client responds: I'm ready too, let's communicate.

TCP/IP is the basic protocol suite for inter-network communication. The socket interfaces exposed by different programming languages and operating systems are used in similar ways, but their internal implementations differ, for example epoll on Linux and IOCP on Windows.

Server side

    • Instantiate a socket

    • Bind it to an address and port on the operating system

    • Start listening on the bound port

    • Wait for client connections

IPEndPoint ip = new IPEndPoint(IPAddress.Any, 6389);
Socket listenSocket = new Socket(ip.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
listenSocket.Bind(ip);
listenSocket.Listen(10); // backlog: maximum length of the pending-connection queue
listenSocket.Accept();

The Listen method takes an int parameter: the maximum number of pending connections, that is, connections that have been established but not yet accepted. Each call to Accept takes one connection from this waiting queue. A server usually has to serve many clients, so it loops, taking connections from the queue and handling their receives and sends.

while (true)
{
    var accept = listenSocket.Accept();
    byte[] buffer = new byte[100];
    accept.Receive(buffer);
    accept.Send(buffer);
}
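The article never shows the client end of this exchange. A minimal, self-contained sketch (not from the original; names like EchoDemo and the use of a loopback address and an OS-chosen port are illustrative) pairs the accept/receive/send loop above with a client that connects, sends a message, and reads the echo. The server here handles a single connection on a background thread so the example terminates:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class EchoDemo
{
    // Runs a one-shot echo server on loopback and sends it a message.
    public static string Run(string message)
    {
        var listenSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listenSocket.Bind(new IPEndPoint(IPAddress.Loopback, 0)); // port 0: the OS picks a free port
        listenSocket.Listen(10);
        int port = ((IPEndPoint)listenSocket.LocalEndPoint).Port;

        // Server side: the loop from the article, reduced to one iteration.
        var server = new Thread(() =>
        {
            var accept = listenSocket.Accept();
            byte[] buffer = new byte[100];
            int read = accept.Receive(buffer);
            accept.Send(buffer, read, SocketFlags.None); // echo back what was received
            accept.Close();
        });
        server.Start();

        // Client side: connect, send, read the echo back.
        var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect(new IPEndPoint(IPAddress.Loopback, port));
        client.Send(Encoding.UTF8.GetBytes(message));
        byte[] reply = new byte[100];
        int got = client.Receive(reply);
        client.Close();
        server.Join();
        listenSocket.Close();
        return Encoding.UTF8.GetString(reply, 0, got);
    }

    static void Main()
    {
        Console.WriteLine(Run("hello"));
    }
}
```

Binding to port 0 and reading LocalEndPoint avoids hard-coding a port that may be in use; a real server would bind a fixed, well-known port as in the listing above.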

Multithreading concurrency
The server program above handles both receiving and sending on the current thread, which means one client connection is fully processed before the next connection is handled. If the current connection performs IO such as database access or file reads and writes, this greatly wastes the server's resources and reduces its throughput.

while (true)
{
    var accept = listenSocket.Accept();
    ThreadPool.QueueUserWorkItem((obj) =>
    {
        byte[] receive = new byte[100];
        accept.Receive(receive);
        accept.Send(receive);
    });
}

In the example, when a new connection request arrives, Accept() returns the connected socket and a separate thread processes its receives and sends, so the server handles multiple clients concurrently. Under high concurrency, however, a plain thread-per-connection design is problematic: with thousands of client connections there would be as many threads, each thread's stack consumes memory, and thread context switching adds load, so the server can easily become overloaded, throughput drops sharply, and in severe cases the machine goes down. The example above uses the system ThreadPool, which fixes the number of threads to a limit (by default on the order of 1000) instead of creating threads without bound; requests beyond the available threads wait in the thread pool's queue.
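The thread-pool behaviour described above can be observed with a small sketch (PoolDemo and its counter are illustrative stand-ins for connection handling): work items are queued to the pool rather than each getting a dedicated new thread, and the pool's thread cap can be read with ThreadPool.GetMaxThreads:

```csharp
using System;
using System.Threading;

class PoolDemo
{
    // Queues several work items and waits for all of them to finish; the
    // pool, not our code, supplies and caps the threads that run them.
    public static int Run(int items)
    {
        int handled = 0;
        using var done = new CountdownEvent(items);
        for (int i = 0; i < items; i++)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Interlocked.Increment(ref handled); // stand-in for handling one connection
                done.Signal();
            });
        }
        done.Wait(); // block until every queued item has run
        return handled;
    }

    static void Main()
    {
        ThreadPool.GetMaxThreads(out int worker, out int io);
        Console.WriteLine($"pool cap: {worker} worker threads");
        Console.WriteLine(Run(8));
    }
}
```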
Under UNIX there are two similar approaches:

Fork a new process to handle the client connection:

int connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &cliaddr_len);
pid_t pid = fork();
if (pid == 0) {
    /* child process: handle the connection */
}

Create a new thread to handle the current connection:

int clientsockfd = accept(serversockfd, (struct sockaddr *)&clientaddress, (socklen_t *)&clientlen);
if (pthread_create(&thread, NULL, recdata, &clientsockfd) != 0) {
    /* handle the error */
}

Blocking synchronous IO
The model used in the examples above, blocking synchronous IO, is simple and convenient to use.

while (true)
{
    var accept = listenSocket.Accept();
    byte[] receive = new byte[100];
    accept.Receive(receive);
    accept.Send(receive);
}

From the time Receive is called until the client's data actually arrives, the function blocks and waits. The flow is as follows:

    1. The client sends data.

    2. The data travels over the WAN/LAN to the server machine's network adapter buffer.

    3. The NIC driver raises an interrupt to the CPU.

    4. The CPU copies the data into a kernel buffer.

    5. The CPU then copies the data from the kernel buffer into the user buffer, i.e. the receive byte array above.

At this point processing is complete and the next connection request can be handled. Calling Send also blocks the current thread while it copies the user buffer (the send byte array) into the kernel's TCP send buffer. That send buffer has a size limit; if the data to be sent exceeds the available space, Send waits until the buffer has room for the full copy before the program continues with subsequent connection requests.
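One way to see that Receive really blocks is to set a receive timeout on a connected socket to which nothing is sent: the blocked call then ends in a SocketException instead of waiting forever. A minimal sketch, loopback only (BlockingDemo and the 200 ms timeout are illustrative choices):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class BlockingDemo
{
    // Connects a client to a server that never sends, then shows that
    // Receive blocks until the configured timeout cuts the wait short.
    public static bool ReceiveTimesOut()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        using var serverSide = listener.AcceptTcpClient(); // the server sends nothing

        client.Client.ReceiveTimeout = 200; // milliseconds
        try
        {
            client.Client.Receive(new byte[16]); // blocks: no data in the kernel buffer
            return false;
        }
        catch (SocketException)
        {
            return true; // the blocking wait ended in a timeout, as expected
        }
        finally
        {
            client.Close();
            listener.Stop();
        }
    }

    static void Main() => Console.WriteLine(ReceiveTimesOut());
}
```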

Asynchronous IO
As mentioned above, using multiple threads over blocking synchronous IO gives a concurrent server. This model fits well when the number of connections is small; once there are too many connections, performance drops sharply. Most server-side networking software therefore uses asynchronous IO to improve performance.

Synchronous IO mode: initiate a receive request, wait, wait, receive succeeds.
Asynchronous IO mode: initiate a receive request, return immediately; an event or callback notifies later.
Asynchronous IO means a single thread can handle multiple requests: after a connection initiates a receive request, the current thread can immediately do other work, and when the data arrives the thread is notified to process it.
Receiving data divides into two parts:

The data arrives from the other machine into the kernel buffer
The kernel buffer is copied into the user buffer
In terms of the sample code, the second part is:

byte[] msg = new byte[256];
socket.Receive(msg);

The purpose of splitting reception into these two parts is to make it easy to distinguish the other IO modes. For a user program, the difference between synchronous IO and asynchronous IO is whether the second part requires waiting.

Non-blocking synchronous IO
Non-blocking synchronous IO extends synchronous IO; the name splits into two parts:

    • Non-blocking: the first part above, "the data arrives from the other machine into the kernel buffer", does not block.

    • Synchronous IO: the second part, "the kernel buffer is copied into the user buffer", still waits.

Since the first part is non-blocking, you need a way to know when the kernel buffer has data. With non-blocking mode set, calling Receive returns immediately with an indication when the kernel buffer holds no data yet; once data is present, the second part begins, copying from the kernel buffer to the user program buffer. Because a status comes back immediately, the program can poll to determine whether the kernel buffer is ready.

To set the non-blocking mode reference code:

SocketInformation sif = new SocketInformation();
sif.Options = SocketInformationOptions.NonBlocking;
sif.ProtocolInformation = new byte[24];
Socket socket = new Socket(sif);

(Setting the Blocking property of an existing socket to false achieves the same.)

Polling reference code:

while (true)
{
    byte[] msg = new byte[256];
    int read;
    try
    {
        read = socket.Receive(msg); // returns immediately in non-blocking mode
    }
    catch (SocketException e) when (e.SocketErrorCode == SocketError.WouldBlock)
    {
        continue; // kernel buffer not ready yet; keep polling
    }
    // do something with the read bytes in msg
}

This approach is almost obsolete; a basic understanding of it is enough.

Callback-based Asynchronous IO
As described above:

Asynchronous IO mode: initiate a receive request, return immediately; an event or callback notifies later.
By the time the callback executes, the data is already ready in the user program buffer; the callback code runs the logic that handles that data.

To make a receive request:

static byte[] msg = new byte[256];
socket.BeginReceive(msg, 0, msg.Length, 0, new AsyncCallback(ReadCallback), socket);

The data is processed in the callback function:

public static void ReadCallback(IAsyncResult ar)
{
    var socket = (Socket)ar.AsyncState;
    int read = socket.EndReceive(ar);
    DoSomething(msg);
    socket.BeginReceive(msg, 0, msg.Length, 0, new AsyncCallback(ReadCallback), socket);
}

When the callback function executes, the data is ready; EndReceive must be called to complete the receive request. For the server to keep handling data from its clients, BeginReceive is then issued again to request the next receive.
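A runnable end-to-end sketch of this callback pattern, assuming a loopback connection and illustrative names (CallbackDemo, Run), might look like the following; the event is used only so the demo can wait for the callback before exiting:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class CallbackDemo
{
    static byte[] msg = new byte[256];
    static ManualResetEventSlim received = new ManualResetEventSlim();
    static string text;

    public static string Run(string payload)
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect(IPAddress.Loopback, port);
        var serverSide = listener.AcceptTcpClient();

        // Issue the receive request; BeginReceive returns immediately.
        client.BeginReceive(msg, 0, msg.Length, 0, new AsyncCallback(ReadCallback), client);

        // The peer sends; the callback fires on a pool thread once the data
        // has been copied into the user buffer (msg).
        byte[] bytes = Encoding.UTF8.GetBytes(payload);
        serverSide.GetStream().Write(bytes, 0, bytes.Length);
        received.Wait();

        serverSide.Close();
        client.Close();
        listener.Stop();
        return text;
    }

    static void ReadCallback(IAsyncResult ar)
    {
        var socket = (Socket)ar.AsyncState;
        int read = socket.EndReceive(ar); // complete the request; data is in msg
        text = Encoding.UTF8.GetString(msg, 0, read);
        received.Set();
    }

    static void Main() => Console.WriteLine(Run("hello"));
}
```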

The callback function here is triggered on another thread, so shared data must be protected with a lock to prevent races. You can confirm this by printing the thread id inside the callback:

Console.WriteLine(Thread.CurrentThread.ManagedThreadId);
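As a sketch of this locking point (LockDemo is an illustrative name, and ThreadPool work items stand in for socket callbacks), the lock statement serializes access to the shared state so concurrent callbacks do not race:

```csharp
using System;
using System.Threading;

class LockDemo
{
    // Callbacks run on pool threads, so shared state needs a lock.
    static readonly object gate = new object();
    static int total;

    public static int Run(int callbacks)
    {
        total = 0;
        using var done = new CountdownEvent(callbacks);
        for (int i = 0; i < callbacks; i++)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                lock (gate) { total += 1; } // protect the shared counter from races
                done.Signal();
            });
        }
        done.Wait();
        return total; // with the lock, no increments are lost
    }

    static void Main() => Console.WriteLine(Run(100));
}
```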