A Scalable, Fully Asynchronous C/S Architecture Based on Qt: Server Implementation (II), Network Transmission


2. The Network Transmission Module

Corresponding code namespace: namespace ZPNetwork

Corresponding code directory: \zoompipeline_funcsvr\network

2.1 Module Structure


The network transmission module manages the listeners and, based on the current load of each transmission thread, dispatches the incoming client socket descriptor to an idle transport thread, which performs the accept operation. The module consists of the following classes.



1. ZP_Net_Engine class: derived from QObject. The external interface class of the module, and also the manager of the other objects. It provides interfaces to set up listeners and configure the thread pool.

2. ZP_NetListenThread class: derived from QObject. Bound to the event loop of a listener thread, it continuously accepts client connection requests. The class emits the socket descriptor (socketDescriptor) in a signal; the ZP_Net_Engine class then performs load balancing and selects the transmission thread (ZP_NetTransThread) with the least current load to accept the connection.



3. ZP_NetTransThread class: derived from QObject. Bound to the event loop of a transport thread, it is responsible for the actual data transmission. One ZP_NetTransThread can handle sending and receiving for multiple clients.



4. ZP_TcpServer class: derived from QTcpServer. It overloads ZP_TcpServer::incomingConnection so that the accept operation is not performed in the listening thread; instead, the evt_NewClientArrived signal is emitted directly, carrying the socket descriptor (socketDescriptor). Load balancing is then performed by the ZP_Net_Engine class, which selects the transmission thread (ZP_NetTransThread) with the least current load to accept the connection.



The collaboration diagram of these four classes is shown below:


2.2 System Principles

To provide a TCP service based on a thread pool, the ZP_Net_Engine class has several important members. Below, following the process of a client-initiated connection, we introduce step by step how these classes cooperate.

2.2.1 Listener and Listener thread

1. The listener: ZP_TcpServer
When the system runs, listening is handled by a QTcpServer-derived class called ZP_TcpServer. This class overloads QTcpServer's incomingConnection() method, which is called immediately when a client on the network initiates a connection. The derived class does not create the socket directly; it only emits a signal called evt_NewClientArrived. This signal carries the socket descriptor to the receiver, which uses it to create the socket in another thread. The process is described in section 2.2.2.

2. The listener thread object: ZP_NetListenThread
An instance of the ZP_TcpServer class is managed through the pointer m_tcpServer in the ZP_NetListenThread class.

m_tcpServer is a pointer to an instance of the ZP_TcpServer class (see zp_netlistenthread.h).

The instance is created in ZP_NetListenThread::startListen(), a key function in which the ZP_TcpServer object is constructed. The core code is as follows:

m_tcpServer = new ZP_TcpServer(this);
connect(m_tcpServer, &ZP_TcpServer::evt_NewClientArrived,
        this, &ZP_NetListenThread::evt_NewClientArrived,
        Qt::QueuedConnection);
In the above two lines of code, the first line creates the listening service. The second line connects the evt_NewClientArrived signal of the listening service directly to the signal of the same name in ZP_NetListenThread.


3. The module interface class that operates the listeners: ZP_Net_Engine
The ZP_NetListenThread class itself is derived from QObject. It is not a thread object; instead, it is "bound" to run inside a thread object (a QThread).

A process can listen on several ports, each corresponding to a different ZP_NetListenThread object. These listener thread objects are managed by the ZP_Net_Engine class and stored in the following two member variables:

// This map stores the listen-thread objects
QMap<QString, ZP_NetListenThread *> m_map_netListenThreads;
// Internal threads that hold each listen-thread object's message queue
QMap<QString, QThread *> m_map_netInternalListenThreads;

The first map stores the listener object for each port; the second stores the QThread that hosts it.


Because the thread that issues listener commands is the main (UI) thread while the thread that runs the task is a worker thread, commands are not issued through direct function calls. Instead, Qt signals and slots are used. For example, when a UI button is clicked, the startListen signal is emitted, and the startListen slot of ZP_NetListenThread responds in turn.

It is important to note that Qt's signal-and-slot system is a broadcast system: when a ZP_Net_Engine instance manages multiple ZP_NetListenThread objects, a signal emitted by ZP_Net_Engine is received by all of them. Therefore, the signal and the slot carry a unique identifier that indicates which object the signal is intended to operate on. This technique is used many times in similar situations.

void ZP_Net_Engine::AddListeningAddress(QString id, const QHostAddress &address,
                                        quint16 nPort, bool bSSLConn /*= true*/)
{
    if (m_map_netListenThreads.find(id) == m_map_netListenThreads.end())
    {
        // start thread
        QThread *pThread = new QThread(this);
        ZP_NetListenThread *pListenObj = new ZP_NetListenThread(id, address, nPort, bSSLConn);
        pThread->start();
        //m_mutex_listen.lock();
        m_map_netInternalListenThreads[id] = pThread;
        m_map_netListenThreads[id] = pListenObj;
        //m_mutex_listen.unlock();
        // bind object to new thread
        connect(this, &ZP_Net_Engine::startListen, pListenObj, &ZP_NetListenThread::startListen, Qt::QueuedConnection);
        connect(this, &ZP_Net_Engine::stopListen, pListenObj, &ZP_NetListenThread::stopListen, Qt::QueuedConnection);
        connect(pListenObj, &ZP_NetListenThread::evt_Message, this, &ZP_Net_Engine::evt_Message, Qt::QueuedConnection);
        connect(pListenObj, &ZP_NetListenThread::evt_ListenClosed, this, &ZP_Net_Engine::on_ListenClosed, Qt::QueuedConnection);
        connect(pListenObj, &ZP_NetListenThread::evt_NewClientArrived, this, &ZP_Net_Engine::on_New_Arrived_Client, Qt::QueuedConnection);
        pListenObj->moveToThread(pThread);
        // start listening immediately
        emit startListen(id);
    }
    else
        emit evt_Message(this, "Warning>" + QString(tr("This ID has been used.")));
}


2.2.2 Accepting Connections

After a client initiates a connection request, the ZP_TcpServer incomingConnection method is triggered first.

In the following method, the socket descriptor is emitted as the parameter of the signal.


void ZP_TcpServer::incomingConnection(qintptr socketDescriptor)
{
    emit evt_NewClientArrived(socketDescriptor);
}
The slot corresponding to the above signal is ZP_Net_Engine::on_New_Arrived_Client. In this function, the network module first determines the least-loaded thread among the currently available transport threads, and then forwards the socket descriptor to that thread. The core code of this part:
void ZP_Net_Engine::on_New_Arrived_Client(qintptr socketDescriptor)
{
    ZP_NetListenThread *pSource = qobject_cast<ZP_NetListenThread *>(sender());
    if (!pSource)
    {
        emit evt_Message(this, "Warning>" + QString(tr("Non-ZP_NetListenThread type detected.")));
        return;
    }
    emit evt_Message(this, "Info>" + QString(tr("Incoming client arrived.")));
    int nsz = m_vec_NetTransThreads.size();
    int nMinPay = 0x7fffffff;
    int nMinIdx = -1;
    for (int i = 0; i < nsz && nMinPay != 0; i++)
    {
        if (m_vec_NetTransThreads[i]->isActive() == false ||
            m_vec_NetTransThreads[i]->SSLConnection() != pSource->bSSLConn())
            continue;
        int nPat = m_vec_NetTransThreads[i]->CurrentClients();
        if (nPat < nMinPay)
        {
            nMinPay = nPat;
            nMinIdx = i;
        }
        qDebug() << i << " " << nPat << " " << nMinIdx;
    }
    ...
    if (nMinIdx >= 0 && nMinIdx < nsz)
        emit evt_EstablishConnection(m_vec_NetTransThreads[nMinIdx], socketDescriptor);
    else
        emit evt_Message(this, "Warning>" + QString(tr("Need Trans-thread object for clients.")));
}
In the above code, the evt_EstablishConnection signal carries the accepting thread object, chosen by the load-balancing policy, together with the socketDescriptor.

This signal is broadcast to all transport thread objects. In the incomingConnection slot of each object, the socket object used for the transfer is actually created. Note that this slot function executes in the event loop of its own transport thread, so the socket it creates belongs directly to that thread.

/**
 * @brief This slot deals with multi-threaded client socket accept.
 * Accept work starts from ZP_NetListenThread::m_tcpServer and ends with this method.
 * The socketDescriptor is delivered from ZP_NetListenThread (a listening thread)
 * to ZP_Net_Engine (normally in the main GUI thread), and then to ZP_NetTransThread.
 *
 * @param threadid if threadid is not equal to this object, this message is just omitted.
 * @param socketDescriptor socketDescriptor for the incoming client.
 */
void ZP_NetTransThread::incomingConnection(QObject *threadid, qintptr socketDescriptor)
{
    if (threadid != this)
        return;
    QTcpSocket *sock_client = 0;
    if (m_bSSLConnection)
        sock_client = new QSslSocket(this);
    else
        sock_client = new QTcpSocket(this);
    if (sock_client)
    {
        // initial content
        if (true == sock_client->setSocketDescriptor(socketDescriptor))
        {
            connect(sock_client, &QTcpSocket::readyRead, this, &ZP_NetTransThread::new_data_recieved, Qt::QueuedConnection);
            connect(sock_client, &QTcpSocket::disconnected, this, &ZP_NetTransThread::client_closed, Qt::QueuedConnection);
            connect(sock_client, SIGNAL(error(QAbstractSocket::SocketError)),
                    this, SLOT(displayError(QAbstractSocket::SocketError)), Qt::QueuedConnection);
            connect(sock_client, &QTcpSocket::bytesWritten, this, &ZP_NetTransThread::some_data_sended, Qt::QueuedConnection);
            m_mutex_protect.lock();
            m_clientList[sock_client] = 0;
            m_mutex_protect.unlock();
            if (m_bSSLConnection)
            {
                QSslSocket *pSslSock = qobject_cast<QSslSocket *>(sock_client);
                assert(pSslSock != NULL);
                QString strCerPath = QCoreApplication::applicationDirPath() + "/svr_cert.pem";
                QString strPkPath = QCoreApplication::applicationDirPath() + "/svr_privkey.pem";
                pSslSock->setLocalCertificate(strCerPath);
                pSslSock->setPrivateKey(strPkPath);
                connect(pSslSock, &QSslSocket::encrypted, this, &ZP_NetTransThread::on_encrypted, Qt::QueuedConnection);
                pSslSock->startServerEncryption();
            }
            emit evt_NewClientConnected(sock_client);
            emit evt_Message(sock_client, "Info>" + QString(tr("Client accepted.")));
        }
        else
            sock_client->deleteLater();
    }
}

2.2.3 Data Reception

After the socket has been created successfully, the sending and receiving of data are executed in the transport thread. When the socket receives data, it simply emits the signal evt_Data_recieved:

void ZP_NetTransThread::new_data_recieved()
{
    QTcpSocket *pSock = qobject_cast<QTcpSocket *>(sender());
    if (pSock)
    {
        QByteArray array = pSock->readAll();
        int sz = array.size();
        g_mutex_sta.lock();
        g_bytesRecieved += sz;
        g_secRecieved += sz;
        g_mutex_sta.unlock();
        emit evt_Data_recieved(pSock, array);
    }
}

2.2.4 Data transmission

Although the Qt socket has its own internal cache, so that a write of any size will succeed, this implementation still uses an additional queue, caching fixed-length fragments and sending them sequentially. The advantage is that this gives users of the code an opportunity to check the buffer size and do some persistence work. For example, when the queue exceeds 100 MB, the data can be cached on disk instead of being kept in memory.

This policy is implemented with two cache members:

// sending buffer, holds byte arrays
QMap<QObject *, QList<QByteArray> > m_buffer_sending;
QMap<QObject *, QList<qint64> > m_buffer_sending_offset;

The first map stores the send queue for each socket; the second stores the send offset within each data block. This is a performance flaw: a better approach would be to derive your own class from QTcpSocket and store each socket's cache directly in the derived class instance. This implementation uses the QTcpSocket and QSslSocket classes directly, so there is a certain performance loss.

The slot method SendDataToClient is responsible for accepting requests to send data.

void ZP_NetTransThread::SendDataToClient(QObject *objClient, QByteArray dtarray)
{
    m_mutex_protect.lock();
    if (m_clientList.find(objClient) == m_clientList.end())
    {
        m_mutex_protect.unlock();
        return;
    }
    m_mutex_protect.unlock();
    QTcpSocket *pSock = qobject_cast<QTcpSocket *>(objClient);
    if (pSock && dtarray.size())
    {
        QList<QByteArray> &list_sock_data = m_buffer_sending[pSock];
        QList<qint64> &list_offset = m_buffer_sending_offset[pSock];
        if (list_sock_data.empty() == true)
        {
            qint64 bytesWritten = pSock->write(dtarray.constData(), qMin(dtarray.size(), m_nPayLoad));
            if (bytesWritten < dtarray.size())
            {
                list_sock_data.push_back(dtarray);
                list_offset.push_back(bytesWritten);
            }
        }
        else
        {
            list_sock_data.push_back(dtarray);
            list_offset.push_back(0);
        }
    }
}

In the above function, the queue is checked for emptiness. If it is empty, the QTcpSocket::write method is called directly to emit a chunk of at most m_nPayLoad bytes. When these bytes have been sent, the QTcpSocket::bytesWritten signal is triggered and responded to by the following slot.

/**
 * @brief This slot is called when the internal socket has successfully
 * sent some data. In this method, the ZP_NetTransThread object checks
 * the sending queue, and sends more data to the buffer.
 *
 * @param wsended
 */
void ZP_NetTransThread::some_data_sended(qint64 wsended)
{
    g_mutex_sta.lock();
    g_bytesSent += wsended;
    g_secSent += wsended;
    g_mutex_sta.unlock();
    QTcpSocket *pSock = qobject_cast<QTcpSocket *>(sender());
    if (pSock)
    {
        emit evt_Data_transferred(pSock, wsended);
        QList<QByteArray> &list_sock_data = m_buffer_sending[pSock];
        QList<qint64> &list_offset = m_buffer_sending_offset[pSock];
        while (list_sock_data.empty() == false)
        {
            QByteArray &arraySending = *list_sock_data.begin();
            qint64 &currentOffset = *list_offset.begin();
            qint64 nTotalBytes = arraySending.size();
            assert(nTotalBytes >= currentOffset);
            qint64 nBytesWritten = pSock->write(arraySending.constData() + currentOffset,
                                               qMin((int)(nTotalBytes - currentOffset), m_nPayLoad));
            currentOffset += nBytesWritten;
            if (currentOffset >= nTotalBytes)
            {
                list_offset.pop_front();
                list_sock_data.pop_front();
            }
            else
                break;
        }
    }
}

2.2.5 Other Jobs

After a transfer terminates, some cleanup is performed. For multi-threaded transmission, the most important thing is to ensure the lifetime of each object. Interested readers can use shared_ptr to manage dynamically allocated objects, which is very convenient. In this example, raw pointers are used, and all the code has been debugged and verified.

In the next chapter, we will introduce the principle and implementation of the pipelined thread pool.

