It took more than a week to put together and test this TCP-based high-concurrency network architecture.
Objective: use multiple threads to spread out network connections and time-consuming operations such as packet compression/decompression and encryption/decryption (ASIO provides no native support for these), organized as a thread-pool-like framework, while exposing only the appearance of a single thread to the game's logic layer, so the complexity of the underlying multithreading stays isolated.
The structure is as follows (it does not follow any formal notation standard, so take it as a rough sketch):
TCPSessionHandler: the class exposed to the logic layer. Through TCPIOThreadManager it interacts internally with a TCPSession mounted on some IO thread, shielding the upper layer from the multithreading details. Its declaration:

class TCPSessionHandler : public std::enable_shared_from_this<TCPSessionHandler>,
                          public boost::noncopyable {
 public:
  // ============================== TYPEDEFS ==============================
  // ============================== LIFECYCLE ==============================
  TCPSessionHandler();
  virtual ~TCPSessionHandler() {}
  // ============================== OPERATIONS ==============================
  // Sends message to remote endpoint; the content of message would be consumed.
  void SendMessage(NetMessage& message);
  // Closes the session.
  void Close();
  // True if the session is closed.
  bool IsClosed() { return kInvalidTCPSessionID == session_id_; }
  // Called when connection completes.
  virtual void OnConnect() = 0;
  // Called when a NetMessage is received.
  virtual void OnMessage(const NetMessage& message) = 0;
  // Called when the TCPSession is closed.
  virtual void OnClose() = 0;
};
(Note: the code in this article tries to follow the Google C++ Style Guide wherever possible.)
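Given the declaration above, a logic-layer handler might look roughly like the sketch below. The static Create factory and its signature are my assumption (it only needs to match what TCPServer/TCPClient expect for their handler-factory argument); they are not shown in the original code.

class MyHandler : public TCPSessionHandler {
 public:
  // Hypothetical factory, referenced later as &MyHandler::Create in the sample code.
  static std::shared_ptr<TCPSessionHandler> Create() {
    return std::make_shared<MyHandler>();
  }

  virtual void OnConnect() {
    // Connection established: e.g. send a login/handshake message.
  }

  virtual void OnMessage(const NetMessage& message) {
    // One complete logical message, already reassembled/decompressed/decrypted by the filter.
  }

  virtual void OnClose() {
    // Session closed: release any logic-layer state bound to it.
  }
};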
NetMessageList: a NetMessage is a network message with a clear logical boundary; one or more NetMessages make up a NetMessageList (a minimal sketch of both appears after the component list below).
TCPIOThreadManager: manages one or more TCPIOThreads. One of the TCPIOThreads logically acts as the main thread.
CommandList: a command queue used for inter-thread interaction; it is also the only synchronization mechanism between threads in the framework.
TCPIOThread: an IO thread, which through its CommandList can also serve as a worker thread. Each thread runs its own ASIO io_service to handle multiple TCPSessions.
NetMessageFilterInterface: the network message filter interface (omitted here because it is rather long). It can be customized; typically it encapsulates packet framing, compression, encryption, and similar processing to match different logic-layer protocol requirements.
TCPSession: the underlying network connection, which makes no distinction between server and client. It is responsible for sending and receiving network data.
On the outside there are two classes, TCPServer and TCPClient, which can share the same TCPIOThreadManager, so that in a multi-server architecture one process can act as a TCP server and at the same time as a client of other servers.
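For orientation only, NetMessage and NetMessageList can be pictured roughly as below. This is a sketch based purely on the descriptions in this article (a variable-length, vector-backed buffer; see the closing note on memory allocation), not the actual declarations, and the container choice for NetMessageList is an assumption.

#include <cstddef>
#include <list>
#include <vector>

// Sketch: a logically complete network message backed by a variable-length buffer.
class NetMessage {
 public:
  const char* data() const { return buffer_.empty() ? NULL : &buffer_[0]; }
  std::size_t size() const { return buffer_.size(); }
  void Write(const char* data, std::size_t size) {
    buffer_.insert(buffer_.end(), data, data + size);
  }

 private:
  std::vector<char> buffer_;  // dynamic allocation here is the efficiency concern noted at the end
};

// One or more NetMessages form a NetMessageList (container type assumed).
typedef std::list<NetMessage> NetMessageList;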
Sample Code:
int main(int argc, char** argv) {
  TCPIOThreadManager manager(1,                                // thread num
                             boost::posix_time::millisec(2));  // sync interval
  unsigned short int port = 20000;
  TCPServer server({boost::asio::ip::tcp::v4(), port},
                   manager,
                   &MyHandler::Create,
                   &MyFilter::Create);
  manager.Run();
  return 0;
}
The client is similar.
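A client-side sketch, assuming TCPClient takes the same kind of arguments as TCPServer (the remote endpoint to connect to, the manager, a handler factory, and a filter factory); the actual constructor is not shown in this article:

int main(int argc, char** argv) {
  TCPIOThreadManager manager(1,                                // thread num
                             boost::posix_time::millisec(2));  // sync interval
  boost::asio::ip::tcp::endpoint server_endpoint(
      boost::asio::ip::address::from_string("127.0.0.1"), 20000);
  TCPClient client(server_endpoint,  // assumed parameter: the endpoint to connect to
                   manager,
                   &MyHandler::Create,
                   &MyFilter::Create);
  manager.Run();
  return 0;
}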
Thread synchronization policy:
Thread synchronization uses the command pattern, built on the C++0x std::function facility. CommandList is defined as:
typedef std::list<std::function<void()>> CommandList;
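Each command is just a callable object, so anything wrapped in a std::function can be queued. A trivial, self-contained example of filling and draining a CommandList (independent of the framework classes, using a C++11 lambda):

#include <cstdio>
#include <functional>
#include <list>

typedef std::list<std::function<void()> > CommandList;

int main() {
  CommandList commands;
  int value = 42;
  // On the sending thread: queue the work.
  commands.push_back([value]() { std::printf("command ran with %d\n", value); });
  // On the receiving thread (after the splice/swap shown below): execute it.
  for (auto it = commands.begin(); it != commands.end(); ++it)
    (*it)();
  return 0;
}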
At fixed intervals each thread sends, in one batch, the commands destined for other threads. Because splicing a std::list is only a few pointer operations, this can be done efficiently under a spinlock (pseudo code):
CommandList thread1.commands_to_be_sent_;
CommandList thread2.commands_received_;

// thread1:
{
  thread1.commands_to_be_sent_.push_back(command);
  ...;
  thread2.spinlock_.lock();
  // Splice thread1.commands_to_be_sent_ onto the end of thread2.commands_received_.
  thread2.commands_received_.splice(thread2.commands_received_.end(),
                                    thread1.commands_to_be_sent_);
  thread2.spinlock_.unlock();
}

// thread2:
{
  CommandList temp_list;
  thread2.spinlock_.lock();
  // Swap commands_received_ into the temporary list to keep the lock time short.
  temp_list.swap(thread2.commands_received_);
  thread2.spinlock_.unlock();
  for (auto it = temp_list.begin(); it != temp_list.end(); ++it)
    (*it)();
  temp_list.clear();
}
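The spinlock itself is not part of the listing above. A minimal C++11 sketch using std::atomic_flag is shown below as one possible implementation; the original may well use a Boost or platform-specific lock instead.

#include <atomic>

// Busy-waiting lock: acceptable here only because the critical sections
// above (splice/swap of a std::list) are just a few pointer operations.
class SpinLock {
 public:
  void lock() {
    while (flag_.test_and_set(std::memory_order_acquire)) {
      // Spin until the holder calls unlock().
    }
  }
  void unlock() { flag_.clear(std::memory_order_release); }

 private:
  std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};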
The hand-off of a NetMessageList from TCPSessionHandler to TCPSession is implemented through this mechanism, so the locking overhead is almost negligible.
Remaining issue: the variable-length buffer in NetMessage is implemented with a vector, so dynamic memory allocation is a potential efficiency hazard. If real-world use confirms this, a memory pool could be substituted, but an efficient multi-threaded memory pool is somewhat more complex; I will write up those ideas another day.