Cross-platform network library design

I have recently been developing a cross-platform network library. Its goal is to encapsulate the details of the underlying network and expose a simple interface to callers; this post records the design ideas.

Game servers usually need two kinds of network I/O: one serves a large number of client connections (many connections, moderate throughput each), and the other serves a small number of server-to-server connections with high throughput. The second kind is relatively simple and can use a blocking or plain asynchronous model; the first usually requires a model built for high concurrency, such as IOCP or epoll.

Development goals: 1) encapsulate the details and complexity of the underlying network; 2) provide simple and flexible interfaces; 3) work on at least Windows and Linux.

Key design decisions follow.

1. Cross-platform approach. Three options were considered: A) write two implementations behind one consistent interface, selected with preprocessor directives such as #ifdef; B) develop on top of the ACE framework; C) develop on top of the Boost.Asio library. I adopted option C. Option A costs too much, demanding a great deal of coding and debugging time. ACE is too large, its abstraction level too high, its learning curve steep, and it is difficult to debug and maintain. Boost.Asio is a good fit: the whole library is a little over ten thousand lines of source, and combined with other Boost libraries it saves most of the cross-platform work. (Asio is a lightweight, high-quality C++ library that entered Boost in version 1.35. It wraps network models such as IOCP, epoll, and kqueue and presents a consistent proactor-style interface.)

2. Threading. Compared with game logic, the network layer is easy to implement with multiple threads. Nothing special is done here; traditional lock-based resource protection is used.
Data sending and receiving are handled by a thread pool.

3. Memory management. This mostly shows up in the design of the buffers. My first attempt was a quickly written var_buffer class that, like std::vector and other standard containers, grows dynamically and releases memory that has gone unused for a while. Later I switched to a circular (ring) buffer, which likewise supports dynamic growth and memory reclamation. A small episode here: when the idea of a circular buffer came up, boost::circular_buffer was the obvious first candidate, but one minute with the Boost documentation ruled it out. boost::circular_buffer is designed as a generic container; with boost::circular_buffer<char>, every byte is pushed element by element, and the per-element copying hurts efficiency badly.

4. Sending policy. Receiving comes naturally to a network library; sending takes a little more care. After some consideration, my sending policy is as follows. Step 1, store the data: when the client program calls the library's send interface, the data is copied directly into the session's send buffer and the sending thread is notified that data is pending; the send call then returns in the caller's thread. Step 2, process the data: the sending thread takes the data, encrypting and compressing it into another buffer B1. Encryption, compression, and other CPU-consuming work are thereby shifted onto the sending thread pool. Step 3, send the data: if no send is currently in progress for this session, a send request is issued for the contents of B1. Two problems arise here: 1) if B1 is in the middle of being sent, how should new send requests be handled?
2) how do we keep B1 valid while the send is in flight, and how do we release it once the data has been sent? A small trick solves both: introduce a second buffer B2, and let B1 and B2 form a queue of exactly two elements. To send, first pop a buffer from the queue, then issue the send with it. If a send request arrives while that send is in progress, the data is written into queue.front(); new data is always appended to the queue.front() buffer. When the send of B1 completes, B1 is pushed back into the queue, closing the loop. To guarantee the buffer's validity, a reference-counted smart pointer manages it: the smart pointer is passed as a parameter of the send request, and when the send completes it is pushed back onto the queue. The buffers are therefore only destroyed when the connection closes; until then they are recycled in a loop.

5. Receiving policy. Receiving uses the common two-phase read: first receive a fixed-length header, parse the packet length out of it, then issue another recv request to read the body. In addition, everything the library needs to feed back to the user (including events such as new connections and disconnections) is wrapped in a message and serialized into a message queue; the user calls a handle_event interface to pull messages off the queue.

6. Active disconnection. When the server actively closes a connection, the library makes two guarantees: 1) after the disconnect interface function returns, no further messages for that connection are delivered; 2) any data still cached in the library is sent to the client before the connection closes.

7. Packet header. The header must carry at least the following information: 1) packet length; 2) packet sequence number; 3) encryption information, usually the key; 4) compression information; 5) verification information, to detect tampering with the packet.

8. Interface design. Matching the needs of the project, two sets of interfaces are provided: a C++ interface for C++ programmers, borrowing some ideas from COM, and a C API intended to be called from scripts.
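The growable ring buffer from point 3 can be sketched as follows. This is a hypothetical illustration, not the library's actual var_buffer: the point is that reads and writes move whole blocks with memcpy, avoiding the per-byte copies that ruled out boost::circular_buffer<char>.

```cpp
// Hypothetical sketch of point 3's growable ring buffer: bulk memcpy-based
// reads and writes instead of per-element pushes. Illustrative names only.
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

class RingBuffer {
public:
    explicit RingBuffer(std::size_t cap = 4096) : buf_(cap) {}

    // Append n bytes, growing (and linearizing) the storage if needed.
    void write(const char* data, std::size_t n) {
        if (size_ + n > buf_.size()) grow(size_ + n);
        std::size_t tail  = (head_ + size_) % buf_.size();
        std::size_t first = std::min(n, buf_.size() - tail);  // up to the wrap point
        std::memcpy(&buf_[tail], data, first);
        std::memcpy(&buf_[0], data + first, n - first);       // wrapped remainder
        size_ += n;
    }

    // Pop up to n bytes into out; returns the number actually copied.
    std::size_t read(char* out, std::size_t n) {
        n = std::min(n, size_);
        std::size_t first = std::min(n, buf_.size() - head_);
        std::memcpy(out, &buf_[head_], first);
        std::memcpy(out + first, &buf_[0], n - first);
        head_ = (head_ + n) % buf_.size();
        size_ -= n;
        return n;
    }

    std::size_t size() const { return size_; }

private:
    // Reallocate and unwrap the contents to the front of the new storage.
    void grow(std::size_t need) {
        std::vector<char> bigger(std::max(buf_.size() * 2, need));
        std::size_t n = size_;
        read(bigger.data(), n);   // linearize old contents into new storage
        buf_.swap(bigger);
        head_ = 0;
        size_ = n;
    }

    std::vector<char> buf_;
    std::size_t head_ = 0, size_ = 0;
};
```

The memory-reclamation behavior the author describes (shrinking after a period of low usage) is omitted here, but would be a second branch in grow() triggered from a timer.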
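The two-buffer rotation from point 4 can be sketched single-threaded, which makes the invariant easy to see: new data always goes to queue.front(), the in-flight buffer is kept alive by a shared_ptr, and a completed buffer rejoins the back of the queue. Real code would lock the queue and issue the send asynchronously; all names here are illustrative.

```cpp
// Single-threaded sketch of point 4's two-buffer send rotation.
// Locking and the actual async send are omitted; names are illustrative.
#include <deque>
#include <memory>
#include <string>

struct Session {
    std::deque<std::shared_ptr<std::string>> queue;  // holds B1 and B2
    std::shared_ptr<std::string> in_flight;          // buffer currently being sent

    Session() {
        queue.push_back(std::make_shared<std::string>());  // B1
        queue.push_back(std::make_shared<std::string>());  // B2
    }

    // Steps 1-2: new data is always appended to queue.front().
    void submit(const std::string& data) {
        queue.front()->append(data);
        if (!in_flight) start_send();
    }

    // Step 3: pop a buffer and "send" it; the shared_ptr keeps it valid.
    void start_send() {
        if (queue.front()->empty()) return;
        in_flight = queue.front();
        queue.pop_front();
    }

    // Completion handler: recycle the sent buffer back into the queue.
    std::string on_send_complete() {
        std::string sent = *in_flight;
        in_flight->clear();
        queue.push_back(in_flight);   // close the loop: the buffer rejoins
        in_flight.reset();
        start_send();                 // data queued meanwhile goes out next
        return sent;
    }
};
```

Data submitted while a send is in flight accumulates in the front buffer and is flushed in one send when the current one completes, which also batches small writes for free.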
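Points 5 and 7 combine naturally: the fixed-length header read in phase one carries exactly the fields listed in point 7. Below is a sketch run against an in-memory byte stream instead of a socket; the field widths, names, and 16-byte layout are assumptions for illustration, not the library's actual wire format.

```cpp
// Sketch of point 7's header layout and point 5's two-phase read,
// run against an in-memory stream. Field widths/names are assumptions.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

struct PacketHeader {           // fixed length: 16 bytes in this sketch
    std::uint32_t body_len;     // 1) packet length (body only)
    std::uint32_t seq_no;       // 2) packet sequence number
    std::uint32_t key;          // 3) encryption information
    std::uint16_t compressed;   // 4) compression information
    std::uint16_t checksum;     // 5) verification information
};

// Phase 1: read sizeof(PacketHeader) bytes and parse the length.
// Phase 2: read hdr.body_len more bytes as the body.
// Returns false if the stream does not yet hold a complete packet.
bool read_packet(const std::vector<char>& stream, std::size_t& offset,
                 PacketHeader& hdr, std::string& body) {
    if (stream.size() - offset < sizeof(PacketHeader)) return false;
    std::memcpy(&hdr, stream.data() + offset, sizeof(PacketHeader));
    if (stream.size() - offset - sizeof(PacketHeader) < hdr.body_len) return false;
    body.assign(stream.data() + offset + sizeof(PacketHeader), hdr.body_len);
    offset += sizeof(PacketHeader) + hdr.body_len;
    return true;
}
```

Against a real socket each phase would be its own recv request, as the author describes: the second recv is only issued once the header has arrived and body_len is known.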
