Design of cross-platform network library

Source: Internet
Author: User
Tags: epoll
Recently I have been developing a cross-platform network library. Its purpose is to encapsulate the low-level network details and provide a simple interface; this post records the design ideas.

A network game server usually needs two kinds of network IO: high-connection-count IO for client connections, and low-connection-count, high-throughput IO between servers. The latter is the simpler of the two and can use a blocking or asynchronous model. The former requires a network model that supports high concurrency, such as IOCP or epoll.

Development goals:
1: Encapsulate low-level network details and complexity
2: Provide a simple and flexible interface
3: Work under both Windows and Linux

Key designs:

1: Cross-platform

To meet the cross-platform requirement, I considered three approaches:
A: Write two implementations with a consistent interface, using #ifdef and other preprocessor directives to select the operating system
B: Develop on top of the ACE framework
C: Develop on top of the boost ASIO library

After consideration I adopted the third option. The first has too high a development cost: design, coding, and debugging all require more time. ACE is too large, its abstraction level too high, its learning curve too steep, and it is difficult to debug and maintain. Boost ASIO is just right: the entire library is a bit over 10,000 lines of source, and combined with other boost libraries it saves most of the cross-platform work. (PS: ASIO is a lightweight, high-quality C++ library, introduced in boost 1.35, that encapsulates network models such as IOCP, epoll, and kqueue, and provides a consistent proactor pattern.)

2: Threading

Compared with the game logic, the network layer is easy to make multithreaded, so no special design was needed; I use the traditional approach of locking shared resources. Both sending and receiving are handled by a thread pool.
3: Memory management

This is mainly reflected in the buffer design. My first attempt was a Var_buffer class which, like std::vector and other containers, could grow dynamically and would reclaim unused memory after a certain period. I then changed it to a ring buffer, which also supports dynamic growth and memory reclamation. A small episode here: when I settled on a ring buffer, my first thought was boost::circular_buffer, but one minute with the boost documentation overturned that idea. boost::circular_buffer is designed as a generic container; with boost::circular_buffer<char>, every byte is pushed into the ring one element at a time, which has a significant effect on efficiency.

4: Send strategy

For a network library, receiving is the natural part; sending is where the trouble is. After careful thought, my sending strategy is as follows:

Step one: store the data. The client program calls the library's send interface, which copies the data directly into the session's send buffer and notifies the sending thread that there is data to send; the send function then returns (in the calling thread).

Step two: process the data. The sending thread takes the data out, encrypts and compresses it, and puts it into another buffer, B1. Operations that consume noticeable CPU, such as encryption and compression, are thereby shifted into the sending thread pool.
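The growable ring buffer from section 3 can be sketched in standard C++. The point of the design is that reads and writes move whole blocks with memcpy rather than pushing one element at a time; the class below is an illustrative sketch under that assumption, not the library's actual buffer.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

// A growable ring buffer for raw bytes. Unlike a generic container
// pushed one char at a time, write/read copy whole blocks with memcpy.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity = 4096) : buf_(capacity) {}

    std::size_t size() const { return size_; }

    void write(const void* data, std::size_t len) {
        if (size_ + len > buf_.size()) grow(size_ + len);
        const char* src = static_cast<const char*>(data);
        std::size_t tail = (head_ + size_) % buf_.size();
        std::size_t first = std::min(len, buf_.size() - tail);
        std::memcpy(&buf_[tail], src, first);          // up to the wrap point
        std::memcpy(&buf_[0], src + first, len - first); // remainder, if any
        size_ += len;
    }

    std::size_t read(void* out, std::size_t len) {
        len = std::min(len, size_);
        char* dst = static_cast<char*>(out);
        std::size_t first = std::min(len, buf_.size() - head_);
        std::memcpy(dst, &buf_[head_], first);
        std::memcpy(dst + first, &buf_[0], len - first);
        head_ = (head_ + len) % buf_.size();
        size_ -= len;
        return len;
    }

private:
    void grow(std::size_t needed) {
        std::size_t cap = buf_.size();
        while (cap < needed) cap *= 2;
        std::vector<char> fresh(cap);
        std::size_t n = size_;
        read(fresh.data(), n);   // unwrap existing data to the front
        buf_.swap(fresh);
        head_ = 0;
        size_ = n;
    }

    std::vector<char> buf_;
    std::size_t head_ = 0;   // index of the oldest byte
    std::size_t size_ = 0;   // bytes currently stored
};
```

Periodic shrinking of long-unused capacity (the memory reclamation mentioned above) is omitted here for brevity.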
Step three: send the data. If the session currently has no send in progress, post a send request and send the data in buffer B1.

Two questions arise here:
1: If a new send request arrives while B1 is being sent, how is it handled?
2: While B1 is being sent, how is its validity maintained, and what happens to the buffer after the send completes?

For the first question I adopted a small trick: introduce a second buffer, B2, so that B1 and B2 form a queue of exactly two elements. When sending, first pop a buffer from the front of the queue and send it; if a send request arrives during the send, the data is written into queue.front(). That way, data is always appended to the buffer at queue.front(). When the send of B1 completes, B1 is pushed back into the queue, which closes the loop.

For the second question, to guarantee the buffer's validity, buffers are managed by reference-counted smart pointers, and the smart pointer is passed as a parameter when the send request is posted. When the send completes, the buffer is pushed back into the queue as above, so a buffer is only released, and its memory reclaimed, after the connection is closed.

5: Receive strategy

Receiving uses the common two-read approach: first receive a fixed-length header, then parse the packet length from the header and post a recv request to read the body. In addition, every network event that must be fed back to the user (including new connections, disconnects, and so on) is wrapped in a message and serialized into a message queue; the library's user calls a handle_event API to fetch messages from the queue.

6: Server-initiated disconnect

When the server actively disconnects, the following must be guaranteed:
1: The data still cached in the network library is sent to the client.
2: After the interface function disconnect returns, no more messages are received for that connection.
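The two-buffer rotation from section 4 can be sketched as follows. This is an illustrative sketch: `SendQueue` and its method names are my own, `std::string` stands in for the library's byte buffer, and the session lock that would protect these calls in multithreaded code is omitted for brevity.

```cpp
#include <cstddef>
#include <deque>
#include <memory>
#include <string>

// Two buffers (B1, B2) rotate through a queue: new data is always
// appended to queue.front(), while at most one buffer is out being
// sent. A shared_ptr keeps a buffer alive for the whole async send.
using Buffer = std::string;  // stand-in for the library's byte buffer
using BufferPtr = std::shared_ptr<Buffer>;

class SendQueue {
public:
    SendQueue() {
        queue_.push_back(std::make_shared<Buffer>());  // B1
        queue_.push_back(std::make_shared<Buffer>());  // B2
    }

    // Stage outgoing bytes (already encrypted/compressed) for sending.
    void append(const char* data, std::size_t len) {
        queue_.front()->append(data, len);
    }

    // Start a send if idle: pop the front buffer and hand it to the
    // caller to post as a send request. Returns nullptr if nothing
    // to send or a send is already in flight.
    BufferPtr begin_send() {
        if (sending_ || queue_.front()->empty()) return nullptr;
        BufferPtr buf = queue_.front();
        queue_.pop_front();
        sending_ = true;
        return buf;
    }

    // Send-completion handler: clear the buffer and push it back,
    // closing the loop. The shared_ptr keeps it valid until here.
    void end_send(BufferPtr buf) {
        buf->clear();
        queue_.push_back(std::move(buf));
        sending_ = false;
    }

private:
    std::deque<BufferPtr> queue_;  // holds B1 and B2 (minus one in flight)
    bool sending_ = false;
};
```

Data that arrives while a buffer is in flight lands in the other buffer and is picked up by the next `begin_send`, so a single outstanding send request is enough.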
7: Packet header structure

The packet header needs to contain the following information:
1: Packet length
2: Packet sequence number
3: Encryption information, usually a key
4: Compression information
5: Check information, to prevent the packet from being tampered with

8: Binary interface design

Given the project's situation, two sets of interfaces are provided: a set of C++ interfaces for C++ programmers, borrowing some of the design of COM, and a set of C APIs whose purpose is to be callable from scripts.
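A header carrying the five fields of section 7 could look like the sketch below. The field widths, their order, and the little-endian wire layout are my assumptions for illustration; fields are packed by hand so the wire format does not depend on compiler padding.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative packet header with the five fields listed above.
// Widths and layout are assumptions, not the library's actual format.
struct PacketHeader {
    std::uint32_t length   = 0;  // total packet length, header included
    std::uint32_t sequence = 0;  // packet serial number
    std::uint32_t key      = 0;  // encryption key material
    std::uint8_t  flags    = 0;  // bit 0: payload is compressed
    std::uint32_t checksum = 0;  // guards against tampering

    static constexpr std::size_t kWireSize = 17;  // 4 + 4 + 4 + 1 + 4

    // Append the header to 'out' in little-endian byte order.
    void serialize(std::vector<std::uint8_t>& out) const {
        auto put32 = [&out](std::uint32_t v) {
            for (int i = 0; i < 4; ++i)
                out.push_back(static_cast<std::uint8_t>(v >> (8 * i)));
        };
        put32(length);
        put32(sequence);
        put32(key);
        out.push_back(flags);
        put32(checksum);
    }

    // Parse kWireSize bytes back into a header.
    static PacketHeader parse(const std::uint8_t* p) {
        auto get32 = [&p]() {
            std::uint32_t v = 0;
            for (int i = 0; i < 4; ++i)
                v |= std::uint32_t(*p++) << (8 * i);
            return v;
        };
        PacketHeader h;
        h.length   = get32();
        h.sequence = get32();
        h.key      = get32();
        h.flags    = *p++;
        h.checksum = get32();
        return h;
    }
};
```

With a fixed `kWireSize`, the two-read receive policy of section 5 falls out naturally: read exactly `kWireSize` bytes, call `parse`, then read `length - kWireSize` bytes of body.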
