Boost.Asio C++ Network Programming — Translation (25)


Proxy Implementation

A proxy usually sits between a client and a server. It accepts requests from the client, possibly modifies them, and forwards them to the server; it then retrieves the results from the server, possibly modifies them, and sends them back to the client. What is special about a proxy is that for each connection you need two sockets: one to the client and one to the server. This adds a bit of difficulty to implementing a proxy. Implementing a synchronous proxy application is more complex than an asynchronous one, because data can arrive from either side (client or server), or from both sides at the same time. This means that if we go synchronous, we could end up blocked in a read() or write() on one side while the other side needs servicing, which means we would eventually become unresponsive. Consider the following simple example of an asynchronous proxy:
    • In our case, we receive both endpoints in the constructor. This is not always possible; for a web proxy, for example, the client only tells us the server's address.
    • To keep it simple, it is not thread-safe. Refer to the following code:
class proxy : public boost::enable_shared_from_this<proxy> {
    proxy(ip::tcp::endpoint ep_client, ip::tcp::endpoint ep_server)
        : ... {}
public:
    static ptr start(ip::tcp::endpoint ep_client,
                     ip::tcp::endpoint ep_svr) {
        ptr new_(new proxy(ep_client, ep_svr));
        // ... connect to both ends
        return new_;
    }
    void stop() {
        // ... close both connections
    }
    bool started() { return started_ == 2; }
private:
    void on_connect(const error_code & err) {
        if (!err) {
            if (++started_ == 2) on_start();
        } else
            stop();
    }
    void on_start() {
        do_read(client_, buff_client_);
        do_read(server_, buff_server_);
    }
    // ...
private:
    ip::tcp::socket client_, server_;
    enum { max_msg = 1024 };
    char buff_client_[max_msg], buff_server_[max_msg];
    int started_;
};

This is a very simple proxy. Once both ends are connected, it starts reading from both sides (the on_start() method):
class proxy : public boost::enable_shared_from_this<proxy> {
    ...
    void on_read(ip::tcp::socket & sock, const error_code & err,
                 size_t bytes) {
        char * buff = &sock == &client_ ? buff_client_ : buff_server_;
        do_write(&sock == &client_ ? server_ : client_, buff, bytes);
    }
    void on_write(ip::tcp::socket & sock, const error_code & err,
                  size_t bytes) {
        if (&sock == &client_) do_read(server_, buff_server_);
        else                   do_read(client_, buff_client_);
    }
    void do_read(ip::tcp::socket & sock, char * buff) {
        async_read(sock, buffer(buff, max_msg),
                   MEM_FN3(read_complete, ref(sock), _1, _2),
                   MEM_FN3(on_read, ref(sock), _1, _2));
    }
    void do_write(ip::tcp::socket & sock, char * buff, size_t size) {
        sock.async_write_some(buffer(buff, size),
                              MEM_FN3(on_write, ref(sock), _1, _2));
    }
    size_t read_complete(ip::tcp::socket & sock,
                         const error_code & err, size_t bytes) {
        // read everything already available; otherwise ask for
        // at least one more byte
        if (sock.available() > 0) return sock.available();
        return bytes > 0 ? 0 : 1;
    }
};

For each successful read operation (on_read), the message is sent to the other side. As soon as the message has been sent successfully (on_write), we read again from the source side. Use the following code snippet to get this machinery running:
int main(int argc, char* argv[]) {
    ip::tcp::endpoint ep_c(ip::address::from_string("127.0.0.1"),
                           8001);
    ip::tcp::endpoint ep_s(ip::address::from_string("127.0.0.1"),
                           8002);
    proxy::start(ep_c, ep_s);
    service.run(); // 'service' is the global io_service used throughout
}

You will notice that I reuse the buffers for reading and writing. This reuse is OK, because a message read from the client is written to the server before a new message is read from the client, and vice versa. It also means this implementation suffers from a responsiveness problem: while we are writing to side B, we are not reading from side A (we only resume reading from side A once the write to side B completes). You can overcome this problem by rewriting the implementation as follows:
    • Use multiple read buffers.
    • For each successful read operation, besides asynchronously writing the message to the other side, start an additional read (into a new buffer).
    • For each successful write operation, destroy (or reuse) that buffer.
I will leave this as an exercise for you.
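The interesting part of that exercise is the buffer management, which can be sketched without any networking at all. The following is a minimal, hypothetical sketch (the class name buffer_pool is my own, not from the book) of the reuse strategy: each read acquires a buffer, and each completed write releases its buffer back for reuse instead of destroying it:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper: a pool of fixed-size message buffers.
// Each new read acquires a buffer (so reads never have to wait for
// writes to finish), and each completed write recycles its buffer.
class buffer_pool {
public:
    enum { max_msg = 1024 };
    typedef std::vector<char> buffer_t;

    buffer_pool() : allocated_(0) {}

    // Take a buffer for a new read: reuse a recycled one if possible,
    // otherwise allocate a fresh one.
    buffer_t acquire() {
        if (free_.empty()) {
            ++allocated_;
            return buffer_t(max_msg);
        }
        buffer_t b = free_.back();
        free_.pop_back();
        return b;
    }
    // A write completed: recycle its buffer instead of destroying it.
    void release(buffer_t b) { free_.push_back(b); }

    // Number of distinct buffers ever allocated (for inspection).
    std::size_t allocated() const { return allocated_; }

private:
    std::vector<buffer_t> free_;
    std::size_t allocated_;
};
```

With this in place, on_read would acquire a buffer, start the write to the other side, and immediately issue the next read into a freshly acquired buffer; on_write would release the buffer it just finished sending. In steady state the pool stops growing, because writes recycle buffers as fast as reads consume them.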
Summary

There are a number of things to consider when choosing between synchronous and asynchronous. The first is to avoid mixing them. In this chapter, we have learned:
    • How easy each type of application is to implement, test, and debug
    • How threads affect your application
    • How the application's behavior (pull-like or push-like) affects its implementation
    • How to embed your own asynchronous operations when you choose async
Next, we'll look at some of Boost.Asio's lesser-known features, and my favorite Boost.Asio feature: coroutines, which let you enjoy the benefits of asynchronous programming with few of its drawbacks.

