Boost.Asio C++ Network Programming, Translation (22)

Source: Internet
Author: User

Synchronous I/O in server-side applications

Like the client, the server falls into the two scenarios matching Case 1 and Case 2 of the previous chapters, and both again use the "send request, read result" strategy. The first scenario is the synchronous server we implemented in the previous section. Reading a complete request is not straightforward when reading synchronously, because you need to avoid blocking (you can only read as much as is currently available):
void read_request() {
    if (sock_.available())
        already_read_ += sock_.read_some(
            buffer(buff_ + already_read_, max_msg - already_read_));
}
As soon as a message is fully read, it is processed and the reply is sent back to the client:

void process_request() {
    bool found_enter = std::find(buff_, buff_ + already_read_, '\n')
                       < buff_ + already_read_;
    if (!found_enter)
        return; // message isn't complete yet
    size_t pos = std::find(buff_, buff_ + already_read_, '\n') - buff_;
    std::string msg(buff_, pos);
    ...
    if (msg.find("login") == 0) on_login(msg);
    else if (msg.find("ping") == 0) on_ping();
    else ...
}

If we want to turn our server into a push server, we modify it as follows:
typedef std::vector<client_ptr> array;
array clients;
array notify;
std::string notify_msg;

void on_new_client() {
    // on a new client, we notify all clients of this event
    notify = clients;
    std::ostringstream msg;
    msg << "client count " << clients.size();
    notify_msg = msg.str();
    notify_clients();
}

void notify_clients() {
    for (array::const_iterator b = notify.begin(), e = notify.end();
         b != e; ++b) {
        (*b)->sock_.write_some(buffer(notify_msg));
    }
}

The on_new_client() method handles one of the events about which we need to notify all clients. notify_clients() is the method that notifies every client interested in an event. It sends the message but does not wait for a result from each client, because that would block. When a client later replies, it tells us what it is replying to (so we can handle it correctly).
Synchronizing threads on the server side

This is a very important concern: how many threads do we spawn to handle server requests? For a synchronous server, we need at least one thread to handle new connections:
void accept_thread() {
    ip::tcp::acceptor acceptor(service,
        ip::tcp::endpoint(ip::tcp::v4(), 8001));
    while (true) {
        client_ptr new_(new talk_to_client);
        acceptor.accept(new_->sock());
        boost::recursive_mutex::scoped_lock lk(cs);
        clients.push_back(new_);
    }
}

For the clients that already exist:
    • We can use a single thread. This is the simplest option, and it is the implementation I used for the Chapter 4 synchronous server. It easily handles 100-200 concurrent clients, sometimes more, which is sufficient for most cases.
    • We can spawn one thread per client. This is rarely a good choice; it wastes threads, sometimes makes debugging difficult, and once it needs to handle more than 200 concurrent clients it will likely hit a bottleneck.
    • We can use a fixed number of threads to handle the existing clients.
The third option is the most difficult to implement in a synchronous server; the entire talk_to_client class needs to be thread-safe. You then need a mechanism to decide which thread handles which client. For this, you have two options:
    • Assign specific clients to a specific thread; for example, thread 1 handles the first 20 clients, thread 2 handles clients 21 to 40, and so on.
    • Have each thread pick clients from the global list of existing clients. While a thread is serving a client (that is, while we wait on a blocking call on that client), the client is taken out of the list; once we are done, it is put back. Each thread iterates through the list of existing clients and picks the first client with a full pending request (we have read a complete message from that client), then replies to it.
In either case, the server may become unresponsive:
    1. In the first case, when several clients handled by the same thread send requests at the same time, since a thread can only process one request at a time. There is nothing we can do in this situation.
    2. In the second case, when the number of concurrent requests exceeds the current number of threads. Here we can simply create new threads to handle the load.
The following code snippet, somewhat similar to the earlier answer_to_client() method, shows how the second option is implemented:
struct talk_to_client : boost::enable_shared_from_this<talk_to_client> {
    ...
    void answer_to_client() {
        try {
            read_request();
            process_request();
        } catch (boost::system::system_error&) {
            stop();
        }
    }
};

We need to modify it so that it looks like the following code snippet:
struct talk_to_client : boost::enable_shared_from_this<talk_to_client> {
    boost::recursive_mutex cs;
    boost::recursive_mutex cs_ask;
    bool in_process;
    void answer_to_client() {
        { boost::recursive_mutex::scoped_lock lk(cs_ask);
          if (in_process)
              return;
          in_process = true;
        }
        { boost::recursive_mutex::scoped_lock lk(cs);
          try {
              read_request();
              process_request();
          } catch (boost::system::system_error&) {
              stop();
          }
        }
        { boost::recursive_mutex::scoped_lock lk(cs_ask);
          in_process = false;
        }
    }
};

While one thread is processing a client, that client's in_process variable is set to true, and the other threads ignore that client. The extra benefit is that the handle_clients_thread() method does not require any modification; you can create as many handle_clients_thread() threads as you want.