Boost.Asio C++ Network Programming, Translated (24)


Multithreading on the asynchronous server side

At the end of Chapter 4, the client and server shown were single-threaded; everything happens in main():
int main() {
    talk_to_client::ptr client = talk_to_client::new_();
    acc.async_accept(client->sock(),
                     boost::bind(handle_accept, client, _1));
    service.run();
}

The beauty of asynchronous programming is how simple it is to turn a single-threaded server into a multi-threaded one. Say you find that the number of your concurrent clients has grown past 200; you can then use the following code snippet to go from a single thread to 100 threads:
boost::thread_group threads;
void listen_thread() {
    service.run();
}
void start_listen(int thread_count) {
    for (int i = 0; i < thread_count; ++i)
        threads.create_thread(listen_thread);
}
int main(int argc, char* argv[]) {
    talk_to_client::ptr client = talk_to_client::new_();
    acc.async_accept(client->sock(),
                     boost::bind(handle_accept, client, _1));
    start_listen(100);
    threads.join_all();
}
Of course, once you have chosen multi-threading, you need to think about thread safety. Even though you called an async_* method in thread A, its completion handler can be invoked in thread B (since thread B also calls service.run()). By itself this is not a problem. As long as you follow a logical sequence of operations, that is, from async_read() to on_read(), from on_read() to process_request(), from process_request() to async_write(), from async_write() to on_write(), and from on_write() back to async_read(), and no public methods of your talk_to_client class are called from outside this flow, then even though the different methods can be called on different threads, they will always be called in order, and no mutex is required.

This, however, implies that there is only ever one asynchronous operation pending per client. If at some point a client has two pending asynchronous operations, you do need a mutex, because the two pending operations could complete at virtually the same time, and their completion handlers would then be invoked concurrently on two different threads. So you need thread safety, which means you need mutexes. In our asynchronous server, we do in fact have two pending operations at the same time:
void do_read() {
    async_read(sock_, buffer(read_buffer_),
               MEM_FN2(read_complete, _1, _2), MEM_FN2(on_read, _1, _2));
    post_check_ping();
}
void post_check_ping() {
    timer_.expires_from_now(boost::posix_time::millisec(5000));
    timer_.async_wait(MEM_FN(on_check_ping));
}
When we perform a read, we asynchronously wait both for the read to complete and for the ping timeout; therefore thread safety is required. My advice: if you plan to use multi-threading, make your classes thread-safe from the start. Usually this does not hurt performance (and you can always add a configuration switch). Also, if you are going to go multi-threaded, do it from the beginning; that way you discover potential problems as early as possible. Once you find a problem, the first thing to check is: does it also happen with a single thread? If it does, the fix is simple; just debug it. Otherwise, you have probably forgotten to lock (mutex) some method.

Because our example needs to be thread-safe, I have modified talk_to_client to use mutexes. We also have a list of client connections, and it needs its own mutex, because we access it from several places. Avoiding deadlocks and data races is not easy. Here is how I had to change the update_clients_changed() method:
void update_clients_changed() {
    array copy;
    { boost::recursive_mutex::scoped_lock lk(clients_cs);
      copy = clients; }
    for (array::iterator b = copy.begin(), e = copy.end(); b != e; ++b)
        (*b)->set_clients_changed();
}

What you need to avoid is having two mutexes locked at the same time (which can lead to a deadlock). In our case, we never want clients_cs and a client's own cs_ mutex to be locked at the same time.
Asynchronous operations

Boost.Asio also allows you to run any of your own methods asynchronously. Just use the following code snippet:
void my_func() {
    ...
}
service.post(my_func);
This guarantees that my_func() is invoked from one of the threads that have called service.run(). You can also invoke a method asynchronously together with a completion handler, which notifies you when the method has finished. The pseudocode looks like this:
void on_complete() {
    ...
}
void my_func() {
    ...
    service.post(on_complete);
}
async_call(my_func);

There is no async_call() method out of the box, so you have to create it yourself. Fortunately, it is not very complex; see the following code snippet:
struct async_op : boost::enable_shared_from_this<async_op> {
    typedef boost::function<void(boost::system::error_code)> completion_func;
    typedef boost::function<boost::system::error_code()> op_func;
    struct operation { ... };
    void start() {
        { boost::recursive_mutex::scoped_lock lk(cs_);
          if (started_) return;
          started_ = true; }
        boost::thread t(boost::bind(&async_op::run, this));
    }
    void add(op_func op, completion_func completion, io_service& service) {
        self_ = shared_from_this();
        boost::recursive_mutex::scoped_lock lk(cs_);
        ops_.push_back(operation(service, op, completion));
        if (!started_) start();
    }
    void stop() {
        boost::recursive_mutex::scoped_lock lk(cs_);
        started_ = false;
        ops_.clear();
    }
private:
    boost::recursive_mutex cs_;
    std::vector<operation> ops_;
    bool started_;
    ptr self_;
};
The async_op class creates a background thread, which runs (run()) all the asynchronous operations you add (add()) to it. To keep things simple, each operation contains the following:
    • A method to be called asynchronously
    • A completion handler to be called when that method ends
    • The io_service instance on which the completion handler will run. This is also where you are notified of completion. See the following code:
struct async_op : boost::enable_shared_from_this<async_op>,
                  private boost::noncopyable {
    struct operation {
        operation(io_service& service, op_func op,
                  completion_func completion)
            : service(&service), op(op), completion(completion),
              work(new io_service::work(service)) {}
        operation() : service(0) {}
        io_service* service;
        op_func op;
        completion_func completion;
        typedef boost::shared_ptr<io_service::work> work_ptr;
        work_ptr work;
    };
    ...
};

These are held in the operation structure. Note that while an operation is pending, we construct an io_service::work instance in operation's constructor to guarantee that service.run() does not return before we have completed our asynchronous call (while an io_service::work instance is alive, service.run() considers that it still has work to do). See the following code snippet:
struct async_op : ... {
    typedef boost::shared_ptr<async_op> ptr;
    static ptr new_() { return ptr(new async_op); }
    ...
    void run() {
        while (true) {
            { boost::recursive_mutex::scoped_lock lk(cs_);
              if (!started_) break; }
            boost::this_thread::sleep(boost::posix_time::millisec(10));
            operation cur;
            { boost::recursive_mutex::scoped_lock lk(cs_);
              if (!ops_.empty()) {
                  cur = ops_[0];
                  ops_.erase(ops_.begin());
              } }
            if (cur.service)
                cur.service->post(boost::bind(cur.completion, cur.op()));
        }
        self_.reset();
    }
};

The run() method is the background thread; it simply watches for work to do and, if there is any, runs the asynchronous methods one by one. At the end of each call, it posts the corresponding completion handler. To test it, we create a compute_file_checksum() method, which is then executed asynchronously:
size_t checksum = 0;
boost::system::error_code compute_file_checksum(std::string file_name) {
    HANDLE file = ::CreateFile(file_name.c_str(), GENERIC_READ, 0, 0,
        OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, 0);
    windows::random_access_handle h(service, file);
    long buff[1024];
    checksum = 0;
    size_t bytes = 0, at = 0;
    boost::system::error_code ec;
    while ((bytes = read_at(h, at, buffer(buff), ec)) > 0) {
        at += bytes;
        bytes /= sizeof(long);
        for (size_t i = 0; i < bytes; ++i)
            checksum += buff[i];
    }
    return boost::system::error_code(0, boost::system::generic_category());
}
void on_checksum(std::string file_name, boost::system::error_code) {
    std::cout << "checksum for " << file_name << " = " << checksum
              << std::endl;
}
int main(int argc, char* argv[]) {
    std::string fn = "readme.txt";
    async_op::new_()->add(boost::bind(compute_file_checksum, fn),
                          boost::bind(on_checksum, fn, _1), service);
    service.run();
}
Note that what I have shown you is just one possible way to invoke a method asynchronously. Instead of implementing a background thread, as I did, you could use an internal io_service instance and post the asynchronous method calls to it; this is left as an exercise for the reader. You could also extend the class to report the progress of an asynchronous operation (for example, as a percentage), in which case you could display that progress in a progress bar on the main thread.



