Boost.Asio C++ Network Programming translation (24): Boost.Asio Network Programming
Multithreading in the asynchronous server
I showed you a client and a server in Chapter 4. That asynchronous server is single-threaded; everything happens in main():
int main() {
    talk_to_client::ptr client = talk_to_client::new_();
    acc.async_accept(client->sock(), boost::bind(handle_accept, client, _1));
    service.run();
}
The beauty of asynchronous programming is how simple it is to go from single-threaded to multi-threaded. You can stay single-threaded until, say, your number of simultaneous clients exceeds 200. Then, to go from one thread to 100 threads, you would use a code snippet like the following:
boost::thread_group threads;
void listen_thread() {
    service.run();
}
void start_listen(int thread_count) {
    for ( int i = 0; i < thread_count; ++i)
        threads.create_thread(listen_thread);
}
int main(int argc, char* argv[]) {
    talk_to_client::ptr client = talk_to_client::new_();
    acc.async_accept(client->sock(), boost::bind(handle_accept, client, _1));
    start_listen(100);
    threads.join_all();
}
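Once several threads call service.run(), any one of them can pick up a completion handler. The following is a minimal, self-contained sketch of mine (not from the book) that makes this visible; it uses service.post(), covered later in this section, purely to queue some handlers, and the thread count and handler bodies are arbitrary:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

boost::asio::io_service service;

void say(int i) {
    // Output from different threads may interleave; that is fine for a demo.
    std::cout << "handler " << i << " ran on thread "
              << boost::this_thread::get_id() << std::endl;
}

void listen_thread() {
    service.run();
}

int main() {
    // Queue some handlers before any thread starts running the service.
    for ( int i = 0; i < 10; ++i)
        service.post(boost::bind(say, i));
    boost::thread_group threads;
    for ( int i = 0; i < 4; ++i)
        threads.create_thread(listen_thread);
    // Each run() returns once the handler queue is empty.
    threads.join_all();
}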
Of course, once you go multi-threaded, you need to think about thread safety. Even though you call async_* in thread A, its completion routine can be invoked in thread B (since thread B also calls service.run()). By itself this is not a problem. As long as you follow the logical flow, that is, from async_read() to on_read(), from on_read() to process_request, from process_request to async_write(), from async_write() to on_write(), and from on_write() back to async_read(), and no public function of your talk_to_client class is called from outside, then even though different functions may run on different threads, they will still be called sequentially, so no mutex is needed. This, however, implies that there is only one asynchronous operation pending per client. If at some point we have two pending asynchronous operations for a client, we do need a mutex, because the two operations could complete at roughly the same time and their completion handlers would then run on two different threads in the middle of each other. So thread safety, that is, a mutex, is required. In our asynchronous server we do in fact have two pending operations at the same time:
void do_read() {
    async_read(sock_, buffer(read_buffer_),
               MEM_FN2(read_complete,_1,_2), MEM_FN2(on_read,_1,_2));
    post_check_ping();
}
void post_check_ping() {
    timer_.expires_from_now(boost::posix_time::millisec(5000));
    timer_.async_wait( MEM_FN(on_check_ping));
}
While performing a read, we asynchronously wait both for the read to complete and for a timeout, so thread safety is required. My advice is: if you intend to go multi-threaded, make your classes thread-safe from the start. This usually does not hurt performance (and you can always add a switch in your configuration). Also, if you plan to go multi-threaded, do so from the very beginning; that way you discover possible problems as early as possible. Once you find a problem, the first thing to check is: does it also happen when running single-threaded? If yes, it is simple; just debug it. Otherwise, you probably forgot to lock (mutex) some function. Because our example needs to be thread-safe, I have modified talk_to_client to use a mutex. We also keep a list of client connections, which needs its own mutex since we access it from several places. Avoiding deadlocks and memory corruption is not that easy; here is how I had to modify the update_clients_changed() function:
void update_clients_changed() {
    array copy;
    { boost::recursive_mutex::scoped_lock lk(clients_cs);
      copy = clients; }
    for ( array::iterator b = copy.begin(), e = copy.end(); b != e; ++b)
        (*b)->set_clients_changed();
}
What you want to avoid is two mutexes being locked at the same time (which can lead to a deadlock). In our case, we never want clients_cs and a client's own cs_ mutex to be locked at the same time.
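To make that locking discipline concrete, here is a small sketch of mine showing how I would keep the two mutexes apart; it is a fragment that assumes the surrounding server code, the add_client helper name is my own, and the handler body is elided:
// Touch the shared list only while holding clients_cs, and never call into a
// client (which would take its own cs_) inside that scope.
void add_client(talk_to_client::ptr client) {
    boost::recursive_mutex::scoped_lock lk(clients_cs);
    clients.push_back(client);
}
// Inside talk_to_client, a completion handler locks only the client's own cs_:
void talk_to_client::on_check_ping() {
    boost::recursive_mutex::scoped_lock lk(cs_);
    // ... check the last ping time and possibly stop() this client ...
}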
Asynchronous operations
Boost.Asio also allows you to run any of your functions asynchronously. Just use the following code snippet:
void my_func() {
    ...
}
service.post(my_func);
This guarantees that my_func is called in the middle of one of the threads that call service.run(). You can also run a function asynchronously together with a completion handler that tells you when the function has finished. The pseudocode looks like this:
void on_complete() {
    ...
}
void my_func() {
    ...
    service.post(on_complete);
}
async_call(my_func);
There is no async_call function, so you have to create it yourself. Fortunately, it is not very complicated; see the following code snippet:
struct async_op : boost::enable_shared_from_this<async_op>, ... {
    typedef boost::function<void(boost::system::error_code)> completion_func;
    typedef boost::function<boost::system::error_code ()> op_func;
    struct operation { ... };
    void start() {
        { boost::recursive_mutex::scoped_lock lk(cs_);
          if ( started_) return; started_ = true; }
        boost::thread t( boost::bind(&async_op::run, this));
    }
    void add(op_func op, completion_func completion, io_service & service) {
        // keep ourselves alive while the background thread is running
        self_ = shared_from_this();
        boost::recursive_mutex::scoped_lock lk(cs_);
        ops_.push_back( operation(service, op, completion));
        if ( !started_) start();
    }
    void stop() {
        boost::recursive_mutex::scoped_lock lk(cs_);
        started_ = false; ops_.clear();
    }
private:
    boost::recursive_mutex cs_;
    std::vector<operation> ops_;
    bool started_;
    ptr self_;
};
The async_op class creates a background thread, which will run() all the asynchronous operations you add() to it. To keep things simple, each operation contains the following:
- A function to call asynchronously
- A completion handler to call when that function ends
- The io_service instance that will run the completion handler; this is also where you are notified of completion. See the following code:
struct async_op : boost::enable_shared_from_this<async_op>, private boost::noncopyable {
    struct operation {
        operation(io_service & service, op_func op, completion_func completion)
            : service(&service), op(op), completion(completion),
              work(new io_service::work(service)) {}
        operation() : service(0) {}
        io_service * service;
        op_func op;
        completion_func completion;
        typedef boost::shared_ptr<io_service::work> work_ptr;
        work_ptr work;
    };
    ...
};
These are held in the operation structure. Note that while an operation is pending, we construct an io_service::work instance in the operation's constructor to make sure service.run() does not finish before our asynchronous call completes (as long as the io_service::work instance is alive, service.run() considers that it has work to do). See the following code snippet:
struct async_op : ... {
    typedef boost::shared_ptr<async_op> ptr;
    static ptr new_() { return ptr(new async_op); }
    ...
    void run() {
        while ( true) {
            { boost::recursive_mutex::scoped_lock lk(cs_);
              if ( !started_) break; }
            boost::this_thread::sleep( boost::posix_time::millisec(10));
            operation cur;
            { boost::recursive_mutex::scoped_lock lk(cs_);
              if ( !ops_.empty()) {
                  cur = ops_[0]; ops_.erase( ops_.begin());
              }}
            if ( cur.service)
                cur.service->post(boost::bind(cur.completion, cur.op()));
        }
        self_.reset();
    }
};
The run() function is the background thread; it simply watches whether there is any work to do and, if there is, runs the asynchronous functions one by one. At the end of each call it posts the associated completion handler. To test it, we create a compute_file_checksum function that we will invoke asynchronously:
size_t checksum = 0;
boost::system::error_code compute_file_checksum(std::string file_name) {
    HANDLE file = ::CreateFile(file_name.c_str(), GENERIC_READ, 0, 0,
        OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, 0);
    windows::random_access_handle h(service, file);
    long buff[1024];
    checksum = 0;
    size_t bytes = 0, at = 0;
    boost::system::error_code ec;
    while ( (bytes = read_at(h, at, buffer(buff), ec)) > 0) {
        at += bytes;
        bytes /= sizeof(long);
        for ( size_t i = 0; i < bytes; ++i)
            checksum += buff[i];
    }
    return boost::system::error_code(0, boost::system::generic_category());
}
void on_checksum(std::string file_name, boost::system::error_code) {
    std::cout << "checksum for " << file_name << "=" << checksum << std::endl;
}
int main(int argc, char* argv[]) {
    std::string fn = "readme.txt";
    async_op::new_()->add( boost::bind(compute_file_checksum, fn),
                           boost::bind(on_checksum, fn, _1), service);
    service.run();
}
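As a side note, the io_service::work trick that keeps service.run() alive in the test above (through the work member of operation) can be observed in isolation. Here is a minimal, self-contained sketch of my own, independent of async_op; the sleep interval and messages are arbitrary:
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <iostream>

boost::asio::io_service service;
// While this object is alive, service.run() keeps waiting for handlers
// instead of returning as soon as its queue is empty.
boost::shared_ptr<boost::asio::io_service::work> work(
    new boost::asio::io_service::work(service));

void say_hello() { std::cout << "posted handler ran" << std::endl; }
void run_service() {
    service.run();   // without 'work', this would return immediately
    std::cout << "run() returned" << std::endl;
}

int main() {
    boost::thread t(run_service);
    boost::this_thread::sleep(boost::posix_time::millisec(100));
    service.post(say_hello);   // still executed: run() was kept alive
    work.reset();              // now run() may return once the queue drains
    t.join();
}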
Note that what I have shown you is just one possible way of calling a function asynchronously. Instead of implementing a background thread as I did, you could use an internal io_service instance and post the asynchronous function calls to that instance; this is left as an exercise for the reader. You could also extend the class to report the progress of an asynchronous operation (for example, as a percentage), in which case you could show the progress in the main thread with a progress bar.
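For that exercise, one possible shape (a rough sketch of mine, not the author's solution) is to replace the polling background thread with an internal io_service run by one dedicated thread and to post each added operation to it; it reuses op_func, completion_func, and io_service from the async_op code above, and the names async_call_service, internal_, and work_ are mine:
// Rough sketch: an internal io_service plus one thread replaces the polling
// loop of run(). Each add() queues a wrapper that runs the operation and then
// posts its completion back to the caller's service.
struct async_call_service {
    async_call_service()
        : work_(new io_service::work(internal_)),
          thread_(boost::bind(&async_call_service::run_internal, this)) {}
    ~async_call_service() { work_.reset(); thread_.join(); }
    void add(op_func op, completion_func completion, io_service & service) {
        // keep the caller's service alive until the completion is delivered,
        // just as operation::work does in the polling version
        boost::shared_ptr<io_service::work> keep(new io_service::work(service));
        internal_.post(boost::bind(&async_call_service::call, this,
                                   op, completion, &service, keep));
    }
private:
    void run_internal() { internal_.run(); }
    void call(op_func op, completion_func completion,
              io_service * service, boost::shared_ptr<io_service::work>) {
        service->post(boost::bind(completion, op()));
    }
    io_service internal_;
    boost::shared_ptr<io_service::work> work_;
    boost::thread thread_;
};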