Multi-threaded concurrent processing with the Boost.Asio network library: scheduling and thread safety in the multi-threaded model.

1. How to implement multi-threading:

Multi-threading is simply a matter of having several threads call io_service::run() at the same time:

for (int i = 0; i != m_nThreads; ++i)
{
    boost::shared_ptr<boost::thread> pTh(new boost::thread(
        boost::bind(&boost::asio::io_service::run, &m_ioService)));
    m_listThread.push_back(pTh);
}
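For reference, here is a minimal, self-contained sketch of this thread-pool pattern (my addition, not part of the original article). The io_service::work object and the fixed thread count of 4 are assumptions for illustration; work keeps io_service::run() from returning while no asynchronous operations are pending (in the article's server, the outstanding async_accept plays that role):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <vector>

int main()
{
    boost::asio::io_service ioService;
    // Keeps run() busy even when no handlers are queued yet.
    boost::asio::io_service::work work(ioService);

    std::vector<boost::shared_ptr<boost::thread> > threads;
    for (int i = 0; i != 4; ++i)   // 4 is an arbitrary example thread count
    {
        boost::shared_ptr<boost::thread> pTh(new boost::thread(
            boost::bind(&boost::asio::io_service::run, &ioService)));
        threads.push_back(pTh);
    }

    // ... post asynchronous work here ...

    ioService.stop();
    for (std::size_t i = 0; i != threads.size(); ++i)
        threads[i]->join();
    return 0;
}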

2. Multi-threaded scheduling:

ASIO specifies that completion handlers may only be invoked from a thread that is calling io_service::run().

Note: the completion handler is the handler you register with async_accept, async_write, and so on; it is essentially a callback.

Single Thread:

If only one thread calls io_service::run(), then by ASIO's rules the completion handlers can only execute in that thread. In other words, all of your code runs in the same thread, so access to your variables is safe.

Multithreading:

If multiple threads call io_service::run() at the same time for concurrent processing, ASIO treats these threads as equals; none of them is special. When a request you posted, such as an async_write, completes, ASIO wakes one of the threads that is calling io_service::run() and invokes the completion handler (the handler registered with async_write) in that thread. If that handler takes a long time, and meanwhile another async_write request completes, ASIO does not wait for your handler to finish; it invokes the second completion handler in another thread that is calling io_service::run(). In other words, the completion handlers you register may be invoked in multiple threads at the same time.

Of course, you can use boost::asio::io_service::strand to ensure that only one completion handler runs at a time, as in the following code:

socket_.async_read_some(boost::asio::buffer(buffer_),
    strand_.wrap(
        boost::bind(&connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));

...

boost::asio::io_service::strand strand_;

When the async_read_some completes and handle_read is dispatched, it must wait for any other handle_read invocation (triggered by an earlier async_read_some) to finish before it can execute.
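For completeness, here is a minimal sketch (my addition, loosely following the Boost.Asio examples) of how such a connection class might declare and initialize the strand; the class name, buffer size, and member names are illustrative assumptions:

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>

class connection
    : public boost::enable_shared_from_this<connection>
{
public:
    explicit connection(boost::asio::io_service& io_service)
        : socket_(io_service)
        , strand_(io_service)   // the strand is bound to the same io_service
    {
    }

    void start()
    {
        // Handlers wrapped by the same strand never run concurrently,
        // even when io_service::run() is called from several threads.
        socket_.async_read_some(boost::asio::buffer(buffer_),
            strand_.wrap(
                boost::bind(&connection::handle_read, shared_from_this(),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred)));
    }

private:
    void handle_read(const boost::system::error_code& /*error*/,
                     std::size_t /*bytes_transferred*/)
    {
        // Process the data, then typically issue the next async_read_some here.
    }

    boost::asio::ip::tcp::socket socket_;
    boost::asio::io_service::strand strand_;
    boost::array<char, 8192> buffer_;
};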

Another important issue with multi-threaded calls is ordering. For example, if you post several async_write operations in a short time, the completion handlers are not invoked in the order in which you posted the async_write calls. The first completion handler ASIO invokes may correspond to the second async_write, or to the third. The same holds with a strand: the strand only guarantees that at most one completion handler runs at a time; it does not guarantee their order.
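If you do need strict ordering, one common approach (not from the original article) is to queue outgoing messages and keep only one async_write in flight, starting the next write from the previous write's completion handler. A minimal sketch, with illustrative class and member names:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <deque>
#include <string>

class ordered_writer
{
public:
    explicit ordered_writer(boost::asio::ip::tcp::socket& socket)
        : socket_(socket)
    {
    }

    // Call from a single thread or strand, or add your own synchronization.
    void send(const std::string& msg)
    {
        bool writing = !queue_.empty();
        queue_.push_back(msg);
        if (!writing)          // no write in flight: start one
            do_write();
    }

private:
    void do_write()
    {
        // The front element stays in the deque until the write completes,
        // so the buffer remains valid for the whole operation.
        boost::asio::async_write(socket_, boost::asio::buffer(queue_.front()),
            boost::bind(&ordered_writer::handle_write, this,
                boost::asio::placeholders::error));
    }

    void handle_write(const boost::system::error_code& error)
    {
        if (error)
            return;
        queue_.pop_front();
        if (!queue_.empty())   // start the next queued write, preserving order
            do_write();
    }

    boost::asio::ip::tcp::socket& socket_;
    std::deque<std::string> queue_;
};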

Code test:

Server:

After compiling the code below, run it from a cmd prompt, passing the parameters <IP> <port> <threads>.

For example: Test.exe 0.0.0.0 3005 10

The client is the Telnet client that ships with Windows.

At a cmd prompt:

telnet 127.0.0.1 3005

How it works: after a client connects, the server calls boost::asio::async_write 100 times in quick succession to send data to the client, and the completion handler prints the call sequence number and the thread ID.

Core code:

void Start()
{
    for (int i = 0; i != 100; ++i)
    {
        boost::shared_ptr<string> pStr(new string);
        *pStr = boost::lexical_cast<string>(boost::this_thread::get_id());
        *pStr += "\r\n";
        boost::asio::async_write(m_nSocket, boost::asio::buffer(*pStr),
            boost::bind(&CMyTcpConnection::HandleWrite, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred,
                pStr, i));
    }
}

If you remove the line boost::mutex::scoped_lock lk(m_ioMutex);, the effect is even more obvious, because output from different threads can then interleave.

void HandleWrite(const boost::system::error_code& error
    , std::size_t bytes_transferred
    , boost::shared_ptr<string> pStr, int nIndex)
{
    if (!error)
    {
        boost::mutex::scoped_lock lk(m_ioMutex);
        cout << "Send sequence number = " << nIndex << ", thread id = " << boost::this_thread::get_id() << endl;
    }
    else
    {
        cout << "Connection disconnected" << endl;
    }
}

Full code:

#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/asio.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/noncopyable.hpp>
#include <cstdio>
#include <vector>
#include <string>
#include <iostream>


using std::cout;
using std::endl;
using std::string;
using boost::asio::ip::tcp;


class CMyTcpConnection
    : public boost::enable_shared_from_this<CMyTcpConnection>
{
public:
    CMyTcpConnection(boost::asio::io_service& ser)
        : m_nSocket(ser)
    {
    }
    typedef boost::shared_ptr<CMyTcpConnection> CPMyTcpCon;

    static CPMyTcpCon CreateNew(boost::asio::io_service& io_service)
    {
        return CPMyTcpCon(new CMyTcpConnection(io_service));
    }

public:
    // Post 100 async_write requests back to back.
    void Start()
    {
        for (int i = 0; i != 100; ++i)
        {
            boost::shared_ptr<string> pStr(new string);
            *pStr = boost::lexical_cast<string>(boost::this_thread::get_id());
            *pStr += "\r\n";
            boost::asio::async_write(m_nSocket, boost::asio::buffer(*pStr),
                boost::bind(&CMyTcpConnection::HandleWrite, shared_from_this(),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred,
                    pStr, i));
        }
    }
    tcp::socket& socket()
    {
        return m_nSocket;
    }
private:
    void HandleWrite(const boost::system::error_code& error
        , std::size_t bytes_transferred
        , boost::shared_ptr<string> pStr, int nIndex)
    {
        if (!error)
        {
            boost::mutex::scoped_lock lk(m_ioMutex);
            cout << "Send sequence number = " << nIndex << ", thread id = " << boost::this_thread::get_id() << endl;
        }
        else
        {
            cout << "Connection disconnected" << endl;
        }
    }
private:
    tcp::socket m_nSocket;
    boost::mutex m_ioMutex;
};


class CMyService
    : private boost::noncopyable
{
public:
    CMyService(string const& strIP, string const& strPort, int nThreads)
        : m_tcpAcceptor(m_ioService)
        , m_nThreads(nThreads)
    {
        // Resolve the listen address, then open, bind and listen.
        tcp::resolver resolver(m_ioService);
        tcp::resolver::query query(strIP, strPort);
        tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
        boost::asio::ip::tcp::endpoint endpoint = *endpoint_iterator;
        m_tcpAcceptor.open(endpoint.protocol());
        m_tcpAcceptor.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
        m_tcpAcceptor.bind(endpoint);
        m_tcpAcceptor.listen();

        StartAccept();
    }
    ~CMyService() { Stop(); }
public:
    void Stop()
    {
        m_ioService.stop();
        for (std::vector<boost::shared_ptr<boost::thread>>::const_iterator it = m_listThread.cbegin();
            it != m_listThread.cend(); ++it)
        {
            (*it)->join();
        }
    }
    void Start()
    {
        // Run the same io_service from m_nThreads threads.
        for (int i = 0; i != m_nThreads; ++i)
        {
            boost::shared_ptr<boost::thread> pTh(new boost::thread(
                boost::bind(&boost::asio::io_service::run, &m_ioService)));
            m_listThread.push_back(pTh);
        }
    }
private:
    void HandleAccept(const boost::system::error_code& error
        , boost::shared_ptr<CMyTcpConnection> newConnect)
    {
        if (!error)
        {
            newConnect->Start();
        }
        StartAccept();
    }

    void StartAccept()
    {
        CMyTcpConnection::CPMyTcpCon newConnect = CMyTcpConnection::CreateNew(m_tcpAcceptor.get_io_service());
        m_tcpAcceptor.async_accept(newConnect->socket(),
            boost::bind(&CMyService::HandleAccept, this,
                boost::asio::placeholders::error, newConnect));
    }
private:
    boost::asio::io_service m_ioService;
    boost::asio::ip::tcp::acceptor m_tcpAcceptor;
    std::vector<boost::shared_ptr<boost::thread>> m_listThread;
    std::size_t m_nThreads;
};


int main(int argc, char* argv[])
{
    try
    {
        if (argc != 4)
        {
            std::cerr << "Usage: <IP> <port> <threads>\n";
            return 1;
        }
        int nThreads = boost::lexical_cast<int>(argv[3]);
        CMyService mySer(argv[1], argv[2], nThreads);
        mySer.Start();
        getchar();
        mySer.Stop();
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}

The test results are consistent with the theory above: the send sequence numbers come out in a scrambled order, and the thread IDs differ.

How many threads to use with ASIO:

As a server, ignoring power-saving concerns, you want to use the CPU as fully as possible. In other words, to keep the CPU busy, the number of threads should be greater than or equal to the number of CPU cores in the machine (one thread per core). There is no single optimal value; a commonly cited figure for this scenario is the number of CPU cores * 2 + 2, but that will not necessarily suit your situation.
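As an illustration (my addition, not from the original article), the thread count can be derived at run time with boost::thread::hardware_concurrency(), which reports the number of hardware threads and may return 0 if the value is unknown:

#include <boost/thread.hpp>
#include <iostream>

int main()
{
    // Number of hardware threads; 0 means "unknown", so fall back to a default.
    unsigned int nCores = boost::thread::hardware_concurrency();
    if (nCores == 0)
        nCores = 2;

    // Rule-of-thumb pool size mentioned above; tune it for your own workload.
    unsigned int nThreads = nCores * 2 + 2;
    std::cout << "cores = " << nCores << ", threads = " << nThreads << std::endl;
    return 0;
}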

ASIO's implementation on systems such as Windows XP:

On Windows, ASIO uses an I/O completion port. If the requests you have posted have not yet completed, the threads are blocked waiting for GetQueuedCompletionStatus to return, i.e. waiting on a kernel object, and they consume no CPU time while waiting.
