(Repost) ACE efficient Proactor programming framework (1): ClientHandler

(From: http://blog.vckbase.com/bastet/archive/2005/08/14/10865.aspx)

1. Using the Proactor on Win32 gets you close to raw IOCP efficiency; because of the extra layer of encapsulation it is slightly slower than hand-written IOCP code.
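For context, here is a minimal sketch (not taken from the original post) of how such a handler is normally driven: an ACE_Asynch_Acceptor creates one ClientHandler per accepted connection, and the proactor event loop dispatches the completion callbacks. The header name and listen port are assumptions.

#include "ace/Asynch_Acceptor.h"
#include "ace/INET_Addr.h"
#include "ace/Proactor.h"
#include "ClientHandler.h"   // the handler described below; header name assumed

int main(int, char *[])
{
    // One ClientHandler is created per accepted connection. For the pooled
    // design used later, ACE_Asynch_Acceptor::make_handler() could be
    // overridden to hand out handlers from the pool instead of using new.
    ACE_Asynch_Acceptor<ClientHandler> acceptor;
    if (acceptor.open(ACE_INET_Addr(5150)) == -1)   // arbitrary listen port
        return 1;

    // On Win32 the default proactor dispatches completions from an IOCP.
    ACE_Proactor::instance()->proactor_run_event_loop();
    return 0;
}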

The handler for a single client connection generally looks like this:
// Handles one client connection
class ClientHandler : public ACE_Service_Handler
{
public:
    /** Constructor.
     */
    ClientHandler(unsigned int client_recv_buf_size = SERVER_CLIENT_RECEIVE_BUF_SIZE)
        : _read_msg_block(client_recv_buf_size), _io_count(0)
    {
    }

    ~ClientHandler() {}

    /**
     * Initialization. ClientHandler objects may come from a memory pool
     * rather than from new, so initialization is done here instead of in
     * the constructor.
     */
    void init();

    /** Cleanup, for the same reason (the object may go back to the pool). */
    void fini();

    // Check whether the connection has timed out.
    void check_time_out(time_t cur_time);

public:

    /** Called after a client has successfully connected to the server.
     *
     * \param handle         socket handle
     * \param message_block  data read during the initial accept (unused)
     */
    // Called by the acceptor!!!
    virtual void open(ACE_HANDLE handle, ACE_Message_Block &message_block);

    /** Handle completion of an asynchronous read.
     *
     * \param result  result of the read operation
     */
    virtual void handle_read_stream(const ACE_Asynch_Read_Stream::Result &result);

    /** Handle completion of an asynchronous write.
     *
     * \param result  result of the write operation
     */
    virtual void handle_write_stream(const ACE_Asynch_Write_Stream::Result &result);

private:

    /** Issue an asynchronous read request.
     *
     * \return 0 on success, -1 on failure
     */
    int initiate_read_stream(void);

    /** Issue an asynchronous write request.
     *
     * \param mb      data to send
     * \param nbytes  number of bytes to send
     * \return 0 on success, -1 on failure
     */
    int initiate_write_stream(ACE_Message_Block &mb, size_t nbytes);

    /** Process the data currently in _read_msg_block.
     *
     * \return true to keep reading, false to stop
     */
    bool handle_received_data();

    /**
     * \return whether the handler can be destroyed. A reference count is
     *         used: +1 for every outstanding I/O, -1 for every completed I/O.
     */
    int check_destroy();

    // Asynchronous read stream
    ACE_Asynch_Read_Stream _rs;

    // Asynchronous write stream
    ACE_Asynch_Write_Stream _ws;

    // Receive buffer. Only one read is outstanding at a time; I never saw the
    // point of posting multiple reads, and doing so raises a lot of extra issues.
    ACE_Message_Block _read_msg_block;

    // Socket handle. Not needed here, because the base class already stores one.
    // ACE_HANDLE _handle;

    // A lock; the client state needs protecting. Note that it must be an
    // ACE_Recursive_Thread_Mutex, not an ACE_Thread_Mutex, because the lock is
    // re-entered. On Win32 it is built on EnterCriticalSection, which is fast.
    ACE_Recursive_Thread_Mutex _lock;

    // Number of outstanding I/O operations; effectively a reference count.
    // When it drops to 0 the connection is closed.
    long _io_count;

    // Time of the last network I/O; used to close connections that have been
    // idle for too long.
    time_t _last_net_io;

private:

    // For a different send model you would only need one or two outstanding
    // writes. In practice memory is usually plentiful, so it is not worth
    // worrying about.
    // ACE_Message_Block _send_msg_blocks[2];
    // ACE_Message_Block &_sending_msg_block;
    // ACE_Message_Block &_idle_msg_block;

public:
    // TODO: move to private and use a friend class!!!

    // Intrusive linked list, purely for efficiency. An STL list is not used
    // because I have no node_allocator available so far, so it would be slower.
    ClientHandler *_next;

    ClientHandler *next() { return _next; }

    void next(ClientHandler *obj) { _next = obj; }

};
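The intrusive _next pointer above suggests a simple free-list pool. The original post only shows ClientManager::get_instance().release_client_handle() being called, so the following is merely a guess at what such a pool might look like; acquire_client_handle() and the locking policy are assumptions.

class ClientManager
{
public:
    static ClientManager &get_instance()
    {
        static ClientManager instance;
        return instance;
    }

    // Hand out a handler from the free list, or allocate a new one.
    ClientHandler *acquire_client_handle()
    {
        ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
        ClientHandler *handler = _free_list;
        if (handler != 0)
            _free_list = handler->next();
        else
            handler = new ClientHandler();
        handler->init();
        return handler;
    }

    // Put a finished handler back on the free list.
    void release_client_handle(ClientHandler *handler)
    {
        handler->fini();
        ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
        handler->next(_free_list);
        _free_list = handler;
    }

private:
    ClientManager() : _free_list(0) {}

    ClientHandler *_free_list;        // singly linked via ClientHandler::_next
    ACE_Recursive_Thread_Mutex _lock;
};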

// Below is the implementation. It is a bit messy in places and some of the locking is not strictly correct; I have been too lazy to clean it up, and it can wait until it actually causes a bug or a bottleneck.

void ClientHandler::handle_read_stream(const ACE_Asynch_Read_Stream::Result &result)
{
    _last_net_io = ACE_OS::time(NULL);
    int bytes_recved = result.bytes_transferred();
    if (result.success() && (bytes_recved != 0))
    {
        // ACE_DEBUG((LM_DEBUG, "Read completed: %d\n", bytes_recved));

        // Process the received data
        if (handle_received_data() == true)
        {
            // ACE_DEBUG((LM_DEBUG, "Go on reading...\n"));

            // Move any leftover partial packet to the front of the buffer
            // before posting the next read (handles "sticky" packets).
            _read_msg_block.crunch();
            initiate_read_stream();
        }
    }

    // ACE_Atomic_Op is deliberately not used here, because a lock is usually
    // needed anyway. If you do not need the lock, use ACE_Atomic_Op to get
    // the best efficiency.
    {
        ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
        _io_count--;
    }
    check_destroy();
}
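For reference, this is roughly what the ACE_Atomic_Op variant mentioned in the comment above would look like (a sketch, not part of the original code): the counter type changes and the guarded block around the increment/decrement disappears.

#include "ace/Atomic_Op.h"
#include "ace/Thread_Mutex.h"

// Would replace "long _io_count;" in the class. The <ACE_Thread_Mutex, long>
// specialization maps to the Win32 Interlocked* primitives, so no explicit
// lock is taken for the increment/decrement.
ACE_Atomic_Op<ACE_Thread_Mutex, long> io_count(0);

void on_io_issued()    { ++io_count; }                    // was: guard + _io_count++
void on_io_completed() { --io_count; }                    // was: guard + _io_count--
bool can_destroy()     { return io_count.value() == 0; }  // checked in check_destroy()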

void ClientHandler::init()
{
    // Initialize the state here, not in the constructor, because the object
    // may be reused from the memory pool.
    _last_net_io = ACE_OS::time(NULL);
    _read_msg_block.rd_ptr(_read_msg_block.base());
    _read_msg_block.wr_ptr(_read_msg_block.base());
    this->handle(ACE_INVALID_HANDLE);
}

bool ClientHandler::handle_received_data()
{
    // ...... handle the application data yourself
    return true;
}
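The post leaves handle_received_data() to the reader. As an illustration only, here is one way the stub above could be filled in for a protocol with a 4-byte length prefix per packet; the framing format is an assumption, and the sketch relies on the caller's crunch() to keep any trailing partial packet (ACE_UINT32 comes from ace/Basic_Types.h, ACE_OS::memcpy from ace/OS_NS_string.h).

bool ClientHandler::handle_received_data()
{
    for (;;)
    {
        size_t available = _read_msg_block.length();     // bytes between rd_ptr and wr_ptr
        if (available < sizeof(ACE_UINT32))
            break;                                       // length prefix not complete yet

        ACE_UINT32 packet_len = 0;
        ACE_OS::memcpy(&packet_len, _read_msg_block.rd_ptr(), sizeof(packet_len));
        if (packet_len < sizeof(ACE_UINT32) || packet_len > _read_msg_block.size())
            return false;                                // malformed length: drop the connection

        if (available < packet_len)
            break;                                       // wait for the rest of the packet

        // Dispatch the complete packet [rd_ptr(), rd_ptr() + packet_len) here
        // (application specific), then consume it.
        _read_msg_block.rd_ptr(packet_len);
    }
    // Any leftover partial packet is moved to the front by crunch() in the caller.
    return true;
}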

// =====================================================================
void ClientHandler::handle_write_stream(const ACE_Asynch_Write_Stream::Result &result)
{
    // The send completed; release the message block.
    // It must only be released once, otherwise it would be freed twice.
    // result.message_block().release();
    MsgBlockManager::get_instance().release_msg_block(&result.message_block());

    {
        ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
        _io_count--;
    }
    check_destroy();
}

// bool ClientHandler::destroy()
// {
//     FUNC_ENTER;
//     ClientManager::get_instance().release_client_handle(this);
//     FUNC_LEAVE;
//     return false;
// }

int ClientHandler::initiate_read_stream(void)
{
    ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);

    // Read into the remaining space only, so a partial ("sticky") packet
    // already in the buffer is preserved.
    if (_rs.read(_read_msg_block, _read_msg_block.space()) == -1)
    {
        ACE_ERROR_RETURN((LM_ERROR, "%p\n", "ACE_Asynch_Read_Stream::read"), -1);
    }
    _io_count++;
    return 0;
}

/** Issue an asynchronous write request.
 *
 * \param mb      data to send
 * \param nbytes  number of bytes to send
 * \return 0 on success, -1 on failure
 */
int ClientHandler::initiate_write_stream(ACE_Message_Block &mb, size_t nbytes)
{
    ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
    if (_ws.write(mb, nbytes) == -1)
    {
        mb.release();
        ACE_ERROR_RETURN((LM_ERROR, "%p\n", "ACE_Asynch_Write_Stream::write"), -1);
    }
    _io_count++;
    return 0;
}
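To show how the write path fits together: a hypothetical send helper (not in the original) would obtain a block from MsgBlockManager, copy the payload, and call initiate_write_stream(); on success the block is released by handle_write_stream() when the write completes, on failure initiate_write_stream() releases it itself. Only release_msg_block() appears in the original code, so get_msg_block() is an assumed name.

int ClientHandler::send(const char *data, size_t len)
{
    // Assumed allocator call; the original only shows release_msg_block().
    ACE_Message_Block *mb = MsgBlockManager::get_instance().get_msg_block(len);
    if (mb == 0)
        return -1;

    mb->copy(data, len);                       // copy the payload into the block
    return initiate_write_stream(*mb, mb->length());
}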

void ClientHandler::open(ACE_HANDLE handle, ACE_Message_Block &message_block)
{
    // FUNC_ENTER;
    _last_net_io = ACE_OS::time(NULL);
    _io_count = 0;
    if (_ws.open(*this, this->handle()) == -1)
    {
        ACE_ERROR((LM_ERROR, "%p\n", "ACE_Asynch_Write_Stream::open"));
    }
    else if (_rs.open(*this, this->handle()) == -1)
    {
        ACE_ERROR((LM_ERROR, "%p\n", "ACE_Asynch_Read_Stream::open"));
    }
    else
    {
        initiate_read_stream();
    }

    check_destroy();
    // FUNC_LEAVE;
}

void ClientHandler::fini()
{
}

void ClientHandler::check_time_out(time_t cur_time)
{
    // ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
    // ACE_DEBUG((LM_DEBUG, "cur_time is %u, last io is %u\n", cur_time, _last_net_io));

    // Skip handlers whose socket is already closed.
    if (this->handle() == ACE_INVALID_HANDLE)
        return;
    if (cur_time - _last_net_io > CLIENT_TIME_OUT_SECONDS)
    {
        ACE_OS::shutdown(this->handle(), SD_BOTH);
        ACE_OS::closesocket(this->handle());
        this->handle(ACE_INVALID_HANDLE);
    }
}
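Something has to call check_time_out() periodically, and the post does not show that part. One possibility (an assumption, not the author's code) is to schedule a repeating proactor timer on each handler and forward it from ACE_Handler::handle_time_out(); the 30-second interval is arbitrary, and the timer would have to be cancelled with ACE_Proactor::cancel_timer() before the handler goes back to the pool.

// In ClientHandler::open(), after the streams are opened successfully:
//     this->proactor()->schedule_timer(*this, 0, ACE_Time_Value(30), ACE_Time_Value(30));

// Override inherited from ACE_Handler:
void ClientHandler::handle_time_out(const ACE_Time_Value & /*tv*/, const void * /*act*/)
{
    this->check_time_out(ACE_OS::time(NULL));
}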

int ClientHandler::check_destroy()
{
    {
        ACE_Guard<ACE_Recursive_Thread_Mutex> locker(_lock);
        if (_io_count > 0)
            return 1;
    }
    ACE_OS::shutdown(this->handle(), SD_BOTH);
    ACE_OS::closesocket(this->handle());
    this->handle(ACE_INVALID_HANDLE);

    // Return the handler to the memory pool.
    ClientManager::get_instance().release_client_handle(this);
    // delete this;
    return 0;
}
