[Automatic Message Distribution in C++ (4)] Using IDL to Build a Chat Server

Contents
    • 1. Scenario Settings
    • 2. Server Module Design
    • 3. Summary

The previous post explained how to implement the IDL parser. This article uses that parser to build a chat server program, which serves as a test of the parser's functionality. The network layer uses the ffown library described in an earlier post. We only need to define the chat.idl file, and the IDL parser automatically generates the message dispatching code, saving us the trouble of hand-writing message parsing and message-type dispatch every time.

IDL parser introduction: http://www.cnblogs.com/zhiranok/archive/2012/02/23/json_to_cpp_struct_idl_parser_second.html

ffown socket library: http://www.cnblogs.com/zhiranok/archive/2011/12/24/cpp_epoll_socket.html

1. Scenario Settings

1>. Login. When a user logs on, the server checks whether that UID is already online; if so, an error is returned. (Since there is no password authentication, registration is effectively "first come, first served": whoever logs on with a UID first gets it.) After logging on, the user receives the list of online user IDs, and a notification that the user has come online is pushed to all other online users.

2>. Logout. When a user logs out, the server deletes the user's information, closes the socket, and broadcasts the logout to all online users.

3>. Chat. A user can send a chat message to a single online user, to several users at once, or broadcast it to all users.

2. Server Module Design

 

1>. Network Layer

Developing network programs requires a stable and efficient network library as a foundation. Popular C++ network libraries currently include:

A. Boost ASIO

B. libevent

C. UNIX socket API

I strongly recommend Boost.Asio. I have used it to develop many server programs over the past two years, read its source code, and picked up some valuable asynchronous I/O design techniques from it. Some people online complain that Asio is too big and too bloated; I disagree. Although Asio adds a lot of encapsulation and macros for cross-platform support, its wrapping of the socket itself is fairly thin. The cleverest part of Asio is that all I/O models are built on io_service, which makes it very easy to run the network layer on multiple threads. For an analysis of Asio, see my earlier posts. Another advantage of Asio is that you can fully enjoy the convenience of the other Boost libraries (lambda, shared_ptr, thread, and so on), which raises productivity immediately. I do think using Asio well requires some grounding in its patterns. I have also wrapped a network layer on top of Asio; for details, see:

http://www.cnblogs.com/zhiranok/archive/2011/12/18/ffasio.html
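
To make the io_service point concrete, here is a minimal, self-contained sketch (written for this post, not taken from the chat server or my ffasio wrapper, and using only standard Boost.Asio and Boost.Thread calls) of driving one io_service with a small pool of worker threads:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <cstdio>

// Stand-in for a completion handler; in a real server this would be an
// async_accept/async_read/async_write callback.
static void on_task(int id)
{
    std::printf("handler %d executed on some pool thread\n", id);
}

int main()
{
    boost::asio::io_service io;

    // Queue a few handlers on the io_service.
    for (int i = 0; i < 8; ++i)
        io.post(boost::bind(&on_task, i));

    // Drive the same io_service from four threads: each handler runs on
    // whichever thread is free, which is why multithreading the network
    // layer costs almost nothing extra.
    boost::thread_group workers;
    for (int i = 0; i < 4; ++i)
        workers.create_thread(boost::bind(&boost::asio::io_service::run, &io));

    workers.join_all();   // run() returns in each thread once the queue drains
    return 0;
}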

Of course, engineers who like working close to the metal love building their own socket communication library, which is understandable (even if it reinvents the wheel a bit): an individual or team then has full control over the quality of the library, problems are easy to track down, and the workload is not huge. We did hit one problem with Asio: in version 1.39, asynchronous connect had a bug where, with very small probability under a high-concurrency test, the completion callback was never invoked; upgrading to 1.44 resolved it. Personally, I believe that for a team, a mature network framework is a cornerstone of success.

In this example, the network-layer wire protocol is very simple: the length of the message body (as an ASCII string), then \r\n, then the message body itself. It can be tested directly with telnet.
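
As a rough illustration of that framing (a sketch written against the description above, not the ffown implementation, which may handle the details differently), a frame typed into telnet might look like 17\r\n followed by 17 bytes of body, and could be parsed like this:

#include <cstddef>
#include <cstdlib>
#include <string>

// Returns true and fills body_ once buf_ holds at least one complete frame;
// consumed_ reports how many bytes of buf_ the frame used. Real code would
// also validate the length field and handle oversized or malformed headers.
bool parse_frame(const std::string& buf_, std::string& body_, std::size_t& consumed_)
{
    std::string::size_type pos = buf_.find("\r\n");
    if (pos == std::string::npos)
        return false;                                   // length header not complete yet

    std::size_t len = static_cast<std::size_t>(std::atoi(buf_.substr(0, pos).c_str()));
    if (buf_.size() < pos + 2 + len)
        return false;                                   // body not fully received yet

    body_     = buf_.substr(pos + 2, len);
    consumed_ = pos + 2 + len;
    return true;
}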

2>. Message Dispatching Layer

I have used Google Protocol Buffers and Facebook Thrift. Protocol Buffers only handles message serialization; it has no message dispatching facility. Thrift is really an RPC framework that generates client code or a non-blocking server skeleton. Our real-time online game back-end programs, however, are all message-driven, so it makes sense to develop something protobuf-like of our own: a message IDL file defines the request and response message formats, and the IDL file doubles as interface documentation shared with the client. The IDL parser then reads the IDL and automatically generates the message dispatching code.

For example, for the chat server I defined chat.idl, and the message dispatching framework code is generated with:

idl_generator.py idl/chat.idl include/msg_def.h

The generated code file is msg_def.h.

The IDL file is defined as:

struct login_req_t
{
    uint32 uid;
};

struct chat_to_some_req_t
{
    array<uint32> dest_uids;
    string content;
};

struct user_login_ret_t
{
    uint32 uid;
};

struct user_logout_ret_t
{
    uint32 uid;
};

struct online_list_ret_t
{
    array<uint32> uids;
};

struct chat_content_ret_t
{
    uint32 from_uid;
    string content;
};
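
The generated msg_def.h is not reproduced here. Purely to illustrate the kind of boilerplate the generator writes for you, a hand-rolled sketch of the same idea might look like the following (the names msg_handler_i / msg_dispatcher_t and the command-plus-body split are my own assumptions for illustration, not the actual generated interface):

#include <string>

// One request struct per IDL definition; the JSON (de)serialization is
// stubbed here, the real generated code emits full encode/decode bodies.
struct login_req_t
{
    unsigned int uid;
    bool decode_json(const std::string& /*json_*/) { uid = 0; return true; }
};

// The service implements one typed handle() per request message.
class msg_handler_i
{
public:
    virtual ~msg_handler_i() {}
    virtual int handle(const login_req_t& req_) = 0;
    // ... one handle() per request message defined in chat.idl ...
};

// The dispatcher turns "message name + JSON body" into a typed handler call;
// this is exactly the code one would otherwise rewrite by hand for every message.
class msg_dispatcher_t
{
public:
    explicit msg_dispatcher_t(msg_handler_i& h_) : m_handler(h_) {}

    int dispatch(const std::string& cmd_, const std::string& body_)
    {
        if (cmd_ == "login_req_t")
        {
            login_req_t req;
            if (req.decode_json(body_))
                return m_handler.handle(req);
        }
        // ... one branch per message ...
        return -1; // unknown message -> caller reports "msg not supported"
    }

private:
    msg_handler_i& m_handler;
};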

 

3>. Domain Logic Layer

The domain logic should mirror the model built during requirements analysis, in the spirit of domain-driven design (DDD). So try not to mix in too much network-layer or message-parsing code. My approach is to let the IDL parser handle message parsing and a mature framework handle the network layer, so we only need to focus on getting the logic layer right and testing it.

This chat server only needs to exercise the IDL parser, so it does not integrate many other features.

The main code snippets are:

int chat_service_t::handle_broken(socket_ptr_t sock_)
{
    uid_t* user = sock_->get_data<uid_t>();
    if (NULL == user)
    {
        delete sock_;
        return 0;
    }

    lock_guard_t lock(m_mutex);
    m_clients.erase(*user);

    user_logout_ret_t ret_msg;
    ret_msg.uid = *user;
    string json_msg = ret_msg.encode_json();
    delete sock_;

    map<uid_t, socket_ptr_t>::iterator it = m_clients.begin();
    for (; it != m_clients.end(); ++it)
    {
        it->second->async_send(json_msg);
    }
    return 0;
}


int chat_service_t::handle_msg(const message_t& msg_, socket_ptr_t sock_)
{
    try
    {
        m_msg_dispather.dispath(msg_.get_body(), sock_);
    }
    catch (exception& e)
    {
        sock_->async_send("msg not supported!");
        logtrace((chat_service, "chat_service_t::handle_msg exception<%s>", e.what()));
        sock_->close();
    }
    return 0;
}

int chat_service_t::handle(shared_ptr_t<login_req_t> req_, socket_ptr_t sock_)
{
    logtrace((chat_service, "chat_service_t::handle login_req_t uid<%u>", req_->uid));
    lock_guard_t lock(m_mutex);

    pair<map<uid_t, socket_ptr_t>::iterator, bool> ret = m_clients.insert(make_pair(req_->uid, sock_));
    if (false == ret.second)
    {
        sock_->close();
        return -1;
    }

    uid_t* user = new uid_t(req_->uid);
    sock_->set_data(user);

    user_login_ret_t login_ret;
    login_ret.uid = req_->uid;
    string login_json = login_ret.encode_json();

    online_list_ret_t online_list;

    map<uid_t, socket_ptr_t>::iterator it = m_clients.begin();
    for (; it != m_clients.end(); ++it)
    {
        online_list.uids.push_back(it->first);
        it->second->async_send(login_json);
    }

    sock_->async_send(online_list.encode_json());
    return 0;
}

int chat_service_t::handle(shared_ptr_t<chat_to_some_req_t> req_, socket_ptr_t sock_)
{
    lock_guard_t lock(m_mutex);

    chat_content_ret_t content_ret;
    content_ret.from_uid = *sock_->get_data<uid_t>();
    content_ret.content = req_->content;

    string json_msg = content_ret.encode_json();
    for (size_t i = 0; i < req_->dest_uids.size(); ++i)
    {
        m_clients[req_->dest_uids[i]]->async_send(json_msg);
    }
    return 0;
}

For the complete code, see:

https://ffown.googlecode.com/svn/trunk/example/chat_server

3. Summary

1. The network layer uses ffown. There is currently no socket management module, so there are no heartbeats.

2. Logging is done directly with printf; a log module should be used to format and output logs.

3. The IDL message dispatching framework currently supports a JSON string protocol. A binary protocol can be added later, and the network layer should also gain compression for transmission.

4. Since this is just a sample program, I implemented a simple client in Python.

 

 

 

 

 

 

 
