High-performance asynchronous RPC framework KISS-RPC: introduction and testing


KISS-RPC introduction:
Features: it mimics the stack-call style and supports multiple return values, so calls are simple and safe. The server uses a multi-threaded asynchronous model to squeeze out server performance; the client supports both multi-threaded synchronous and asynchronous modes with a timeout mechanism. On Linux it uses the epoll network model. Compared with gRPC, Thrift, and Dubbo it is several times to dozens of times faster.
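To make "stack-call style with multiple return values" concrete, here is a toy, self-contained D sketch (not the KISS-RPC API itself) of pushing mixed-type parameters and popping several values back out, which mirrors the `rpc_request.push` / `rpc_response.pop` calls used later in this article:

```d
import std.stdio;
import std.variant;

// Toy illustration only: parameters are pushed onto a request in order and
// popped from the response in the same order and with the same types.
struct packet
{
    Variant[] slots;

    void push(T...)(T args)
    {
        foreach (a; args)
            slots ~= Variant(a); // store each parameter in call order
    }

    void pop(T...)(ref T args)
    {
        foreach (i, ref a; args)
            a = slots[i].get!(typeof(a)); // types must match the push order
    }
}

void main()
{
    packet p;
    p.push("hello", 42, 0.5); // mixed-type parameters in one call
    string s; int n; double d;
    p.pop(s, n, d);           // multiple values returned in one call
    writefln("%s %s %s", s, n, d);
}
```

The safety the article mentions comes from this symmetry: the pop side must name the same types, in the same order, as the push side.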

Environment: Linux, UNIX, Windows, macOS
Transport protocol: Cap'n Proto
Development language: D (Dlang)
Compiler: DMD
GitHub: https://github.com/huntlabs/kiss-rpc

Developer notes

Kiss RPC Synchronous and asynchronous tests:

Environment: Ubuntu 16.04 LTS (64-bit)

Hardware: Intel Xeon CPU E3-1230 @ 3.3 GHz × 8

Memory: 8 GB

Network: localhost (local loopback)

1. Multithreaded asynchronous non-blocking test

A single connection making 200,000 RPC calls took 4 seconds, about 50,000 QPS:

1,000 concurrent connections making 1,000 calls each, 1 million RPC requests in total, took 28 seconds, about 35,000 QPS:


2. Multithreaded synchronous blocking test

A single connection making 1 million RPC calls took 53 seconds, about 18,000 QPS:

1,000 connections making 1,000 calls each, 1 million RPC calls in total, took 46 seconds, about 21,000 QPS:
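These QPS figures are simply total calls divided by elapsed seconds; a quick integer-arithmetic check of the four tests above (assuming the call counts are exact):

```d
import std.stdio;

void main()
{
    // QPS = total RPC calls / elapsed seconds, using the figures above
    writeln(200_000 / 4);    // async, single connection: 50000
    writeln(1_000_000 / 28); // async, 1000 connections: 35714 (~35,000)
    writeln(1_000_000 / 53); // sync, single connection: 18867 (~18,000)
    writeln(1_000_000 / 46); // sync, 1000 connections: 21739 (~21,000)
}
```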


Performance comparison of other RPC frameworks: (http://blog.csdn.net/jek123456/article/details/53395206)


Massive Internet business systems can only be handled with a distributed architecture, and RPC is the cornerstone of distributed development. This section compares the performance of two open-source RPC frameworks (gRPC and Apache Thrift) across two development languages, Go and C++.

Test scenarios:
Client and server are each a single process with one long connection; the single connection issues 10,000 (50,000) RPC calls, and the elapsed time is measured.
Client and server are each a single process with short connections; 10,000 (50,000) connections in total, one RPC call per connection, elapsed time measured.
Four concurrent client processes, each issuing 100,000 RPCs over a long connection; the server is a single process with multiple threads (coroutines); elapsed time measured.

Because the languages differ, in-program timing is biased (for example, times measured with Boost.Timer are noticeably too small), so the Linux `time` command is used uniformly to measure duration.

Test data and analysis
1. Single process, short vs. long connections: the two RPC frameworks and two languages compared



Summary:

Overall, long connections perform better than short connections; the gap is more than 2x.
Comparing the two RPC frameworks under Go, Thrift clearly outperforms gRPC, also by more than 2x.
Comparing the two languages under the Thrift framework, the RPC performance of Go and C++ is of the same order of magnitude; with short connections, Go is about twice as fast as C++.
Comparing TSimpleServer and TNonblockingServer under Thrift & C++: with a single-process client over a long connection, TNonblockingServer carries thread-management overhead and performs worse than TSimpleServer; with short connections, connection setup dominates and the thread-pool overhead is negligible.
Both RPC frameworks and both languages are very stable: the 50,000-request time is about 5 times the 10,000-request time.

2. Multi-process (thread, coroutine): the two RPC frameworks and two languages compared



When writing RPC interfaces, the client-side and server-side RPC files must be consistent: keep the same directory structure, file names, class names, and function names, or calls will fail.

Server-side interfaces:

1. Network event module:

    interface server_socket_event_interface // server network event interface
    {
        void listen_failed(const string str);                // listen failed
        void inconming(rpc_socket_base_interface socket);    // incoming connection
        void disconnectd(rpc_socket_base_interface socket);  // connection closed
        void write_failed(rpc_socket_base_interface socket); // write failed
        void read_failed(rpc_socket_base_interface socket);  // read failed
    }

2. Socket interface:

    interface rpc_socket_base_interface // socket operations
    {
        bool dowrite(byte[] data); // write data
        int getfd();               // get the fd
        string getip();            // get the IP
        string getport();          // get the port
        void disconnect();         // close the connection
    }

3. Bind the RPC:

    rpc_server_impl!(hello) rp_impl;                  // the RPC class interface binding
    rp_impl = new rpc_server_impl!(hello)(rp_server); // bind it to the server
    rp_impl.bind_request_callback("say", &this.say);  // bind the corresponding RPC function

4. Writing the RPC function:

    void say(rpc_request req)
    {
        auto resp = new rpc_response(req); // bind resp to the corresponding request
        string r_s;                        // types of the parameters to pop
        int r_i, r_num;
        double r_d;
        req.pop(r_s, r_num, r_i, r_d);     // pop the call parameters; they must match the caller's exactly
        writefln("hello.say:%s,%s,%s, num:%s,", r_s, r_i, r_d, r_num);
        resp.push(r_s ~ ":server response" ~ to!string(r_i), r_num, r_i + 1, r_d + 0.2); // push the return values for the caller
        rp_impl.response(resp);            // send the response back to the caller
    }

5. Start the service:

    auto rp_server = new rpc_server(new server_socket); // bind socket events to the server
    auto hello_server_test = new hello(rp_server);      // bind the RPC class to the server
    auto poll = new GroupPoll!();                       // create a thread management group
    rp_server.listen("0.0.0.0", 4444, poll);            // listen on the port and bind the thread group
    poll.start();                                       // start the thread group
    poll.wait();                                        // wait for events

Client interfaces:

1. Network event module:

    interface client_socket_event_interface // client network event interface
    {
        void connectd(rpc_socket_base_interface socket);     // connected
        void disconnectd(rpc_socket_base_interface socket);  // disconnected
        void write_failed(rpc_socket_base_interface socket); // write failed
        void read_failed(rpc_socket_base_interface socket);  // read failed
    }

2. Socket interface:

    interface rpc_socket_base_interface // socket operations
    {
        bool dowrite(byte[] data); // write data
        int getfd();               // get the fd
        string getip();            // get the IP
        string getport();          // get the port
        void disconnect();         // close the connection
    }

3. Bind the RPC:

    rpc_client_impl!(hello) rp_impl;                  // class that binds the RPC calls
    rp_impl = new rpc_client_impl!(hello)(rp_client); // bind the RPC to the socket

4. Synchronous RPC call:

    auto req = new rpc_request;                   // create an RPC request
    req.push(s, 1, i, 0.1);                       // push the parameters
    rpc_response resp = rp_impl.sync_call(req);   // synchronously call the server's RPC interface
    if (resp.get_status == response_status.RS_OK) // check whether the call succeeded
    {
        string r_s;
        int r_i, r_num;
        double r_d;
        resp.pop(r_s, r_num, r_i, r_d); // pop the parameters returned by the server
        writefln("server response:%s,%s,%s", r_s, r_i, r_d);
        if (r_i % 100000 == 0)
        {
            writefln("single connect test, sync rpc request num:%s, total time:%s",
                r_i, Clock.currStdTime().stdTimeToUnixTime!(long)() - start_clock);
        }
    }
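The `start_clock` arithmetic used throughout these snippets can be exercised on its own. This sketch uses only the standard library (`std.datetime.systime` provides both `Clock.currStdTime` and `stdTimeToUnixTime`); note that it measures whole seconds:

```d
import std.stdio;
import std.datetime.systime : Clock, stdTimeToUnixTime;

void main()
{
    // record the wall-clock start as unix time, exactly as the examples do
    long start_clock = Clock.currStdTime().stdTimeToUnixTime!(long)();

    // ... the RPC calls would run here ...

    // elapsed whole seconds since start_clock
    long total = Clock.currStdTime().stdTimeToUnixTime!(long)() - start_clock;
    writefln("total time:%s", total);
}
```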

5. Asynchronous call:

    auto req = new rpc_request; // create an RPC request
    req.push(s, 1, i, 0.1);     // push the parameters
    rp_impl.async_call(req, delegate(rpc_response resp) { // remote asynchronous callback
        if (resp.get_status == response_status.RS_OK)     // check whether the call succeeded
        {
            string r_s;
            int r_i, r_num;
            double r_d;
            resp.pop(r_s, r_num, r_i, r_d); // pop the parameters returned by the server
            writefln("server response:%s,%s,%s", r_s, r_i, r_d);
            if (r_i % 20000 == 0)
            {
                writefln("single connect test, rpc request num:%s, total time:%s",
                    r_i, Clock.currStdTime().stdTimeToUnixTime!(long)() - start_clock);
            }
        }
        else
        {
            writeln("error ", resp.get_status);
        }
    });

6. Client startup:

    import kiss.util.Log;

    load_log_conf("default.conf");   // load the log configuration
    auto poll = new GroupPoll!();    // create a thread management group
    auto client = new client_socket; // create a client socket
    client.connect_to_server(poll);  // connect to the server
    poll.start;                      // start the thread management group
    poll.wait;                       // wait for events

Server-side calling code:

1. Import the modules:

    import kissrpc.unit;
    import kissrpc.rpc_server;
    import kissrpc.rpc_server_impl;
    import kissrpc.rpc_response;
    import kissrpc.rpc_socket_base_interface;
    import kissrpc.rpc_request;
    import kiss.event.GroupPoll;

2. Listen on the port:

    auto rp_server = new rpc_server(new server_socket);
    auto hello_server_test = new hello(rp_server);
    auto poll = new GroupPoll!();
    rp_server.listen("0.0.0.0", 4444, poll);
    poll.start();
    poll.wait();

3. Bind the RPC events:

    class hello
    {
        this(rpc_server rp_server)
        {
            rp_impl = new rpc_server_impl!(hello)(rp_server);
            rp_impl.bind_request_callback("say", &this.say);
        }

        shared static int call_count = 0;

        void say(rpc_request req)
        {
            auto resp = new rpc_response(req);
            string r_s;
            int r_i, r_num;
            double r_d;
            req.pop(r_s, r_num, r_i, r_d);
            writefln("hello.say:%s,%s,%s, num:%s,", r_s, r_i, r_d, r_num);
            resp.push(r_s ~ ":server response" ~ to!string(r_i), r_num, r_i + 1, r_d + 0.2);
            rp_impl.response(resp);
        }

        rpc_server_impl!(hello) rp_impl;
    }

4. Socket events:

    class server_socket : server_socket_event_interface
    {
        void listen_failed(const string str)
        {
            de_writeln("server listen failed ", str);
        }

        void disconnectd(rpc_socket_base_interface socket)
        {
            de_writeln("client is disconnected");
        }

        shared static int connect_num;

        void inconming(rpc_socket_base_interface socket)
        {
            writefln("client inconming: %s:%s, connect num:%s", socket.getip, socket.getport, connect_num++);
        }

        void write_failed(rpc_socket_base_interface socket)
        {
            de_writefln("write buffer to client failed, %s:%s", socket.getip, socket.getport);
        }

        void read_failed(rpc_socket_base_interface socket)
        {
            de_writefln("read buffer from client failed, %s:%s", socket.getip, socket.getport);
        }
    }

Client calling code:

1. Import the modules:

    import kissrpc.rpc_request;
    import kissrpc.rpc_client_impl;
    import kissrpc.rpc_client;
    import kissrpc.unit;
    import kissrpc.rpc_response;
    import kissrpc.rpc_socket_base_interface;
    import kiss.event.GroupPoll;

2. Connect to the server:

    import kiss.util.Log;

    load_log_conf("default.conf");
    auto poll = new GroupPoll!();
    for (int i = 0; i < test_client; i++)
    {
        auto client = new client_socket(i);
        client.connect_to_server(poll);
    }
    poll.start;
    poll.wait;

3. Asynchronous call:

    auto req = new rpc_request;
    req.push(s, 1, i, 0.1);
    rp_impl.async_call(req, delegate(rpc_response resp) { // asynchronous call interface
        if (resp.get_status == response_status.RS_OK)
        {
            string r_s;
            int r_i, r_num;
            double r_d;
            resp.pop(r_s, r_num, r_i, r_d);
            writefln("server response:%s,%s,%s", r_s, r_i, r_d);
            if (r_i % 20000 == 0)
            {
                writefln("single connect test, rpc request num:%s, total time:%s",
                    r_i, Clock.currStdTime().stdTimeToUnixTime!(long)() - start_clock);
            }
        }
        else
        {
            writeln("error ", resp.get_status);
        }
    });

4. Synchronous call:

    auto req = new rpc_request;
    req.push(s, num, i, 0.1);
    rpc_response resp = rp_impl.sync_call(req); // synchronous call interface
    if (resp.get_status == response_status.RS_OK)
    {
        string r_s;
        int r_num;
        int r_i;
        double r_d;
        resp.pop(r_s, r_num, r_i, r_d);
        writefln("hello.say:%s,%s,%s, num:%s", r_s, r_i, r_d, r_num);
        finish_num++;
        if (r_i == test_num)
        {
            writefln("%s connect test, client num:%s, rpc request num:%s, total time:%s",
                r_num, r_num, r_i, Clock.currStdTime().stdTimeToUnixTime!(long)() - start_clock);
            if (finish_num == test_num * test_client)
            {
                writefln("$$$$$$$$$$$ %s connect test, client num:%s, rpc request num:%s, total time:%s",
                    test_client, test_client, finish_num, Clock.currStdTime().stdTimeToUnixTime!(long)() - start_clock);
            }
        }
    }
    else
    {
        writeln("error,", resp.get_status);
    }

5. Socket events:

    class client_socket : client_socket_event_interface
    {
        this()
        {
            rp_client = new rpc_client(this);
        }

        void connect_to_server(GroupPoll!() poll)
        {
            rp_client.connect("0.0.0.0", 4444, poll);
        }

        void connectd(rpc_socket_base_interface socket)
        {
            de_writefln("connect to server, %s:%s", socket.getip, socket.getport);
            auto hello_client = new hello(rp_client);
            start_clock = Clock.currStdTime().stdTimeToUnixTime!(long)();
            for (int i = 0; i < test_num; ++i)
            {
                hello_client.say("test hello client", i);
            }
        }

        void disconnectd(rpc_socket_base_interface socket)
        {
            de_writefln("client disconnected....");
        }

        void write_failed(rpc_socket_base_interface socket)
        {
            de_writefln("client write failed, %s:%s", socket.getip, socket.getport);
        }

        void read_failed(rpc_socket_base_interface socket)
        {
            de_writefln("client read failed, %s:%s", socket.getip, socket.getport);
        }

    private:
        rpc_client rp_client;
    }
