Java Distributed System Communication in Layman's Terms

Source: Internet
Author: User
Tags: response code

What is a distributed system

I previously wrote an article that gave a brief introduction to distributed communication; interested readers can check it out:

Practice of large-scale Web Site system Architecture (ii) communication between distributed modules

Today I want to go into more detail about my understanding of communication in Java distributed systems. Distributed systems generally take one of three forms:

1. Cluster mode: deploy multiple copies of the same application module

2. Business-split mode: split the business into multiple modules and deploy them separately

3. Distributed storage

Because the concept of "distributed" is so broad, let's narrow the scope of the discussion.

In this article, "distributed" is defined narrowly as business splitting: not just horizontal splitting, but splitting into underlying modules, functional modules, upper-layer modules, and so on.

When a system has a wide range of functions and a hierarchy of dependencies, we need to divide it into modules and deploy them separately.

Example:

Suppose we are developing a wallet-like system. It would have the following functional modules: a user module (user data), application modules (such as mobile phone top-up), a business module (core business processing), a trading module (transactions with banks), a front-end module (communication with clients), and so on.

We will get a system architecture diagram:

Why distributed

1) The system's functions are modularized and deployed separately. As long as an underlying module's interface stays the same, the upper-layer systems that call it do not care about its implementation; when the underlying module changes its internal logic, the upper-layer systems do not need to be released again. This greatly decouples the system.

2) After decoupling, common functions can be reused and the business is easier to extend, which speeds up development and release.

3) Modules can be deployed across machines to make full use of the hardware, improving system performance.

4) It reduces the consumption of database connection resources.

Distributed Communication Solutions

Scenario: Server-to-server communication

Scheme 1: Short connections based on sockets

Scheme 2: Synchronous communication over a long socket connection

Scheme 3: Asynchronous communication over a long socket connection

TCP Short Connection Communication scheme

Definition:

Short connection: with HTTP or raw-socket short connections, each time the client and server communicate, a new socket connection is established; as soon as that exchange completes, the connection is closed immediately. In other words, every communication opens a new connection.
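To make the definition concrete, here is a minimal, self-contained sketch using plain java.net sockets rather than Mina (the tiny echo server is only a stand-in, not part of the original article's system): every request opens a brand-new connection and closes it as soon as the response is read.

```java
import java.io.*;
import java.net.*;

public class ShortConnectionDemo {
    // A trivial echo server: one request per connection.
    static void startEchoServer(ServerSocket server) {
        new Thread(() -> {
            try {
                while (true) {
                    try (Socket s = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()));
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                        out.println("echo:" + in.readLine());
                    }
                }
            } catch (IOException ignored) { /* server socket closed */ }
        }).start();
    }

    // One "short connection" round trip: connect, send, receive, close.
    static String request(int port, String msg) throws IOException {
        try (Socket socket = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } // socket closed here: the next request pays the handshake cost again
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        startEchoServer(server);
        System.out.println(request(server.getLocalPort(), "hello"));
        System.out.println(request(server.getLocalPort(), "world"));
        server.close();
    }
}
```

Each call to request() goes through a full TCP setup and teardown, which is exactly the per-communication cost this scheme's disadvantages list below is about.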

The transmission diagram is as follows:

IO communication implemented with Mina

Client Sample code:

NioSocketConnector connector = new NioSocketConnector();
connector.setConnectTimeoutMillis(CONNECT_TIMEOUT);
// Set the read buffer; the transmitted content must be smaller than this buffer
connector.getSessionConfig().setReadBufferSize(2048 * 2048);
// Set up the codec
connector.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(new ObjectSerializationCodecFactory()));
// Set up a log filter
connector.getFilterChain().addLast("logger", new LoggingFilter());
// Set the handler
connector.setHandler(new MyClientHandler());
// Get the connection (connect executes asynchronously)
ConnectFuture future = connector.connect(new InetSocketAddress(HOSTNAME, PORT));
// Wait for the connection to be established
future.awaitUninterruptibly();
// Get the session
IoSession session = future.getSession();
// Wait for the session to close
session.getCloseFuture().awaitUninterruptibly();
// Release the connector
connector.dispose();

Now let's run a performance test.

Test Scenario:

Business processing time per request: 110 ms

100 threads concurrently, each thread requesting the server in a loop

Test environment:

Client CPU: 4 threads @ 2400 MHz

Server CPU: 4 threads @ 3000 MHz

Test Results:

After a 10-minute test, in steady state:

TPS: around 554

Client CPU: 30%

Server CPU: 230%

Advantages of this scheme:

It is simple to implement.

Disadvantages of this scheme:

1. When a socket sends a message, the message must first go into the socket's send buffer, so the system allocates a buffer for each socket; when buffer memory runs out, the maximum number of connections has been reached.

2. A large number of connections means more kernel system calls, such as the socket accept and close calls.

3. Every communication opens a new TCP connection, and the handshake takes time; TCP uses a three-way handshake.

4. TCP slow start: the performance of TCP data transfer also depends on the lifetime of the TCP connection. TCP connections tune themselves over time, initially limiting the maximum speed of the connection and, if data is transmitted successfully, increasing the transfer speed over time. This tuning, known as TCP slow start, is designed to prevent overload and congestion of the Internet.

TCP Long Connection synchronous communication

Transfer diagram for long connection synchronization

A socket connection can deliver only one request's message at a time; the second request can use the channel only after the first response has returned.

To improve concurrency, you can open multiple connections and build a connection pool: when a connection is taken from the pool it is flagged as in use, and when it is returned it is flagged as idle, just like a JDBC connection pool.

Suppose a back-end server has a TPS of 1000, i.e. it can process 1000 transactions per second. Say network transfer takes 5 milliseconds each way and business processing takes 150 milliseconds per request. Then from the client sending a request to receiving the server's response takes 150 + 5 + 5 = 160 milliseconds. With only one connection, only about 6 requests can be completed per second (1000 ms / 160 ms ≈ 6.25), i.e. the per-connection TPS is about 6. To reach a TPS of 1000, roughly 160 connections are needed in theory; but as the connection count rises, performance declines, so this scheme reduces the site's throughput.
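The arithmetic can be written out explicitly. With a 160 ms round trip, each synchronous connection can complete about 6.25 requests per second, so reaching the target TPS takes roughly target ÷ per-connection TPS connections:

```java
public class ConnectionMath {
    public static void main(String[] args) {
        // One synchronous request occupies its connection for the whole round trip
        double networkMs = 5 * 2;        // 5 ms each way
        double businessMs = 150;         // server-side processing
        double roundTripMs = businessMs + networkMs;      // 160 ms
        double tpsPerConnection = 1000.0 / roundTripMs;   // ~6.25 requests/s
        int targetTps = 1000;
        int connectionsNeeded = (int) Math.ceil(targetTps / tpsPerConnection);
        System.out.println(connectionsNeeded); // 160
    }
}
```

This is why the synchronous scheme needs so many connections: the channel sits idle for the whole business-processing time of each request.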

The challenge:

Mina's session.write() and message receiving are both asynchronous, so the calling thread has to block while waiting for the response to arrive.

Connection Pool Code:

/** Idle connection pool */
private static BlockingQueue<Connection> idlePool = new LinkedBlockingQueue<Connection>();

/** Connections currently in use */
public static BlockingQueue<Connection> activePool = new LinkedBlockingQueue<Connection>();

public static Connection getConn() throws InterruptedException {
    long time1 = System.currentTimeMillis();
    // take() blocks until an idle connection is available
    Connection connection = idlePool.take();
    activePool.add(connection);
    long time2 = System.currentTimeMillis();
    // log.info("Get connection time: " + (time2 - time1));
    return connection;
}
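The pool code above only shows acquiring a connection; the client code's call to ConnectFutureFactory.close(...) implies a matching release that puts the connection back in the idle queue. A minimal, self-contained sketch of that two-queue pool (the Connection class here is just a stand-in for the real Mina-backed connection):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SimplePoolDemo {
    // Stand-in for the real connection object
    static class Connection {
        final int id;
        Connection(int id) { this.id = id; }
    }

    static final BlockingQueue<Connection> idlePool = new LinkedBlockingQueue<>();
    static final BlockingQueue<Connection> activePool = new LinkedBlockingQueue<>();

    static Connection getConn() throws InterruptedException {
        Connection c = idlePool.take();   // blocks when the pool is exhausted
        activePool.add(c);
        return c;
    }

    // The counterpart to getConn(): flag the connection idle again
    static void releaseConn(Connection c) {
        activePool.remove(c);
        idlePool.add(c);
    }

    public static void main(String[] args) throws InterruptedException {
        idlePool.add(new Connection(1));
        Connection c = getConn();
        System.out.println("idle=" + idlePool.size() + " active=" + activePool.size());
        releaseConn(c);
        System.out.println("idle=" + idlePool.size() + " active=" + activePool.size());
    }
}
```

Moving connections between the two queues is what implements the in-use/idle flagging described above.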

Client code:

public TransInfo send(TransInfo info) throws InterruptedException {
    Result result = new Result();
    // Get a TCP connection from the pool
    Connection connection = ConnectFutureFactory.getConnection(result);
    ConnectFuture connectFuture = connection.getConnection();
    IoSession session = connectFuture.getSession();
    session.setAttribute("result", result);
    // Send the message
    session.write(info);
    // Block synchronously until the response arrives
    TransInfo response = result.synGetInfo();
    // Do not actually close the connection; return it to the pool
    ConnectFutureFactory.close(connection, result);
    return response;
}

Code that blocks waiting for the server's response:

public synchronized TransInfo synGetInfo() {
    // Wait for the response message; must run while holding the monitor,
    // and the loop guards against spurious wakeups
    while (!done) {
        try {
            wait();
        } catch (InterruptedException e) {
            log.error(e.getMessage(), e);
        }
    }
    return info;
}

public synchronized void synSetInfo(TransInfo info) {
    this.info = info;
    this.done = true;
    notify();
}

Test Scenario:

Business processing time per request: 110 ms

300 threads and 300 connections concurrently, each thread requesting the server in a loop

Test environment:

Client CPU: 4 threads @ 2400 MHz

Server CPU: 4 threads @ 3000 MHz

Test Results:

After a 10-minute test, in steady state:

TPS: around 2332

Client CPU: 90%

Server CPU: 250%

The test results show that once the number of connections grows large, system performance drops: the more TCP connections are open, the greater the system overhead.

TCP Long Connection asynchronous communication

Communication diagram:

A single socket connection carries multiple requests at the same time, and the input channel receives multiple response messages; messages flow out and back continuously. Business processing and message sending are asynchronous: a business thread hands a message to the channel and then no longer occupies it while waiting for the response, so other business threads can send messages over the same connection in the meantime. This makes full use of the channel.

The challenge

This scheme does complicate the coding. For example, requests may be sent in the order request1, request2, request3, but the server does not process them as a queue; it processes them in parallel, so request3's response may reach the client before request1's, and a request would then be unable to find its own response. To solve this, we add a unique identifier to both request and response messages, such as a serial number that is unique within a communication channel; the matching response can then be located by serial number.

My plan:

1. The client gets a TCP connection.

2. It calls session.write() to send the message, saving a unique serial number in a Result object and storing the Result object in a map.

3. It blocks synchronously on the Result object to get the result.

4. When a response is received, the Result object is fetched from the map by its unique serial number, and the thread blocked on it is woken up.

Sample code for the client sending a message:

public TransInfo send(TransInfo info) throws InterruptedException {
    Result result = new Result();
    result.setInfo(info);
    // Get a socket connection
    ConnectFuture connectFuture = ConnectFutureFactory.getConnection(result);
    IoSession session = connectFuture.getSession();
    // Put the result into the ConcurrentHashMap, keyed by serial number
    ConcurrentHashMap<Long, Result> resultMap =
            (ConcurrentHashMap<Long, Result>) session.getAttribute("resultMap");
    resultMap.put(info.getId(), result);
    // Send the message
    session.write(info);
    // Block synchronously to get the result
    return result.synGetInfo();
}

Synchronous blocking and Wakeup methods:

public synchronized TransInfo synGetInfo() {
    // Wait for the response message; must run while holding the monitor
    while (!done) {
        try {
            wait();
        } catch (InterruptedException e) {
            log.error(e.getMessage(), e);
        }
    }
    return info;
}

public synchronized void synSetInfo(TransInfo info) {
    this.info = info;
    this.done = true;
    notify();
}

Sample code to receive the message:

public void messageReceived(IoSession session, Object message) throws Exception {
    TransInfo info = (TransInfo) message;
    // Get the result from resultMap by its unique serial number
    ConcurrentHashMap<Long, Result> resultMap =
            (ConcurrentHashMap<Long, Result>) session.getAttribute("resultMap");
    // Remove the result
    Result result = resultMap.remove(info.getId());
    // Wake up the blocked thread
    result.synSetInfo(info);
}
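The serial-number correlation above can also be expressed with java.util.concurrent primitives: a CompletableFuture plays the role of the Result object's wait()/notify() pair. This is a hypothetical, self-contained sketch, not the original code; all names here are illustrative, and the "channel" is omitted entirely.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class FutureCorrelationDemo {
    // Correlation map: serial number -> pending response future
    static final ConcurrentHashMap<Long, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Sender side: register a future under the serial number, then the
    // request would be written to the shared channel (e.g. session.write(...))
    static CompletableFuture<String> send(long serial, String request) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(serial, future);
        return future;
    }

    // Receiver side: look up the future by serial number and complete it,
    // which plays the role of synSetInfo() + notify()
    static void onResponse(long serial, String response) {
        CompletableFuture<String> future = pending.remove(serial);
        if (future != null) {
            future.complete(response);
        }
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> f1 = send(1L, "req-1");
        CompletableFuture<String> f2 = send(2L, "req-2");
        // Responses come back out of order; serial numbers still pair them up
        onResponse(2L, "resp-2");
        onResponse(1L, "resp-1");
        System.out.println(f1.get() + " " + f2.get());
    }
}
```

Deliberately completing the responses out of order shows why the per-channel unique serial number is the key to sharing one connection among many in-flight requests.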

Test Scenario:

Business processing time per request: 110 ms

300 threads and 10 connections concurrently, each thread requesting the server in a loop

Test environment:

Client CPU: 4 threads @ 2400 MHz

Server CPU: 4 threads @ 3000 MHz

Test Results:

After a 10-minute test, in steady state:

TPS: around 2600

Client CPU: 25%

Server CPU: 250%

Testing shows that asynchronous communication achieves the same communication efficiency with far fewer TCP connections, which greatly reduces system overhead.

That's all for today.

