TCP programming has sticky-packet and unpacking problems at the bottom because, in the client/server (C/S) transmission model, the bytes on a TCP connection flow like water in a river, and TCP is like a porter carrying that water from one end to the other. Two situations can arise:
1. If the client produces a lot of water at once, that is, it sends a large packet, the porter TCP may split it up and carry it over in several trips.
2. If the client produces only a little water each time, TCP may wait until the client has produced several batches and then carry all of it over in one trip.
In the first case, the receiving end has to glue the fragments back together ("sticking") before the result can be understood; in the second case, the receiving end has to unpack what it reads, because a single received chunk may contain several packets that TCP glued together.
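In practice, the standard remedy for both problems is to put a frame decoder in front of the business handler, so that every channelRead delivers exactly one logical message. Here is a minimal sketch, assuming a line-delimited protocol; it uses the same LineBasedFrameDecoder that is deliberately left commented out in the demo code below, and FramedInitializer is a hypothetical name:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;

public class FramedInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Reassembles fragments and splits glued packets on '\n' / "\r\n",
        // rejecting any frame longer than 1024 bytes.
        ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
        // Converts each complete frame into a String for the next handler.
        ch.pipeline().addLast(new StringDecoder());
    }
}

The demo below leaves this decoder commented out on purpose, so that TCP's raw sticky/unpacking behavior is visible.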
Here is a concrete scenario for each of the two situations:
1. When a single send carries too much data, unpacking occurs:
We first write the client's bootstrap:
package com.lyncc.netty.stickpackage.myself;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;

public class BaseClient {

    static final String HOST = System.getProperty("host", "127.0.0.1");
    static final int PORT = Integer.parseInt(System.getProperty("port", "8080"));
    static final int SIZE = Integer.parseInt(System.getProperty("size", "256"));

    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioSocketChannel.class)
             .option(ChannelOption.TCP_NODELAY, true)
             .handler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ChannelPipeline p = ch.pipeline();
                     // p.addLast(new LineBasedFrameDecoder(1024));
                     p.addLast(new StringDecoder());
                     p.addLast(new BaseClientHandler());
                 }
             });
            ChannelFuture future = b.connect(HOST, PORT).sync();
            future.channel().writeAndFlush("Hello Netty Server, I am a common client");
            future.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
Client's handler:
package com.lyncc.netty.stickpackage.myself;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BaseClientHandler extends ChannelInboundHandlerAdapter {
    private byte[] req;
    private int counter;

    public BaseClientHandler() {
        // req = ("BAZINGALYNCC is learner" + System.getProperty("line.separator")).getBytes();
        req = ("In this chapter we recommend Java Concurrency in Practice by Brian Goetz. His book will give "
            + "We've reached an exciting point - in the next chapter we'll discuss bootstrapping, the process of configuring "
            + "and connecting all of Netty's components to bring your learned about threading models in general and Netty's "
            + "threading model in particular, whose performance and consistency advantages we discussed in detail in this "
            + "chapter we recommend Java Concurrency in Practice by Brian Goetz. His book will give We've reached an "
            + "exciting point - in the next chapter we'll discuss bootstrapping, the process of configuring and connecting "
            + "all of Netty's components to bring your learned about threading models in general and Netty's threading "
            + "model in particular, whose performance and consistency advantages we discussed in detail in this chapter "
            + "we recommend Java Concurrency in Practice by Brian Goetz. His book will give We've reached an exciting "
            + "point - in the next chapter;the counter is:1 2222 sdsa ddasd asdsadas dsadasdas").getBytes();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf message = null;
        // for (int i = 0; i < 100; i++) {
        //     message = Unpooled.buffer(req.length); message.writeBytes(req); ctx.writeAndFlush(message);
        // }
        message = Unpooled.buffer(req.length);
        message.writeBytes(req);
        ctx.writeAndFlush(message);
        message = Unpooled.buffer(req.length);
        message.writeBytes(req);
        ctx.writeAndFlush(message);
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        String buf = (String) msg;
        System.out.println("Now is : " + buf + " ; the counter is : " + ++counter);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
    }
}
The server-side ServerBootstrap:
package com.lyncc.netty.stickpackage.myself;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;

import java.net.InetSocketAddress;

public class BaseServer {

    private int port;

    public BaseServer(int port) {
        this.port = port;
    }

    public void start() {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap sbs = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .localAddress(new InetSocketAddress(port))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        protected void initChannel(SocketChannel ch) throws Exception {
                            // ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
                            ch.pipeline().addLast(new StringDecoder());
                            ch.pipeline().addLast(new BaseServerHandler());
                        }
                    })
                    .option(ChannelOption.SO_BACKLOG, 128)
                    .childOption(ChannelOption.SO_KEEPALIVE, true);
            // Bind the port and start accepting incoming connections
            ChannelFuture future = sbs.bind(port).sync();
            System.out.println("Server start listen at " + port);
            future.channel().closeFuture().sync();
        } catch (Exception e) {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new BaseServer(port).start();
    }
}
The server-side handler:
package com.lyncc.netty.stickpackage.myself;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BaseServerHandler extends ChannelInboundHandlerAdapter {

    private int counter;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        String body = (String) msg;
        System.out.println("server receive order : " + body + " ; the counter is : " + ++counter);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}
As usual, we run the server side first:
Then we run the client; after it starts, we look at the server-side console output:
We can see that the server needed three reads to receive the two very long messages the client sent: TCP split (unpacked) the large packets in transit.
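The commented-out lines in the demo hint at the fix: re-enable the LineBasedFrameDecoder in both initChannel methods and terminate each message with a line separator, just as the commented-out line in BaseClientHandler's constructor does. Here is a sketch of the sender side of that pairing; LineFraming is a hypothetical helper, not part of the demo:

import java.nio.charset.StandardCharsets;

public final class LineFraming {
    // Appends the platform line separator so a LineBasedFrameDecoder on the
    // receiving side can find each message boundary in the byte stream.
    static byte[] frame(String message) {
        return (message + System.getProperty("line.separator"))
                .getBytes(StandardCharsets.UTF_8);
    }
}

With req built this way and the decoder re-enabled on the server, each of the two long messages would arrive as exactly one channelRead.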
2. When each send carries too little data, sticky packets occur:
The bootstraps on the client and server sides are unchanged; we only modify the channelActive code in the client handler that sends the message:
package com.lyncc.netty.stickpackage.myself;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BaseClientHandler extends ChannelInboundHandlerAdapter {
    private byte[] req;
    private int counter;

    public BaseClientHandler() {
        req = ("BAZINGALYNCC is learner").getBytes();
        // req = ("In this chapter we recommend Java Concurrency in Practice by Brian Goetz. ...").getBytes();
        // (the long test string from scenario 1, commented out)
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf message = null;
        for (int i = 0; i < 100; i++) {
            message = Unpooled.buffer(req.length);
            message.writeBytes(req);
            ctx.writeAndFlush(message);
        }
        // message = Unpooled.buffer(req.length); message.writeBytes(req); ctx.writeAndFlush(message);
        // message = Unpooled.buffer(req.length); message.writeBytes(req); ctx.writeAndFlush(message);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
    }
}
We start the server side again:
After starting the client, we again look at the server-side console:
We can see that the 100 messages the client sent were received by the server in only three reads: TCP glued the small packets together, which is the sticky-packet phenomenon.
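Because every message in this run is the same 23-byte string "BAZINGALYNCC is learner", one simple fix is a fixed-length frame decoder sized to the message. This is an illustrative sketch under that assumption, not something the demo actually does; FixedFrameInitializer is a hypothetical name:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.FixedLengthFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;

public class FixedFrameInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Slices the glued byte stream into 23-byte frames, one per message.
        ch.pipeline().addLast(new FixedLengthFrameDecoder("BAZINGALYNCC is learner".getBytes().length));
        ch.pipeline().addLast(new StringDecoder());
        ch.pipeline().addLast(new BaseServerHandler());
    }
}

With this initializer in the server bootstrap, the counter would reach 100 instead of 3.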
These are the two typical unpacking and sticky-packet scenarios.