"Netty4 Simple Project Practice" Xi. distributing Mpegts to WebSocket interface with Netty


Objective

When pushing a video stream, RTMP introduces a delay of about 3 seconds. One current workaround is to stream in MPEG-TS format instead. If you push the stream with FFmpeg, it can push over HTTP or over UDP. What I want to do now is forward that stream to a WebSocket interface with Netty and then play it in HTML5. The playback plugin is the JSMpeg project.


"FFmpeg push Mpegts"

FFmpeg can push the stream over two protocols, HTTP and UDP, and does not currently support WebSocket, so I intend to use Netty to do the protocol forwarding. Assuming the local receiving address is http://localhost:9090 and we capture the screen on a Mac, you can use the following command:

ffmpeg -f avfoundation -i "1" -vcodec libx264 -f mpegts -codec:v mpeg1video -b 0 http://localhost:9090

If you are pushing over UDP, say to localhost on port 9094, you can use the following command:

ffmpeg -f avfoundation -i "1" -vcodec libx264 -f mpegts -codec:v mpeg1video -b 0 udp://localhost:9094


Neither of the two commands above includes audio. If you want audio, you need to specify an audio codec as well:

ffmpeg -f avfoundation -i "1" -vcodec libx264 -f mpegts -codec:v mpeg1video -acodec libfaac -b 0 udp://localhost:9094

"Try HTTP Push Stream"

According to JSMpeg's tutorial, the author pushes the FFmpeg stream to Nginx and lets Nginx forward it on to the WebSocket, without modifying any Nginx module. So I figured that if we forward the data buffers FFmpeg pushes directly to the WebSocket, it should work; at least over HTTP it should. All that remains, then, is to strip the headers from the HTTP message and forward only the message body. Is that really true?


I built a bootstrap with Netty containing a listening EventLoopGroup and a worker EventLoopGroup for transmission, i.e. a typical ServerBootstrap, and then configured the TCP options: TCP_NODELAY and the send/receive buffer sizes.

            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 3000)
                .option(ChannelOption.SO_SNDBUF, 1024 * 256)
                .option(ChannelOption.SO_RCVBUF, 1024 * 256)
                .option(ChannelOption.TCP_NODELAY, true);

After that, no codec is loaded at all; the processing handler is added directly.

bootstrap.group(bossLoop, workerLoop)
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        protected void initChannel(SocketChannel ch) throws Exception {
            ChannelPipeline p = ch.pipeline();
            p.addLast("logging", new LoggingHandler(LogLevel.DEBUG));
            if (sslCtx != null) {
                p.addLast(sslCtx.newHandler(ch.alloc()));
            }
            p.addLast(new MpegTsHandler());
        }
    });
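The snippets above omit the channel class and the bind step. For reference, here is a minimal sketch of how the whole HTTP receiver might be wired up and started, assuming port 9090 (the address ffmpeg pushes to in the command above); the class name MpegTsHttpServer is my own, this consolidation is not code from the original post.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MpegTsHttpServer {

    public static void main(String[] args) throws Exception {
        EventLoopGroup bossLoop = new NioEventLoopGroup(1);
        EventLoopGroup workerLoop = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossLoop, workerLoop)
                .channel(NioServerSocketChannel.class)
                .option(ChannelOption.SO_RCVBUF, 1024 * 256)
                .childOption(ChannelOption.TCP_NODELAY, true)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // raw bytes from ffmpeg go straight to the broadcast handler
                        ch.pipeline().addLast(new MpegTsHandler());
                    }
                });
            // 9090 is the address ffmpeg pushes the HTTP stream to
            ChannelFuture future = bootstrap.bind(9090).sync();
            future.channel().closeFuture().sync();
        } finally {
            bossLoop.shutdownGracefully();
            workerLoop.shutdownGracefully();
        }
    }
}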

The processing handler itself is also super simple: it just broadcasts the message to the group of WS connections and is done. Note that the handler works on ByteBuf; there is no need to parse it as HTTP.

public class MpegTsHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        // forward the byte stream
        PlayerGroup.broadcast(msg);
    }
}
PlayerGroup here wraps a ChannelGroup. Note that the buffer is retain()ed at broadcast time.

public class PlayerGroup {
    static private ChannelGroup channelGroup
        = new DefaultChannelGroup(ImmediateEventExecutor.INSTANCE);

    static public void addChannel(Channel channel) {
        channelGroup.add(channel);
    }

    static public void removeChannel(Channel channel) {
        channelGroup.remove(channel);
    }

    static public void broadcast(ByteBuf message) {
        if (channelGroup == null || channelGroup.isEmpty()) {
            return;
        }
        BinaryWebSocketFrame frame = new BinaryWebSocketFrame(message);
        message.retain();
        channelGroup.writeAndFlush(frame);
    }

    static public void destroy() {
        if (channelGroup == null || channelGroup.isEmpty()) {
            return;
        }
        channelGroup.close();
    }
}
This can indeed be played; the stream just still contains the HTTP header bytes. As for whether stripping the HTTP headers would work, I did not try it, because whether or not you strip them when forwarding, the receiving side still carries a lot of overhead, so I moved on to pushing the stream over UDP instead.
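For what it's worth, one way the header stripping could be attempted (untried in this article, so only a sketch) is to put Netty's HttpRequestDecoder in front of the handler and forward only the body chunks. The handler name HttpBodyOnlyHandler is made up for illustration:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;

// Pipeline would be: p.addLast(new HttpRequestDecoder()); p.addLast(new HttpBodyOnlyHandler());
public class HttpBodyOnlyHandler extends SimpleChannelInboundHandler<HttpObject> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
        if (msg instanceof HttpRequest) {
            // the request line and headers of ffmpeg's push request: drop them
            return;
        }
        if (msg instanceof HttpContent) {
            // each HttpContent chunk carries a slice of the MPEG-TS byte stream
            PlayerGroup.broadcast(((HttpContent) msg).content());
        }
    }
}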


"UDP push-stream processing"

The UDP server uses NioDatagramChannel. It also needs no codec module; the processing handler is attached directly.

EventLoopGroup bossLoop = null;

try {
    bossLoop = new NioEventLoopGroup();
    Bootstrap bootstrap = new Bootstrap();

    bootstrap.channel(NioDatagramChannel.class);
    bootstrap.group(bossLoop)
        .option(ChannelOption.SO_BROADCAST, true)   // support broadcast
        .option(ChannelOption.SO_SNDBUF, 1024 * 256)
        .option(ChannelOption.SO_RCVBUF, 1024 * 256);

    bootstrap.handler(new UdpMpegTsHandler());

    ChannelFuture future = bootstrap.bind(port).sync();
    if (future.isSuccess()) {
        System.out.println("UDP stream server started at port " + port + ".");
    }
    future.channel().closeFuture().await();
} catch (Exception e) {
    // ignored
} finally {
    if (bossLoop != null) {
        bossLoop.shutdownGracefully();
    }
}

The handler is also super simple. The only thing to note is that the data type the handler receives is no longer ByteBuf but DatagramPacket.

public class UdpMpegTsHandler extends SimpleChannelInboundHandler<DatagramPacket> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
        PlayerGroup.broadcast(msg.content());
    }
}
As before, the data is forwarded to the group and broadcast. The only thing to be aware of is taking the ByteBuf out of the DatagramPacket with content().


"WebSocket"

The WebSocket part differs slightly from the WS server written earlier. Here the WS connection carries binary data, so the WS frame type is BinaryWebSocketFrame. The codecs the server loads look like this:

bootstrap.localAddress(new InetSocketAddress(port))
    .childHandler(new ChannelInitializer<Channel>() {
        @Override
        protected void initChannel(Channel ch) throws Exception {
            ch.pipeline().addLast("readTimeout", new ReadTimeoutHandler(45)); // disconnect if nothing is received for a long time
            ch.pipeline().addLast("httpServerCodec", new HttpServerCodec());
            ch.pipeline().addLast("chunkedWriter", new ChunkedWriteHandler());
            ch.pipeline().addLast("httpAggregator", new HttpObjectAggregator(65535));
            ch.pipeline().addLast("wsProtocolHandler",
                new WebSocketServerProtocolHandler(WEBSOCKET_PATH, "Haofei", true));
            ch.pipeline().addLast("wsBinaryDecoder", new WebSocketFrameDecoder());  // decode WS frames into bytes
            ch.pipeline().addLast("wsEncoder", new WebSocketFramePrepender());      // encode bytes as WS frames
            ch.pipeline().addLast(new VideoPlayerHandler());
        }
    });

WebSocketFrameDecoder is a custom decoder, WebSocketFramePrepender is a custom encoder, and VideoPlayerHandler is a custom processing handler. First look at the decoder, WebSocketFrameDecoder; here I use direct memory.

public class WebSocketFrameDecoder extends MessageToMessageDecoder<WebSocketFrame> {
    @Override
    protected void decode(ChannelHandlerContext ctx, WebSocketFrame msg, List<Object> out) throws Exception {
        ByteBuf buff = msg.content();
        byte[] messageBytes = new byte[buff.readableBytes()];
        buff.readBytes(messageBytes);
        // TODO: be careful with direct memory
        ByteBuf byteBuf = PooledByteBufAllocator.DEFAULT.buffer(); // direct memory
        byteBuf.writeBytes(messageBytes);
        out.add(byteBuf.retain());
    }
}
Then the encoder, which now encodes a binary stream rather than a character stream as before.

public class WebSocketFramePrepender extends MessageToMessageEncoder<ByteBuf> {
    @Override
    protected void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
        WebSocketFrame webSocketFrame = new BinaryWebSocketFrame(msg);
        out.add(webSocketFrame);
    }
}
The final handler is also super simple: it just adds the channel to the group.

public class VideoPlayerHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ServerLogger.log("WS connection ctx: " + ctx);
        PlayerGroup.addChannel(ctx.channel());
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        // no inbound video data is expected from the player
    }
}


In this way, the ChannelGroup bridges the UDP channel to the WS channels.
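One loose end: PlayerGroup.removeChannel is defined above but never shown being called. Presumably the intended place is a channelInactive override in VideoPlayerHandler, roughly like the sketch below (this is an assumption on my part; note that DefaultChannelGroup also removes closed channels on its own, so this is mainly for symmetry):

    // inside VideoPlayerHandler, alongside channelActive:
    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // drop the closed WS connection from the broadcast group
        PlayerGroup.removeChannel(ctx.channel());
    }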

Conclusion

UDP gives better results; pushing over HTTP often produced a corrupted picture. Finally, a word on the JSMpeg options: if playback quality is poor, you need to adjust them.

The playback side is a plain HTML page that loads JSMpeg and points the player at the WebSocket address.
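The original example page did not survive formatting, so here is a minimal sketch of such a page, assuming a JSMpeg build whose Player accepts a protocols option and a WS server listening at ws://localhost:8080/ws (both are assumptions; use your own address, path, and subprotocol):

<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>MPEG-TS over WebSocket</title></head>
<body>
  <canvas id="video-canvas"></canvas>
  <script src="jsmpeg.min.js"></script>
  <script>
    var canvas = document.getElementById('video-canvas');
    // protocols must match the subprotocol configured in WebSocketServerProtocolHandler
    var player = new JSMpeg.Player('ws://localhost:8080/ws', {
      canvas: canvas,
      protocols: 'Haofei'
    });
  </script>
</body>
</html>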

Note the protocols entry in the player config; remember to keep it consistent with the subprotocol configured on the WS server:

ch.pipeline().addLast("wsProtocolHandler",
        new WebSocketServerProtocolHandler(WEBSOCKET_PATH, "Haofei", true));
This subprotocol can double as a crude form of authentication, but remember not to pass "" (the empty string), because Netty does not treat the empty string as a subprotocol. I am not sure whether that counts as a Netty bug.




