Android live streaming: 1000 lines of Java, no JNI, 0.8 to 3 seconds of latency, hardware encoding on the mobile side


Project home: https://github.com/ossrs/srs-sea

SRS server project: https://github.com/ossrs/srs

A version that supports RTMP push streaming: https://github.com/begeekmyfriend/yasea

Android 4.1 introduced MediaCodec, which can encode the camera's images in hardware, making live streaming from the device practical.

Most Android apps that push to a server use FFmpeg, i.e. software encoding. Using Android's hardware encoder actually gives a better experience.

There are plenty of articles about this online, but few walk through a complete, working pipeline, especially how to push the stream to the server. This article goes through the whole process of Android live streaming.

AndroidPublisher proposes a new approach to Android live streaming, built around the SRS server. The advantages:

- Only system classes are used, with no JNI or C libraries: simple, reliable, and about 1000 lines of Java code in total.
- Hardware encoding instead of software encoding, so the system load is low: encoding at 800kbps uses roughly 13% CPU.
- Low latency comparable to RTMP, 0.8 to 3 seconds; the protocol is an HTTP FLV stream, which works on the same principle as RTMP.
- A small installation package with no heavy dependencies: the compiled APK is only about 1405KB.
- Easy to integrate: just pull in the SrsHttpFlv class, which does the packaging and sending, and use it in any app.

An Android live-streaming pipeline has several major links:

- Open the camera and start the preview to obtain YUV image data, which is the uncompressed picture.
- Set the picture and preview sizes and compute the size of the YUV buffer. Do not simply multiply width by height by 1.5; calculate it as the documentation describes (shown below).
- Once the YUV frames are coming in, they can also be previewed, as long as the camera is bound to a SurfaceHolder.
- Encode the YUV with MediaCodec and MediaFormat: MediaCodec does the encoding, MediaFormat describes the stream, and the encoded output is in AnnexB format.
- When setting MediaCodec's colorFormat, check whether the codec actually supports it, i.e. query the supported colorFormats from the codec's capabilities.
- Feed the YUV image into MediaCodec's input buffer and read the encoded data from the output buffer; the format of that data is AnnexB.
- When calling queueInputBuffer, the PTS must be specified, otherwise the encoded output is discarded and nothing comes out.
- Send the encoded AnnexB data to the server. This is usually done over RTMP (librtmp/srs-librtmp/FFmpeg), since streaming servers generally take RTMP as input; if the server supports an HTTP FLV POST, the data can be sent to it directly over HTTP instead.

A rough sketch of how these pieces fit together follows this list. (The original article shows a screenshot of the app running here.)
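In outline (a summary sketch under the assumptions above, not the project's literal source; the field names match the code shown later):

    // Rough data-flow sketch (summary only, not the project's literal code):
    //
    //   Camera (YV12 preview)            -- onYuvFrame callback -->
    //   MediaCodec "video/avc" encoder   -- AnnexB es stream    -->
    //   FLV muxer (the SrsHttpFlv class) -- FLV tags            -->
    //   HttpURLConnection (chunked POST) -- HTTP FLV stream     --> SRS server
    //
    private Camera camera;             // produces YV12 preview frames into a reusable buffer
    private MediaCodec encoder;        // hardware H.264 encoder, output is an AnnexB es stream
    private BufferedOutputStream bos;  // body of the chunked HTTP POST towards the SRS server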

Below, each important link is broken down.

YUV Image

The first step: open the camera and start the preview:

    camera = Camera.open();

    Camera.Parameters parameters = camera.getParameters();
    parameters.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
    parameters.setWhiteBalance(Camera.Parameters.WHITE_BALANCE_AUTO);
    parameters.setSceneMode(Camera.Parameters.SCENE_MODE_AUTO);
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    parameters.setPreviewFormat(ImageFormat.YV12);

    Camera.Size size = null;
    List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
    for (int i = 0; i < sizes.size(); i++) {
        //Log.i(TAG, String.format("camera supported picture size %dx%d", sizes.get(i).width, sizes.get(i).height));
        if (sizes.get(i).width == 640) {
            size = sizes.get(i);
        }
    }
    parameters.setPictureSize(size.width, size.height);
    Log.i(TAG, String.format("set the picture size in %dx%d", size.width, size.height));

    sizes = parameters.getSupportedPreviewSizes();
    for (int i = 0; i < sizes.size(); i++) {
        //Log.i(TAG, String.format("camera supported preview size %dx%d", sizes.get(i).width, sizes.get(i).height));
        if (sizes.get(i).width == 640) {
            vsize = size = sizes.get(i);
        }
    }
    parameters.setPreviewSize(size.width, size.height);
    Log.i(TAG, String.format("set the preview size in %dx%d", size.width, size.height));

    camera.setParameters(parameters);

    // set the callback and start the preview.
    buffer = new byte[getYuvBuffer(size.width, size.height)];
    camera.addCallbackBuffer(buffer);
    camera.setPreviewCallbackWithBuffer(onYuvFrame);
    try {
        camera.setPreviewDisplay(preview.getHolder());
    } catch (IOException e) {
        Log.e(TAG, "preview video failed.");
        e.printStackTrace();
        return;
    }
    Log.i(TAG, String.format("start to preview in %dx%d, buffer %dB", size.width, size.height, buffer.length));
    camera.startPreview();

The function that computes the YUV buffer size must follow the documentation rather than simply using width * height * 3/2:

    // For the buffer for YV12 (Android YUV), @see below:
    // https://developer.android.com/reference/android/hardware/Camera.Parameters.html#setPreviewFormat(int)
    // https://developer.android.com/reference/android/graphics/ImageFormat.html#YV12
    private int getYuvBuffer(int width, int height) {
        // stride = ALIGN(width, 16)
        int stride = (int) Math.ceil(width / 16.0) * 16;
        // y_size = stride * height
        int y_size = stride * height;
        // c_stride = ALIGN(stride/2, 16)
        int c_stride = (int) Math.ceil(width / 32.0) * 16;
        // c_size = c_stride * height/2
        int c_size = c_stride * height / 2;
        // size = y_size + c_size * 2
        return y_size + c_size * 2;
    }
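As a quick check of why the naive *1.5 estimate is not always enough, here is a worked example; the 200x200 size is an arbitrary value chosen only for illustration:

    int width = 200, height = 200;                      // arbitrary example size
    int naive = width * height * 3 / 2;                 // 60000 bytes
    int stride = (int) Math.ceil(width / 16.0) * 16;    // ALIGN(200, 16)   = 208
    int ySize = stride * height;                        // 208 * 200        = 41600
    int cStride = (int) Math.ceil(width / 32.0) * 16;   // ALIGN(208/2, 16) = 112
    int cSize = cStride * height / 2;                   // 112 * 100        = 11200
    int documented = ySize + cSize * 2;                 // 41600 + 22400    = 64000
    // 64000 != 60000: a buffer sized with *1.5 would be too small for addCallbackBuffer().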

Image Coding

The second link: set the encoder parameters and start it:
    // encoder yuv to 264 es stream.
    // requires sdk level 16+, Android 4.1, 4.1.1, the JELLY_BEAN
    try {
        encoder = MediaCodec.createEncoderByType(VCODEC);
    } catch (IOException e) {
        Log.e(TAG, "create encoder failed.");
        e.printStackTrace();
        return;
    }
    ebi = new MediaCodec.BufferInfo();
    presentationTimeUs = new Date().getTime() * 1000;

    // start the encoder.
    // @see https://developer.android.com/reference/android/media/MediaCodec.html
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, vsize.width, vsize.height);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 125000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT, chooseColorFormat());
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
    Log.i(TAG, "encoder start");
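The fragments above reference several member fields. The declarations below are a plausible sketch of what they look like; the types follow from the calls above, and the concrete value of VCODEC (the AVC MIME type) is an assumption:

    private static final String VCODEC = "video/avc";  // same string as MediaFormat.MIMETYPE_VIDEO_AVC
    private Camera camera;                    // the opened camera
    private Camera.Size vsize;                // the chosen preview size
    private byte[] buffer;                    // YUV callback buffer, sized by getYuvBuffer()
    private MediaCodec encoder;               // the hardware H.264 encoder
    private MediaCodec.BufferInfo ebi;        // reused output buffer info
    private long presentationTimeUs;          // base timestamp used to compute each frame's PTS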

The colorFormat must be chosen from the formats the encoder actually supports, otherwise configuring it fails with an unsupported-format error:
    // choose the right supported color format. @see below:
    // https://developer.android.com/reference/android/media/MediaCodecInfo.html
    // https://developer.android.com/reference/android/media/MediaCodecInfo.CodecCapabilities.html
    private int chooseColorFormat() {
        MediaCodecInfo ci = null;
        int nbCodecs = MediaCodecList.getCodecCount();
        for (int i = 0; i < nbCodecs; i++) {
            MediaCodecInfo mci = MediaCodecList.getCodecInfoAt(i);
            if (!mci.isEncoder()) {
                continue;
            }
            String[] types = mci.getSupportedTypes();
            for (int j = 0; j < types.length; j++) {
                if (types[j].equalsIgnoreCase(VCODEC)) {
                    ci = mci;
                    break;
                }
            }
        }

        int matchedColorFormat = 0;
        MediaCodecInfo.CodecCapabilities cc = ci.getCapabilitiesForType(VCODEC);
        for (int i = 0; i < cc.colorFormats.length; i++) {
            int cf = cc.colorFormats[i];
            Log.i(TAG, String.format("encoder %s supports color format %d", ci.getName(), cf));
            // choose YUV for h.264, prefer the bigger one.
            if (cf >= cc.COLOR_FormatYUV411Planar && cf <= cc.COLOR_FormatYUV422SemiPlanar) {
                if (cf > matchedColorFormat) {
                    matchedColorFormat = cf;
                }
            }
        }

        Log.i(TAG, String.format("encoder %s choose color format %d", ci.getName(), matchedColorFormat));
        return matchedColorFormat;
    }

The third link: in the YUV image callback, feed the frame to the encoder and fetch the encoded output:
    // when got YUV frame from camera.
    // @see https://developer.android.com/reference/android/media/MediaCodec.html
    final Camera.PreviewCallback onYuvFrame = new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            //Log.i(TAG, String.format("got YUV image, size=%d", data.length));

            // feed the encoder with the yuv frame, get the encoded 264 es stream.
            ByteBuffer[] inBuffers = encoder.getInputBuffers();
            ByteBuffer[] outBuffers = encoder.getOutputBuffers();

            int inBufferIndex = encoder.dequeueInputBuffer(-1);
            Log.i(TAG, String.format("try to dequeue input buffer, ii=%d", inBufferIndex));
            if (inBufferIndex >= 0) {
                ByteBuffer bb = inBuffers[inBufferIndex];
                bb.clear();
                bb.put(data, 0, data.length);
                long pts = new Date().getTime() * 1000 - presentationTimeUs;
                Log.i(TAG, String.format("feed YUV to encode %dB, pts=%d", data.length, pts / 1000));
                encoder.queueInputBuffer(inBufferIndex, 0, data.length, pts, 0);
            }

            for (;;) {
                int outBufferIndex = encoder.dequeueOutputBuffer(ebi, 0);
                Log.i(TAG, String.format("try to dequeue output buffer, ii=%d, oi=%d", inBufferIndex, outBufferIndex));
                if (outBufferIndex >= 0) {
                    ByteBuffer bb = outBuffers[outBufferIndex];
                    onEncodedAnnexbFrame(bb, ebi);
                    encoder.releaseOutputBuffer(outBufferIndex, false);
                }
                if (outBufferIndex < 0) {
                    break;
                }
            }

            // to fetch next frame.
            camera.addCallbackBuffer(buffer);
        }
    };

MUX to FLV stream

After obtaining the encoded AnnexB data, this callback hands it to the muxer, which packages it and sends it to the server (the muxer here is the project's FLV muxer, the SrsHttpFlv class mentioned earlier, not android.media.MediaMuxer):
    // when got encoded h264 es stream.
    private void onEncodedAnnexbFrame(ByteBuffer es, MediaCodec.BufferInfo bi) {
        try {
            muxer.writeSampleData(videoTrack, es, bi);
        } catch (Exception e) {
            Log.e(TAG, "muxer write sample failed.");
            e.printStackTrace();
        }
    }

This last link is usually done with librtmp, srs-librtmp, or FFmpeg. But if the server can accept an HTTP POST directly, the stream can be sent with nothing more than HttpURLConnection. SRS 3 will support HTTP FLV push, so the encoded AnnexB data only needs to be converted to FLV and posted to the SRS server.
SRS 2 already provides an HTTP FLV stream caster, which accepts a POSTed FLV stream and treats it the same as an RTMP publish. The AnnexB data can be packaged and sent directly with the flvmuxer provided by android-publisher; see: https://github.com/simple-rtmp-server/android-publisher
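The publish target is then just an HTTP URL pointing at the server's FLV caster. The host, port, and app/stream path below are placeholders; they must match the stream_caster section of the actual SRS configuration:

    // Hypothetical publish URL for the HTTP FLV caster; adjust the host, port
    // and app/stream names to match the SRS server's stream_caster configuration.
    String url = "http://192.168.1.10:8936/live/livestream.flv";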
The AnnexB-to-FLV packaging process looks like this:
    public void writeVideoSample(final ByteBuffer bb, MediaCodec.BufferInfo bi) throws Exception {
        int pts = (int) (bi.presentationTimeUs / 1000);
        int dts = (int) pts;

        ArrayList<SrsAnnexbFrame> ibps = new ArrayList<SrsAnnexbFrame>();
        int frame_type = SrsCodecVideoAVCFrame.InterFrame;
        Log.i(TAG, String.format("video %d/%d bytes, offset=%d, position=%d, pts=%d", bb.remaining(), bi.size, bi.offset, bb.position(), pts));

        // send each frame.
        while (bb.position() < bi.size) {
            SrsAnnexbFrame frame = avc.annexb_demux(bb, bi);

            // 5bits, 7.3.1 NAL unit syntax,
            // H.264-AVC-ISO_IEC_14496-10.pdf, page 44.
            //  7: SPS, 8: PPS, 5: I Frame, 1: P Frame
            int nal_unit_type = (int) (frame.frame.get(0) & 0x1f);
            if (nal_unit_type == SrsAvcNaluType.SPS || nal_unit_type == SrsAvcNaluType.PPS) {
                Log.i(TAG, String.format("annexb demux %dB, pts=%d, frame=%dB, nalu=%d", bi.size, pts, frame.size, nal_unit_type));
            }

            // for IDR frame, the frame is keyframe.
            if (nal_unit_type == SrsAvcNaluType.IDR) {
                frame_type = SrsCodecVideoAVCFrame.KeyFrame;
            }

            // ignore the nalu type aud(9)
            if (nal_unit_type == SrsAvcNaluType.AccessUnitDelimiter) {
                continue;
            }

            // for sps
            if (avc.is_sps(frame)) {
                byte[] sps = new byte[frame.size];
                frame.frame.get(sps);

                if (utils.srs_bytes_equals(h264_sps, sps)) {
                    continue;
                }
                h264_sps_changed = true;
                h264_sps = sps;
                continue;
            }

            // for pps
            if (avc.is_pps(frame)) {
                byte[] pps = new byte[frame.size];
                frame.frame.get(pps);

                if (utils.srs_bytes_equals(h264_pps, pps)) {
                    continue;
                }
                h264_pps_changed = true;
                h264_pps = pps;
                continue;
            }

            // ibp frame.
            SrsAnnexbFrame nalu_header = avc.mux_ibp_frame(frame);
            ibps.add(nalu_header);
            ibps.add(frame);
        }

        write_h264_sps_pps(dts, pts);
        write_h264_ipb_frame(ibps, frame_type, dts, pts);
    }
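The avc.annexb_demux call splits the encoder output into individual NAL units; its implementation is not shown in this excerpt. The sketch below is a simplified, hypothetical version that only illustrates the idea: AnnexB separates NAL units with 00 00 01 or 00 00 00 01 start codes, so demuxing means finding the next start code and slicing out the bytes in between.

    // Simplified sketch (not the project's actual annexb_demux): return the next
    // NAL unit in the buffer by scanning for AnnexB start codes (00 00 01 / 00 00 00 01).
    private static ByteBuffer nextNalu(ByteBuffer bb) {
        // skip the start code in front of the current NAL unit.
        while (bb.remaining() >= 3 && bb.get(bb.position()) == 0x00) {
            if (bb.get(bb.position() + 1) == 0x00 && bb.get(bb.position() + 2) == 0x01) {
                bb.position(bb.position() + 3);
                break;
            }
            bb.position(bb.position() + 1);
        }
        // collect bytes until the next start code, or to the end of the buffer.
        int start = bb.position();
        int end = bb.limit();
        for (int i = start; i + 3 <= bb.limit(); i++) {
            if (bb.get(i) == 0x00 && bb.get(i + 1) == 0x00 && bb.get(i + 2) == 0x01) {
                // a 00 before "00 00 01" means the next start code is the 4-byte form.
                end = (i > start && bb.get(i - 1) == 0x00) ? i - 1 : i;
                break;
            }
        }
        ByteBuffer nalu = bb.duplicate();
        nalu.limit(end);
        bb.position(end);   // leave bb at the next start code for the next call.
        return nalu.slice();
    }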

Sending to the server just uses the system's HTTP client. The code is as follows:
    private void reconnect() throws Exception {
        // when bos is not null, already connected.
        if (bos != null) {
            return;
        }
        disconnect();

        URL u = new URL(url);
        conn = (HttpURLConnection) u.openConnection();

        Log.i(TAG, String.format("worker: connect to SRS by url=%s", url));
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(0);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        bos = new BufferedOutputStream(conn.getOutputStream());
        Log.i(TAG, String.format("worker: muxer opened, url=%s", url));

        // write 13B header
        // 9bytes header and 4bytes previous-tag-size
        byte[] flv_header = new byte[]{
                'F', 'L', 'V',  // Signatures "FLV"
                (byte) 0x01,    // File version (for example, 0x01 for FLV version 1)
                (byte) 0x00,    // 4, audio; 1, video; 5, audio+video.
                (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x09, // DataOffset UI32, the length of this header in bytes
                (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x00  // PreviousTagSize0 UI32, always 0
        };
        bos.write(flv_header);
        bos.flush();

        Log.i(TAG, "worker: flv header ok.");
        sendFlvTag(bos, videoSequenceHeader);
    }

    private void sendFlvTag(BufferedOutputStream bos, SrsFlvFrame frame) throws IOException {
        if (frame == null) {
            return;
        }

        if (frame.frame_type == SrsCodecVideoAVCFrame.KeyFrame) {
            Log.i(TAG, String.format("worker: got frame type=%d, dts=%d, size=%dB", frame.type, frame.dts, frame.tag.size));
        } else {
            //Log.i(TAG, String.format("worker: got frame type=%d, dts=%d, size=%dB", frame.type, frame.dts, frame.tag.size));
        }

        // cache the sequence header.
        if (frame.type == SrsCodecFlvTag.Video && frame.avc_aac_type == SrsCodecVideoAVCType.SequenceHeader) {
            videoSequenceHeader = frame;
        }
        if (bos == null || frame.tag.size <= 0) {
            return;
        }

        // write the 11B flv tag header
        ByteBuffer th = ByteBuffer.allocate(11);
        // Reserved UB [2]
        // Filter UB [1]
        // TagType UB [5]
        // DataSize UI24
        int tag_size = (int) ((frame.tag.size & 0x00FFFFFF) | ((frame.type & 0x1F) << 24));
        th.putInt(tag_size);
        // Timestamp UI24
        // TimestampExtended UI8
        int time = (int) (((frame.dts << 8) & 0xFFFFFF00) | ((frame.dts >> 24) & 0x000000FF));
        th.putInt(time);
        // StreamID UI24, always 0.
        th.put((byte) 0);
        th.put((byte) 0);
        th.put((byte) 0);
        bos.write(th.array());

        // write the flv tag data.
        byte[] data = frame.tag.frame.array();
        bos.write(data, 0, frame.tag.size);

        // write the 4B previous tag size.
        // @remark, we append the tag size, this is different to SRS which writes the RTMP packet.
        ByteBuffer pps = ByteBuffer.allocate(4);
        pps.putInt((int) (frame.tag.size + 11));
        bos.write(pps.array());
        bos.flush();

        if (frame.frame_type == SrsCodecVideoAVCFrame.KeyFrame) {
            Log.i(TAG, String.format("worker: send frame type=%d, dts=%d, size=%dB, tag_size=%#x, time=%#x",
                    frame.type, frame.dts, frame.tag.size, tag_size, time));
        }
    }
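To make the bit packing of the 11-byte tag header concrete, here is a small worked example; the size and timestamp values are arbitrary:

    // Worked example of the tag header packing above (values are arbitrary):
    int type = 9;                 // SrsCodecFlvTag.Video: an FLV video tag
    int size = 1000;              // tag data size in bytes
    int dts = 0x0186A0;           // 100000 ms

    int tag_size = (size & 0x00FFFFFF) | ((type & 0x1F) << 24);
    // tag_size = 0x090003E8 -> written big-endian by putInt(): 09 00 03 E8
    //            TagType = 9 (video), DataSize = 0x0003E8 = 1000

    int time = (int) (((dts << 8) & 0xFFFFFF00) | ((dts >> 24) & 0x000000FF));
    // time = 0x0186A000 -> written big-endian: 01 86 A0 00
    //        Timestamp UI24 = 0x0186A0 (100000 ms), TimestampExtended UI8 = 0x00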

Everything is plain Java, and the final APK compiles to only about 1405KB. Stability is also much higher: I have streamed my entire commute with it, and apart from the picture not being very clear at low bitrates, it never died.

Winlin
