Android Audio/Video In Depth, Part 16: Pushing the Phone Camera Stream with FFmpeg for Live Broadcast (with source download)

Source: Internet
Author: User

Source Address
https://github.com/979451341/RtmpCamera/tree/master

Configuring the RTMP server was covered before, but I'll paste it here directly.

1. Configuring the RTMP Server

I won't say much about this; there are two earlier posts, one for Mac and one for Windows.

Building an RTMP server on Mac:
https://www.jianshu.com/p/6fcec3b9d644

Building an RTMP server on Windows (crtmpserver and Nginx):
https://www.jianshu.com/p/c71cc39f72ec
2. About the IP address in the push-stream URL

In my setup the phone provides a hotspot and the computer connects to it. The RTMP server runs on the computer, and its push URL contains localhost. On the computer itself, localhost resolves to 127.0.0.1, but an external device such as the phone cannot use localhost, because 127.0.0.1 only refers to the device it is typed on. Instead, use the computer's LAN IP address within the hotspot network. On the phone, go to Settings > More > Mobile network sharing > Portable WLAN hotspot > Manage device list, where you can see the computer's LAN IP address.

3. Code

We use the old SurfaceView-plus-Camera combination. Most of it needs no explanation; it is the camera configuration that requires attention.

Camera.Parameters parameters = camera.getParameters();
// inspect the supported capture parameters
for (Camera.Size size : parameters.getSupportedPictureSizes()) {
    LogUtils.d(size.width + "  " + size.height);
}

Note that this prints the supported widths and heights. The image size you then configure on the camera must be one of this group, otherwise the camera callback delivers no data. This is the key point.

parameters.setPictureSize(screenWidth, screenHeight); // set the picture size

The width and height in the CPP file must match as well, or the program will crash. In fact we could accept any width and height by scaling the frames, but I don't do that here.

int width = 320;
int height = 240;
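Since the configured size must come from the supported list, one robust approach is to pick the supported size closest to the desired target. A minimal sketch in plain C (the helper name is illustrative; sizes are given as {width, height} pairs rather than Android's Camera.Size objects):

```c
#include <stdlib.h>

/* Hypothetical helper: pick the supported camera size whose pixel area is
 * closest to the requested width x height. `sizes` holds {w, h} pairs. */
static void choose_closest_size(const int sizes[][2], int n,
                                int want_w, int want_h,
                                int *out_w, int *out_h) {
    long want = (long) want_w * want_h;
    long best_diff = -1;
    for (int i = 0; i < n; i++) {
        long area = (long) sizes[i][0] * sizes[i][1];
        long diff = labs(area - want);
        if (best_diff < 0 || diff < best_diff) {
            best_diff = diff;
            *out_w = sizes[i][0];
            *out_h = sizes[i][1];
        }
    }
}
```

The same idea works on the Java side by iterating getSupportedPictureSizes(); the point is that the requested 320x240 is guaranteed to be replaced by something the camera actually supports.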

Camera Preview Callback

camera.setPreviewCallback(new StreamIt()); // set the callback class

In this callback we hand over the data to be pushed, gated by the isPlaying flag; pushing starts when the Start button is clicked. The frame data is submitted to a single-threaded executor, which guarantees that one frame finishes encoding before the next one starts.

public class StreamIt implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(final byte[] data, Camera camera) {
        if (isPlaying) {
            long endTime = System.currentTimeMillis();
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    encodeTime = System.currentTimeMillis();
                    FFmpegHandle.getInstance().onFrameCallback(data);
                    LogUtils.w("Encoded frame " + (encodeCount++) + ", took "
                            + (System.currentTimeMillis() - encodeTime) + " ms");
                }
            });
            LogUtils.d("Captured frame " + (++count) + ", interval since last frame: "
                    + (endTime - previewTime) + "  " + Thread.currentThread().getName());
            previewTime = endTime;
        }
    }
}

The initVideo function was executed beforehand; it initializes FFmpeg and passes in the push-stream address.

Calculate the size of the YUV data to be encoded

yuv_width = width;
yuv_height = height;
y_length = width * height;
uv_length = width * height / 4;
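These sizes follow from the YUV 4:2:0 layout: one luma byte per pixel, and two chroma planes each subsampled 2x2. A small standalone check of the arithmetic (the function name is illustrative):

```c
/* For YUV420P (and NV21), the Y plane holds one byte per pixel and each of
 * the two chroma planes holds width*height/4 bytes (2x2 subsampling). */
static int yuv420p_frame_size(int width, int height) {
    int y_length  = width * height;       /* luma plane */
    int uv_length = width * height / 4;   /* one chroma plane */
    return y_length + 2 * uv_length;      /* total = width * height * 3 / 2 */
}
```

For the 320x240 frames used here this gives 76800 luma bytes, 19200 bytes per chroma plane, 115200 bytes in total.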

Initializing the component and output encoding environment

av_register_all();
// initialize the output context
avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", out_path);
// find the output encoder
pCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!pCodec) {
    loge("Can not find encoder!\n");
    return -1;
}

Configuring the Encoding Environment

pCodecCtx = avcodec_alloc_context3(pCodec);
// codec ID; here the H.264 encoder (could also be taken from video_st's codec ID)
pCodecCtx->codec_id = pCodec->id;
// pixel format, i.e. the color space used to represent a pixel
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
// type of data the encoder handles
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
// target video frame size, in pixels
pCodecCtx->width = width;
pCodecCtx->height = height;
pCodecCtx->framerate = (AVRational) {fps, 1};
// time base, expressed as a fraction
pCodecCtx->time_base = (AVRational) {1, fps};
// target bit rate; the higher the bit rate, the larger the video
pCodecCtx->bit_rate = 400000;
// allowed bit-rate tolerance; the larger the value, the smaller the video
pCodecCtx->bit_rate_tolerance = 4000000;
pCodecCtx->gop_size = 50;
/* Some formats want stream headers to be separate. */
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

// H.264 codec params
pCodecCtx->me_range = 16;
pCodecCtx->max_qdiff = 4;
pCodecCtx->qcompress = 0.6;
// maximum and minimum quantization parameters
pCodecCtx->qmin = 10;
pCodecCtx->qmax = 51;
// optional params
// number of B-frames allowed between two non-B frames;
// 0 means B-frames are not used; more B-frames make the video smaller
pCodecCtx->max_b_frames = 0;

if (pCodecCtx->codec_id == AV_CODEC_ID_H264) {
    av_dict_set(&param, "preset", "slow", 0);
    /**
     * This is very important; without it the latency is very large.
     * ultrafast, superfast, veryfast, faster, fast, medium,
     * slow, slower, veryslow, placebo are the x264 encoding-speed presets.
     */
    av_dict_set(&param, "preset", "superfast", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);
}

Open Encoder

if (avcodec_open2(pCodecCtx, pCodec, &param) < 0) {
    loge("Failed to open encoder!\n");
    return -1;
}

Create and configure a video stream

video_st = avformat_new_stream(ofmt_ctx, pCodec);
if (video_st == NULL) {
    return -1;
}
video_st->time_base.num = 1;
video_st->time_base.den = fps;

video_st->codec = pCodecCtx;
video_st->codecpar->codec_tag = 0;
avcodec_parameters_from_context(video_st->codecpar, pCodecCtx);

Check that the output URL can be opened, then write the file header according to the output format

if (avio_open(&ofmt_ctx->pb, out_path, AVIO_FLAG_READ_WRITE) < 0) {
    loge("Failed to open output file!\n");
    return -1;
}
// write the file header
avformat_write_header(ofmt_ctx, NULL);

The next step is to process the data sent by the camera.

Convert data format

jbyte *in = env->GetByteArrayElements(buffer_, NULL);

Get the buffer size for one picture from the encoder settings and allocate the picture buffer

int picture_size = av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width,
                                            pCodecCtx->height, 1);
uint8_t *buffers = (uint8_t *) av_malloc(picture_size);

Assign the previously allocated picture buffer to the AVFrame

pFrameYUV = av_frame_alloc();
// point the AVFrame's image data at buffers; the pixel format determines
// how many data pointers are filled in
av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, buffers, pCodecCtx->pix_fmt,
                     pCodecCtx->width, pCodecCtx->height, 1);

Convert the frame format: the Android camera delivers NV21 data, which is converted here to YUV420P

memcpy(pFrameYUV->data[0], in, y_length); // Y
pFrameYUV->pts = count;
for (int i = 0; i < uv_length; i++) {
    // store the V data in the third plane
    *(pFrameYUV->data[2] + i) = *(in + y_length + i * 2);
    // store the U data in the second plane
    *(pFrameYUV->data[1] + i) = *(in + y_length + i * 2 + 1);
}
pFrameYUV->format = AV_PIX_FMT_YUV420P;
pFrameYUV->width = yuv_width;
pFrameYUV->height = yuv_height;
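The loop above works because NV21 stores the full Y plane first, followed by interleaved VU byte pairs (V first). A self-contained sketch of the same de-interleaving step, with separate output planes instead of the AVFrame (the function name is illustrative, and buffers are assumed tightly packed with no stride padding):

```c
#include <string.h>

/* Copy the Y plane, then split the interleaved VU pairs of an NV21 buffer
 * into the separate U and V planes that YUV420P expects. */
static void nv21_to_yuv420p(const unsigned char *nv21,
                            unsigned char *y, unsigned char *u, unsigned char *v,
                            int width, int height) {
    int y_length = width * height;
    int uv_length = y_length / 4;
    memcpy(y, nv21, (size_t) y_length);
    for (int i = 0; i < uv_length; i++) {
        v[i] = nv21[y_length + 2 * i];      /* V comes first in NV21 */
        u[i] = nv21[y_length + 2 * i + 1];
    }
}
```

Getting the V/U order wrong doesn't crash, but it swaps the red and blue tint of the output video, so it is worth verifying on a tiny frame.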

Encode the AVFrame data

avcodec_send_frame(pCodecCtx, pFrameYUV);

Get the encoded data

avcodec_receive_packet(pCodecCtx, &enc_pkt);

Release the AVFrame

av_frame_free(&pFrameYUV);

Configure the encoded packet: set the stream index, timestamps, and so on

enc_pkt.stream_index = video_st->index;
AVRational time_base = ofmt_ctx->streams[0]->time_base; // { 1, 1000 }
enc_pkt.pts = count * (video_st->time_base.den) / ((video_st->time_base.num) * fps);
enc_pkt.dts = enc_pkt.pts;
enc_pkt.duration = (video_st->time_base.den) / ((video_st->time_base.num) * fps);
__android_log_print(ANDROID_LOG_WARN, "eric",
                    "index:%d,pts:%lld,dts:%lld,duration:%lld,time_base:%d,%d",
                    count,
                    (long long) enc_pkt.pts,
                    (long long) enc_pkt.dts,
                    (long long) enc_pkt.duration,
                    time_base.num, time_base.den);
enc_pkt.pos = -1;
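The pts arithmetic above is worth spelling out: with the stream time base set to {1, fps}, frame number count maps to pts == count and each frame lasts exactly one tick; with a millisecond time base like {1, 1000}, thirty frames at 30 fps land on pts 1000, i.e. one second. A minimal sketch of the same formula (function names are illustrative; FFmpeg's av_rescale_q does this more robustly):

```c
#include <stdint.h>

/* pts of frame `count` in a stream whose time base is {num, den}. */
static int64_t frame_pts(int64_t count, int num, int den, int fps) {
    return count * den / ((int64_t) num * fps);
}

/* duration of one frame in the same time base. */
static int64_t frame_duration(int num, int den, int fps) {
    return den / ((int64_t) num * fps);
}
```

Keeping pts, dts, and duration consistent with the muxer's time base is what makes the player show the stream at the intended speed.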

Push the packet out

av_interleaved_write_frame(ofmt_ctx, &enc_pkt);

Release the data passed in from the camera

env->ReleaseByteArrayElements(buffer_, in, 0);

Finally release all resources

if (video_st)
    avcodec_close(video_st->codec);
if (ofmt_ctx) {
    avio_close(ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
    ofmt_ctx = NULL;
}

4. Using VLC

While pushing, enter the push-stream address in VLC to watch the pushed data. The effect is as follows.

