Android Audio and Video In Depth, Part 16: FFmpeg streaming the phone camera to implement live broadcast (with source code download)
Source code address:
https://github.com/979451341/RtmpCamera/tree/master
Once your RTMP server is configured, paste its push address into the code here.
1. Configure the RTMP Server
Rather than writing two more posts for Mac and Windows respectively, here are two existing write-ups.
Setting up an RTMP server on Mac:
https://www.jianshu.com/p/6fcec3b9d644
Setting up an RTMP server on Windows (crtmpserver and nginx):
https://www.jianshu.com/p/c71cc39f72ec
2. The IP Address of the Push URL
My setup: I enabled the hotspot on my phone and connected the computer to it, with the RTMP server running on the computer. On the computer itself the push address is localhost, i.e. 127.0.0.1, but that address only ever refers to the local device. From the outside, that is, from the phone, you cannot use localhost; you need the computer's LAN IP address within the hotspot network. To find it, go to phone Settings > More > Mobile network sharing > Portable WLAN hotspot > Manage device list and look up the computer's LAN IP there.
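As a concrete example: assuming the computer shows up in the device list as 192.168.43.100, and nginx is configured with an RTMP application named live on the default port 1935 (all of these values are hypothetical and depend on your own setup), the push address used by the app would look like:

rtmp://192.168.43.100:1935/live/stream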
3. Code
We use the old SurfaceView + Camera combination. That part needs no introduction; what does deserve attention is the Camera configuration.
Camera.Parameters parameters = camera.getParameters();
// print the picture sizes this camera supports
for (Camera.Size size : parameters.getSupportedPictureSizes()) {
    LogUtils.d(size.width + " " + size.height);
}
Pay attention to the widths and heights printed here: the size you later configure on the Camera must be one of them, otherwise you will never receive the Camera's callback data. This is critical.
parameters.setPictureSize(screenWidth, screenHeight); // set the picture size
The width and height hard-coded in the cpp file must match as well, otherwise the program crashes. In fact we could scale the frames proportionally here and use whatever size we need, but I didn't implement that; a sketch of the idea follows the snippet below.
int width = 320;
int height = 240;
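For reference, here is a minimal sketch of the proportional scaling mentioned above, using FFmpeg's libswscale. This is not code from the repository; the function name and the assumption that both frames are already-allocated YUV420P AVFrames are mine:

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

static struct SwsContext *sws_ctx = NULL;

// Scale a YUV420P frame from the camera size to the encoder size.
static int scale_frame(AVFrame *src, AVFrame *dst,
                       int src_w, int src_h, int dst_w, int dst_h) {
    if (!sws_ctx) {
        sws_ctx = sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                                 dst_w, dst_h, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws_ctx) return -1;
    }
    // convert the whole frame, writing the scaled planes into dst->data
    sws_scale(sws_ctx, (const uint8_t *const *) src->data, src->linesize,
              0, src_h, dst->data, dst->linesize);
    return 0;
}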
Camera preview callback
camera.setPreviewCallback(new StreamIt()); // set the preview callback
We hand the frames that need to be pushed to FFmpeg inside this callback. Pushing is gated by the isPlaying flag, so nothing is sent until the start button is clicked. The hand-off runs on a single-threaded executor, which guarantees that each frame is fully processed before the next one starts.
public class StreamIt implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(final byte[] data, Camera camera) {
        if (isPlaying) {
            long endTime = System.currentTimeMillis();
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    encodeTime = System.currentTimeMillis();
                    FFmpegHandle.getInstance().onFrameCallback(data);
                    LogUtils.w("encoded frame " + (encodeCount++)
                            + ", time spent: " + (System.currentTimeMillis() - encodeTime));
                }
            });
            LogUtils.d("captured frame " + (++count)
                    + ", interval since previous frame: " + (endTime - previewTime)
                    + " " + Thread.currentThread().getName());
            previewTime = endTime;
        }
    }
}
Before any frames arrive, the initVideo function is called to initialize FFmpeg and pass in the push URL.
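On the native side this maps to a JNI entry point roughly like the following. The package and class names are assumptions (check the repository for the real ones); only initVideo and the out_path variable come from the post:

#include <jni.h>
#include <cstring>

// Hypothetical JNI binding for initVideo.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_rtmpcamera_FFmpegHandle_initVideo(JNIEnv *env, jobject thiz, jstring url) {
    // copy the push URL passed in from Java into out_path for later use
    const char *path = env->GetStringUTFChars(url, NULL);
    strcpy(out_path, path);
    env->ReleaseStringUTFChars(url, path);
    // ... the FFmpeg initialization shown in the following sections runs here ...
    return 0;
}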
Calculate the sizes of the YUV data to be encoded
yuv_width = width;
yuv_height = height;
y_length = width * height;       // size of the Y plane
uv_length = width * height / 4;  // size of each chroma plane
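As a quick sanity check with the sizes used above: for 320 x 240, y_length = 320 * 240 = 76800 bytes, each chroma plane is 76800 / 4 = 19200 bytes, so one YUV420P frame occupies 76800 + 2 * 19200 = 115200 bytes.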
Register the components and allocate the output context
av_register_all();
// initialize the output context for FLV output
avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", out_path);
// find the H.264 encoder
pCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!pCodec) {
    loge("Can not find encoder!\n");
    return -1;
}
Configure the encoder context
pCodecCtx = avcodec_alloc_context3(pCodec);
// the encoder id, here the H.264 encoder; it could also be assigned
// from the codec id carried by video_st
pCodecCtx->codec_id = pCodec->id;
// pixel format, i.e. the color space used to represent a pixel
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
// data type: video
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
// size of the target video frame, in pixels
pCodecCtx->width = width;
pCodecCtx->height = height;
pCodecCtx->framerate = (AVRational) {fps, 1};
// the time base of the frames, expressed as a fraction
pCodecCtx->time_base = (AVRational) {1, fps};
// target bit rate; the higher it is, the larger the video
pCodecCtx->bit_rate = 400000;
// allowed bit rate error; the larger the value, the smaller the video
//pCodecCtx->bit_rate_tolerance = 4000000;
pCodecCtx->gop_size = 50;
/* Some formats want stream headers to be separate. */
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;
// H264 codec params
//pCodecCtx->me_range = 16;
//pCodecCtx->max_qdiff = 4;
pCodecCtx->qcompress = 0.6;
// maximum and minimum quantization coefficients
pCodecCtx->qmin = 10;
pCodecCtx->qmax = 51;
// optional: number of B frames allowed between two non-B frames;
// 0 means no B frames; the more B frames, the smaller the video
pCodecCtx->max_b_frames = 0;
AVDictionary *param = NULL; // options passed to avcodec_open2 below
if (pCodecCtx->codec_id == AV_CODEC_ID_H264) {
    //av_dict_set(&param, "preset", "slow", 0);
    /* This matters a lot: without "zerolatency" the latency becomes very large.
     * ultrafast, superfast, veryfast, faster, fast, medium,
     * slow, slower, veryslow, placebo are the x264 encoding speed presets. */
    av_dict_set(&param, "preset", "superfast", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);
}
Open the encoder
if (avcodec_open2(pCodecCtx, pCodec, &param) < 0) {
    loge("Failed to open encoder!\n");
    return -1;
}
Create and configure a video stream
video_st = avformat_new_stream(ofmt_ctx, pCodec);
if (video_st == NULL) {
    return -1;
}
video_st->time_base.num = 1;
video_st->time_base.den = fps;
//video_st->codec = pCodecCtx;
video_st->codecpar->codec_tag = 0;
avcodec_parameters_from_context(video_st->codecpar, pCodecCtx);
Check whether the output URL is valid, then write the file header according to the output format.
if (avio_open(&ofmt_ctx->pb, out_path, AVIO_FLAG_READ_WRITE) < 0) {
    loge("Failed to open output file!\n");
    return -1;
}
// write the file header
avformat_write_header(ofmt_ctx, NULL);
The next step is to process the data passed in from the Camera.
Get a pointer to the frame data passed in from Java
jbyte *in = env->GetByteArrayElements(buffer_, NULL);
Compute the size of one cached image from the encoder settings and allocate the buffer.
int picture_size = av_image_get_buffer_size(pCodecCtx->pix_fmt, pCodecCtx->width,
                                            pCodecCtx->height, 1);
uint8_t *buffers = (uint8_t *) av_malloc(picture_size);
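With the settings used here (YUV420P, 320 x 240, alignment 1), av_image_get_buffer_size returns 320 * 240 * 3 / 2 = 115200, matching the plane sizes computed earlier.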
Attach the buffer we just allocated to an AVFrame
pFrameYUV = av_frame_alloc();
// point the AVFrame's data pointers (and line sizes) at the buffer
av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, buffers,
                     pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, 1);
Convert the frame format: the Android camera delivers NV21, which we convert to YUV420P.
memcpy(pFrameYUV->data[0], in, y_length); // Y plane
pFrameYUV->pts = count;
for (int i = 0; i < uv_length; i++) {
    // NV21 stores V and U interleaved after the Y plane:
    // copy the V bytes to the third plane
    *(pFrameYUV->data[2] + i) = *(in + y_length + i * 2);
    // copy the U bytes to the second plane
    *(pFrameYUV->data[1] + i) = *(in + y_length + i * 2 + 1);
}
pFrameYUV->format = AV_PIX_FMT_YUV420P;
pFrameYUV->width = yuv_width;
pFrameYUV->height = yuv_height;
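To see why the loop works, compare the two byte layouts for a hypothetical 4 x 2 frame (8 Y samples, 2 U and 2 V samples after 2x2 chroma subsampling):

NV21:    Y Y Y Y Y Y Y Y | V U V U      (chroma interleaved, V first)
YUV420P: Y Y Y Y Y Y Y Y | U U | V V    (separate U plane, then V plane)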
Encode AVFrame data
avcodec_send_frame(pCodecCtx, pFrameYUV);
Get the encoded data
avcodec_receive_packet(pCodecCtx, &enc_pkt);
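The code above ignores the return values, but both calls can legitimately fail: in particular, avcodec_receive_packet returns AVERROR(EAGAIN) while the encoder is still buffering input, which is normal for the first few frames. A minimal sketch of the checks, reusing the same variable names:

int ret = avcodec_send_frame(pCodecCtx, pFrameYUV);
if (ret < 0) {
    loge("Error sending frame to encoder\n");
    return -1;
}
ret = avcodec_receive_packet(pCodecCtx, &enc_pkt);
if (ret == AVERROR(EAGAIN)) {
    // the encoder needs more input before it can emit a packet; skip this round
    return 0;
}
if (ret < 0) {
    loge("Error receiving packet from encoder\n");
    return -1;
}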
Release AVFrame
av_frame_free(&pFrameYUV);
Fill in the packet fields and compute the timestamps.
enc_pkt.stream_index = video_st->index;
AVRational time_base = ofmt_ctx->streams[0]->time_base; // { 1, 1000 } for FLV
enc_pkt.pts = count * (video_st->time_base.den) / ((video_st->time_base.num) * fps);
enc_pkt.dts = enc_pkt.pts;
enc_pkt.duration = (video_st->time_base.den) / ((video_st->time_base.num) * fps);
__android_log_print(ANDROID_LOG_WARN, "eric",
        "index:%d,pts:%lld,dts:%lld,duration:%lld,time_base:%d,%d",
        count, (long long) enc_pkt.pts, (long long) enc_pkt.dts,
        (long long) enc_pkt.duration, time_base.num, time_base.den);
enc_pkt.pos = -1;
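To make the timestamp math concrete: the FLV muxer rewrites the stream's time_base to { 1, 1000 } when the header is written, so with fps = 25 (an assumed value) each frame gets pts = count * 1000 / 25 = count * 40, i.e. one tick every 40 ms, and duration = 40 accordingly.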
Streaming
av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
Release the data passed in from the Camera
env->ReleaseByteArrayElements(buffer_, in, 0);
Finally release all resources
if (video_st)
    avcodec_close(video_st->codec);
if (ofmt_ctx) {
    avio_close(ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
    ofmt_ctx = NULL;
}
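One detail worth noting: muxers normally expect av_write_trailer to be called before the output is closed, matching the avformat_write_header call from earlier. If the repository does not already do this elsewhere, the teardown would start with:

av_write_trailer(ofmt_ctx); // finalize the stream before closing the output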
4. Use of VLC
While the app is pushing the stream, enter the push address in VLC to play back the live feed.