Secrets of FFmpeg-based live video streaming on the iOS platform


Live streaming is hugely popular right now, and I believe many people are, like me, curious about how the technology is implemented. I happened to be working on an FFmpeg project recently and found that the tool makes live streaming fairly easy, so below I share the main technical points:

First, you have to compile FFmpeg into the static libraries the app needs. There is plenty of material about this online, so I won't repeat it here; my suggestion is to use an open-source build script from GitHub, which is simple, crude, and efficient.

Address: GitHub - kewlbear/FFmpeg-iOS-build-script: Shell scripts to build FFmpeg for iOS and tvOS

After downloading it, run the build-ffmpeg.sh script directly from the terminal; compilation takes about half an hour ... Anyway, I think the speed is acceptable (PS: compiling the Android sources used to be painfully slow ...). If it errors out, rerun it until it reports success.

So how does live streaming work? The approximate flow is as follows:

1. Broadcaster device: get the video stream from the camera and push it to the server over RTMP

2. Server: receive the RTMP video stream pushed by the broadcaster and provide an RTMP source for viewers

3. Viewer: play the video from the RTMP source with a player.

PS: RTMP is short for Real Time Messaging Protocol. The protocol is based on TCP and is actually a protocol family, including the basic RTMP protocol and variants such as RTMPT, RTMPS, and RTMPE.

Preparation:

Create a new project, add all the compiled FFmpeg static libraries and the other libraries they depend on, and configure the header search paths. There are many tutorials for this on the web, so I won't repeat them.

I compiled the latest version with the script above. For later use, you need to add these C files to the project:

cmdutils_common_opts.h, cmdutils.h, and cmdutils.c; config.h (take the one for the corresponding platform from the scratch directory); ffmpeg_filter.c, ffmpeg_opt.c, ffmpeg_videotoolbox.c, ffmpeg.h, and ffmpeg.c

Except for config.h, all of these files are in the ffmpeg-3.0 source directory.

Problems to note:

1. Compilation will fail because ffmpeg.c contains a main function. Rename that function to ffmpeg_main and add a declaration of ffmpeg_main to ffmpeg.h, as sketched below.
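
Concretely, the change looks something like this (a sketch against the FFmpeg 3.0 sources; only the name changes, the body of the function stays as it was):

/* ffmpeg.h -- add alongside the existing declarations: */
int ffmpeg_main(int argc, char **argv);

/* ffmpeg.c -- the definition formerly named main: */
int ffmpeg_main(int argc, char **argv)
{
    /* ... original body of main() from ffmpeg.c, unchanged ... */
    return 0;
}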

2. When an FFmpeg task finishes, it ends with a call to exit(). An iOS app is a single process running multiple threads, so the exit_program function in cmdutils.c must not be allowed to kill the whole process. Replace

exit(ret);

with a call that ends only the current thread (you will need to #include <pthread.h>):

pthread_exit(NULL);
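
For context, the patched function would look something like this (a sketch assuming the FFmpeg 3.0 shape of cmdutils.c, where program_exit is an optional cleanup callback):

#include <pthread.h>

void exit_program(int ret)
{
    if (program_exit)
        program_exit(ret);

    /* exit(ret) would terminate the whole iOS app process;
       end only the FFmpeg worker thread instead */
    pthread_exit(NULL);
}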

Broadcaster side: use the FFmpeg library to capture the camera of the broadcaster's device and generate a raw stream. Note!!! This is a raw stream; what does that mean? It contains neither PTS (Presentation Time Stamp, which tells the player when a decoded video frame should be displayed) nor DTS (Decode Time Stamp, which tells the decoder when a bitstream in memory should start being decoded), so a player cannot play it directly. The client only needs to upload this data stream to the server over RTMP.
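
To make the timestamp idea concrete, here is a minimal sketch of what stamping packets looks like when a fixed-frame-rate raw H.264 stream is later muxed; stamp_packet, out_stream, and frame_index are illustrative names, not from this project:

#include "libavformat/avformat.h"

// Stamp one packet of a fixed-fps raw H.264 stream before muxing.
// pkt, out_stream, and frame_index come from the surrounding mux loop.
static void stamp_packet(AVPacket *pkt, AVStream *out_stream,
                         int64_t frame_index, int fps)
{
    AVRational src_tb = (AVRational){1, fps};   // one tick per frame
    pkt->pts      = av_rescale_q(frame_index, src_tb, out_stream->time_base);
    pkt->dts      = pkt->pts;                   // no B-frames here: decode order == display order
    pkt->duration = av_rescale_q(1, src_tb, out_stream->time_base);
}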

How to get the camera input:

Use the libavdevice library to open an input stream for the camera. In FFmpeg, getting the camera's input stream is similar to opening a file input stream. Sample code:

// Open a file:
AVFormatContext *pFormatCtx = avformat_alloc_context();
avformat_open_input(&pFormatCtx, "test.h264", NULL, NULL);

// Get camera input:
AVFormatContext *pFormatCtx = avformat_alloc_context();
// The extra step is looking up the input device
AVInputFormat *ifmt = av_find_input_format("vfwcap");
// Use the first input device of the vfwcap type as the input stream
avformat_open_input(&pFormatCtx, "0", ifmt, NULL);
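
Note that vfwcap is the Video for Windows capture device, so the snippet above is really a desktop example. On iOS the corresponding libavdevice input is avfoundation; a minimal sketch (the device name "0" for the first camera is an assumption, and error handling is reduced to a single check):

#include "libavdevice/avdevice.h"
#include "libavformat/avformat.h"

AVFormatContext *pFormatCtx = avformat_alloc_context();

avdevice_register_all();   // registers avfoundation and the other device demuxers
AVInputFormat *ifmt = av_find_input_format("avfoundation");

// "0" selects the first video device; "0:0" would capture audio as well
if (avformat_open_input(&pFormatCtx, "0", ifmt, NULL) < 0) {
    // opening the camera failed: check permissions and the device index
}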

How to upload a video stream using RTMP:

The command for uploading a file over RTMP is:

ffmpeg -re -i temp.h264 -vcodec copy -f flv rtmp://xxx/xxx/livestream

You can run this command directly through the ffmpeg_main function in ffmpeg.c. Sample code:

Nsstring*command =@"ffmpeg-re-i temp.h264-vcodec copy-f flv rtmp://xxx/xxx/livestream";//splitting instructions into instruction arrays based on spacesNsarray*argv_array=[command_strcomponentsseparatedbystring: (@" ")];//converts an OC object to the corresponding C objectINTARGC=(int) Argv_array.count;Char* * argv= (Char* *) malloc (sizeof(Char*)*argc); for(inti=0; I{argv[i]=(Char*) malloc (sizeof(Char)*1024x768); strcpy (argv[i],[[argv_arrayobjectatindex:i]utf8string]);}//number of incoming instructions and array of instructionsFfmpeg_main (ARGC,ARGV);//the thread has been killed and the code below will not executeFFmpeg-re-itemp.h264-vcodec copy-f flvrtmp://Xxx/xxx/livestream

Breaking the command line down:

-re means send at the source's native frame rate; otherwise FFmpeg pushes at its maximum rate and playback speeds up and slows down erratically.

-i temp.h264 is the raw H.264 stream to upload.

-vcodec copy means copy the source stream as-is, without re-encoding.

-f flv rtmp://xxx/xxx/livestream specifies FLV as the output format and sends it to that URL.

Here we see the input is a raw stream read from a file, but what we get from the camera is a stream sitting in memory. How do we handle that?

Of course there's a way.

1. Change the temp.h264 argument in the command string to NULL.

2. Initialize a custom AVIOContext, specifying your own read callback.

Nsstring*command =@"ffmpeg-re-i temp.h264-vcodec copy-f flv rtmp://xxx/xxx/livestream";//splitting instructions into instruction arrays based on spacesNsarray*argv_array=[command_strcomponentsseparatedbystring: (@" ")];//converts an OC object to the corresponding C objectINTARGC=(int) Argv_array.count;Char* * argv= (Char* *) malloc (sizeof(Char*)*argc); for(inti=0; I{argv[i]=(Char*) malloc (sizeof(Char)*1024x768); strcpy (argv[i],[[argv_arrayobjectatindex:i]utf8string]);}//number of incoming instructions and array of instructionsFfmpeg_main (ARGC,ARGV);//the thread has been killed and the code below will not executeFFmpeg-re-itemp.h264-vcodec copy-f flvrtmp://Xxx/xxx/livestream

3. Write the callback function yourself and fetch the data from the input source. The sample code is as follows:

// The read callback
// (pkt, ifmt_ctx, videoindex, and start_time are globals set up by the surrounding code)
int read_buffer(void *opaque, uint8_t *buf, int buf_size)
{
    // Sleep according to the timestamps, or everything is sent at once
    if (pkt.stream_index == videoindex) {
        AVRational time_base   = ifmt_ctx->streams[videoindex]->time_base;
        AVRational time_base_q = {1, AV_TIME_BASE};
        int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
        int64_t now_time = av_gettime() - start_time;
        if (pts_time > now_time)
            av_usleep(pts_time - now_time);
    }
    // Replace fp_open with the camera input stream
    if (!feof(fp_open)) {
        int true_size = fread(buf, 1, buf_size, fp_open);
        return true_size;
    } else {
        return -1;
    }
}

Server side: forgive me, as a mobile developer I don't understand the server side; presumably it receives the live stream uploaded by the broadcaster and rebroadcasts it to viewers. So I'll just skip it.

Player side: the player is just a player, and there are many solutions. Here I take the simplest one: since in many live-streaming apps the player and the broadcaster are the same app, I directly use the FFmpeg already in the project to play the stream, which is simple, crude, and convenient.

On GitHub there is a third-party player based on FFmpeg called kxmovie; let's just use that.

Address: GitHub - kolyvan/kxmovie: movie player for iOS using FFmpeg

When you add the kxmovie player to the project containing the upload code from earlier, you will hit an error ...

It turns out that the avpicture_deinterlace function kxmovie uses no longer exists. My first idea was to stub the calls out so the program would build; the result ... of course, it could not play video. A Baidu search revealed that this function does deinterlacing, and although my video knowledge is thin, even I know it cannot simply be dropped.

There was nothing for it but to patch the source. The function can be found in the official FFmpeg source repository.

Address: ffmpeg.org/doxygen/1.0/imgconvert_8c-source.html#l00940

It turns out that in earlier versions this function was declared in avcodec.h as part of the AVPicture API, and avpicture.c merely forwarded to the implementation in libavcodec/imgconvert.c; that is, the function really belongs to imgconvert.c and avpicture.c only calls it indirectly. But the imgconvert.c in ffmpeg 3.0 no longer contains this function, even though the official repository above does, so it has evidently been removed. Removed or not, I just want to use it, so the simplest fix is to patch avpicture.c directly.

First, add these macro definitions:

#define deinterlace_line_inplace deinterlace_line_inplace_c
#define deinterlace_line         deinterlace_line_c
#define ff_cropTbl ((uint8_t *) NULL)

Then copy these functions from the page above into the avpicture.c file:

static void deinterlace_line_c
static void deinterlace_line_inplace_c
static void deinterlace_bottom_field
static void deinterlace_bottom_field_inplace
int avpicture_deinterlace

Then, in the avcodec.h header, add the declaration below the avpicture_alloc function:

attribute_deprecated
int avpicture_deinterlace(AVPicture *dst, const AVPicture *src,
                          enum AVPixelFormat pix_fmt, int width, int height);

Save, then run the build-ffmpeg.sh script from the terminal once more to recompile, and that's it ... Import the result into the project again and kxmovie will build without errors. The code to play a video is as follows:

KxMovieViewController *vc = [KxMovieViewController movieViewControllerWithContentPath:path parameters:nil];
[self presentViewController:vc animated:YES completion:nil];

Note: path can be a URL starting with http, rtmp, or rtsp.

