FFmpeg Comprehensive Tutorial (II): Adding Filters for Live Streaming

In the previous article, we explained how to use FFmpeg to implement a camera live stream. This article builds on that example to let the user select from a variety of video filters during the live stream. It covers the following:
1. A basic introduction to avfilter
2. How to apply various video filters with the FFmpeg command-line tools
3. How to use libavfilter programmatically to add filters to a camera live stream
Together these give a fairly comprehensive picture.
Basic introduction to avfilter

avfilter is very powerful: it can perform all kinds of processing on multimedia data, including timeline editing, adding video and audio effect filters, and signal processing, and it can also merge or overlay multiple media streams. Its richness is breathtaking. This article mainly uses video filters as the example. With avfilter you can add one or more filters to a single video, or add different filters to several videos and then merge them into one at the end. To implement these features, avfilter defines the following concepts:
Filter: represents a single filter
Filterpad: represents an input or output port of a filter. Each filter can have multiple inputs and multiple outputs; a filter with only output pads is called a source, and a filter with only input pads is called a sink
Filterlink: if the output pad of one filter is connected to the input pad of another filter, a link is established between the two filters
Filterchain: represents a series of connected filters; apart from the source and sink, every filter's input and output pads must be connected to a corresponding output and input pad
Filtergraph: a collection of filterchains
These basics are similar to DirectShow, and also to the node concept in video post-production software. Concretely, take the following command as an example:
[in]split[main][tmp]; [tmp]crop=iw:ih/2,vflip[flip]; [main][flip]overlay=0:h/2[out]

In this command, the input stream [in] is first divided by split into two streams, [main] and [tmp]; [tmp] then flows through the crop and vflip filters to become [flip]; finally [flip] is overlaid on top of [main] to form the output stream [out], producing a mirror effect. The following figure clearly shows this procedure:

Each node in the diagram is a filter, and each bracketed name represents a filterpad. You can see that split has an output pad named tmp and crop has an input pad also named tmp, which establishes a link between the two; input and output are of course the source and sink. There are also three filterchains: the first consists of input and split, the second of crop and vflip, and the third of overlay and output. The whole picture is one filtergraph containing these three filterchains.
The figure above was drawn by hand, but you can also call the avfilter_graph_dump function in code to draw the filtergraph automatically, as follows:

As you can see, a scale filter appears in the dump; it is added automatically by FFmpeg for format conversion.
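For reference, here is a minimal sketch of producing such a dump, assuming filter_graph is an already-configured AVFilterGraph like the one constructed later in this article:

    /* Render the filtergraph as human-readable text; the caller frees the string. */
    char *graph_str = avfilter_graph_dump(filter_graph, NULL);
    if (graph_str) {
        av_log(NULL, AV_LOG_INFO, "%s\n", graph_str);
        av_free(graph_str);
    }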
Using avfilter in the ffmpeg command-line tool

Using avfilter on the command line requires a special syntax. In short: filters in a chain are separated by commas; a filter's own arguments are separated by colons and attached to the filter name with an equals sign; several filters form a filterchain, and filterchains are separated by semicolons. On the command line, avfilter is invoked with -vf, -af, or -filter_complex; the first two correspond to single-input video filters and audio filters respectively, while -filter_complex handles multiple inputs. Besides the ffmpeg command-line tool, avfilter can also be used in ffplay. For the finer points of the syntax (single and double quotes, escape characters, and so on), refer to the filter documentation.
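For instance, the following toy command (the file names are placeholders) separates two filters with a comma, separates scale's arguments with a colon, and attaches them with an equals sign:

ffmpeg -i in.mp4 -vf "scale=640:360,hflip" out.mp4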
Here are a few more complete examples.
1. Overlaying a watermark
ffmpeg -i test.flv -vf "movie=test.jpg[wm];[in][wm]overlay=5:5[out]" out.flv
This overlays test.jpg as a watermark at coordinates (5,5) of test.flv; the effect is as follows:

2. Mirroring
ffmpeg -i test.flv -vf "crop=iw/2:ih:0:0,split[left][tmp];[tmp]hflip[right];[left]pad=iw*2[a];[a][right]overlay=w" out.flv
The [in] and [out] labels can be omitted. pad is used to enlarge the canvas; the effect is as follows:

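As an aside, pad's arguments are width:height:x:y, i.e. the output canvas size and the offset at which the input is placed. For example, this sketch of a command doubles the canvas width and leaves the new right half black:

ffmpeg -i test.flv -vf "pad=iw*2:ih:0:0" out.flv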
3. Adjusting curves
ffmpeg -i test.flv -vf curves=vintage out.flv
This is similar to the curves adjustment in Photoshop. Here vintage is one of ffmpeg's presets, giving a retro look. You can also load a Photoshop preset file directly and adjust on top of it, as follows:
ffmpeg -i test.flv -vf curves=psfile='test.acv':green='0.45/0.53' out.flv
Here the ACV preset file boosts contrast, after which the green channel is adjusted. The final effects of the two commands above are as follows:


4. Stitching multiple inputs
ffmpeg -i test1.mp4 -i test2.mp4 -i test3.mp4 -i test4.mp4 -filter_complex "[0:v]pad=iw*2:ih*2[a];[a][1:v]overlay=w[b];[b][2:v]overlay=0:h[c];[c][3:v]overlay=w:h" out.mp4
As mentioned earlier, when there is more than one input you need -filter_complex. The effect is as follows:

With these examples, you should have a basic grasp of the syntax for using avfilter on the command line.
Adding filters to a live stream with libavfilter

To use libavfilter, first register the relevant components:
avfilter_register_all();
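The snippets below follow the older 2.x/3.x-era FFmpeg API (in which avfilter_register_all() still exists), and assume headers along these lines:

    #include <libavfilter/avfilter.h>    // graph API (in older releases: avfiltergraph.h)
    #include <libavfilter/buffersrc.h>   // av_buffersrc_add_frame
    #include <libavfilter/buffersink.h>  // av_buffersink_get_frame_flags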
Next, construct a complete, usable filtergraph. This requires the decoding parameters of the input stream, obtained as shown in the previous article:
    // Despite the imposing name, an AVFilterContext is just an instance of a filter
    AVFilterContext *buffersink_ctx;
    AVFilterContext *buffersrc_ctx;
    AVFilterGraph *filter_graph;
    // Any filter registered with libavfilter can be looked up by name. The returned
    // definition holds the filter's name, description, input/output pads and callbacks.
    AVFilter *buffersrc  = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("buffersink");
    // AVFilterInOut describes the open ends of the graph: the buffer source and buffersink
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();

    filter_graph = avfilter_graph_alloc();

    /* buffer video source: the decoded frames from the decoder will be inserted here. */
    snprintf(args, sizeof(args),  // args is a char buffer declared earlier
        "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
        ifmt_ctx->streams[0]->codec->width, ifmt_ctx->streams[0]->codec->height,
        ifmt_ctx->streams[0]->codec->pix_fmt,
        ifmt_ctx->streams[0]->time_base.num, ifmt_ctx->streams[0]->time_base.den,
        ifmt_ctx->streams[0]->codec->sample_aspect_ratio.num,
        ifmt_ctx->streams[0]->codec->sample_aspect_ratio.den);
    // Given a filter definition (here: buffer) and its initialization arguments,
    // create an instance of the filter and place it in filter_graph
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0) {
        printf("Cannot create buffer source\n");
        return ret;
    }

    /* buffer video sink: to terminate the filter chain. */
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
    if (ret < 0) {
        printf("Cannot create buffer sink\n");
        return ret;
    }

    /* Endpoints for the filter graph. */
    outputs->name       = av_strdup("in");   // corresponds to the output of the buffer filter
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name        = av_strdup("out");  // corresponds to the input of the buffersink filter
    inputs->filter_ctx  = buffersink_ctx;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    // filter_descr is a filter command such as "overlay=iw:ih"; parsing it
    // automatically completes the links between the filters in the graph
    if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_descr, &inputs, &outputs, NULL)) < 0)
        return ret;

    // Check the constructed filtergraph for completeness and usability
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
        return ret;

    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
That is one way of constructing a filtergraph: avfilter_graph_parse_ptr builds it automatically from a filter command. Of course, we can also link each filter by hand. Suppose we have already obtained buffersrc_ctx, buffersink_ctx, and an intermediate filter_ctx; then we connect the inputs and outputs:
    if (err >= 0) err = avfilter_link(buffersrc_ctx, 0, filter_ctx, 0);
    if (err >= 0) err = avfilter_link(filter_ctx, 0, buffersink_ctx, 0);
    if (err < 0) {
        av_log(NULL, AV_LOG_ERROR, "Error connecting filters\n");
        return err;
    }
    err = avfilter_graph_config(filter_graph, NULL);
    if (err < 0) {
        av_log(NULL, AV_LOG_ERROR, "Error configuring the filter graph\n");
        return err;
    }
    return 0;
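For completeness, here is a hedged sketch of how the intermediate filter_ctx used above might be created; the hflip filter and the instance name "flip" are just illustrative choices:

    AVFilterContext *filter_ctx = NULL;
    AVFilter *hflip = avfilter_get_by_name("hflip");  // any registered filter name works here
    err = avfilter_graph_create_filter(&filter_ctx, hflip, "flip",
                                       NULL /* hflip takes no arguments */,
                                       NULL, filter_graph);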
However, when the graph contains more filters, using avfilter_graph_parse_ptr directly is more convenient.
Once the filtergraph is constructed, using it is very simple: push an AVFrame into the filtergraph, then pull the processed AVFrame back out. Taking the codec core module from the previous article as an example, you can see that the decoded pframe is pushed into filter_graph and the processed data is written into picref, which is also an AVFrame. Note that picref is still converted to a YUV420P frame before encoding: on the one hand, the camera data used here is in RGB format; on the other hand, filters such as curves operate in RGB space and produce RGB pixel frames, so a conversion is needed. The remaining parts are basically unchanged.
    // Start decoding and encoding
    int64_t start_time = av_gettime();
    while (av_read_frame(ifmt_ctx, dec_pkt) >= 0) {
        if (exit_thread)
            break;
        av_log(NULL, AV_LOG_DEBUG, "Going to reencode the frame\n");
        pframe = av_frame_alloc();
        if (!pframe) {
            ret = AVERROR(ENOMEM);
            return -1;
        }
        // av_packet_rescale_ts(dec_pkt, ifmt_ctx->streams[dec_pkt->stream_index]->time_base,
        //                      ifmt_ctx->streams[dec_pkt->stream_index]->codec->time_base);
        ret = avcodec_decode_video2(ifmt_ctx->streams[dec_pkt->stream_index]->codec,
                                    pframe, &dec_got_frame, dec_pkt);
        if (ret < 0) {
            av_frame_free(&pframe);
            av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
            break;
        }
        if (dec_got_frame) {
#if USEFILTER
            pframe->pts = av_frame_get_best_effort_timestamp(pframe);
            if (filter_change)
                apply_filters(ifmt_ctx);
            filter_change = 0;
            /* Push the decoded frame into the filtergraph */
            if (av_buffersrc_add_frame(buffersrc_ctx, pframe) < 0) {
                printf("Error while feeding the filtergraph\n");
                break;
            }
            picref = av_frame_alloc();
            /* Pull filtered pictures from the filtergraph */
            while (1) {
                ret = av_buffersink_get_frame_flags(buffersink_ctx, picref, 0);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                    break;
                if (ret < 0)
                    return ret;
                if (picref) {
                    img_convert_ctx = sws_getContext(picref->width, picref->height,
                        (AVPixelFormat)picref->format, pCodecCtx->width, pCodecCtx->height,
                        AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
                    sws_scale(img_convert_ctx, (const uint8_t* const*)picref->data,
                        picref->linesize, 0, pCodecCtx->height,
                        pFrameYUV->data, pFrameYUV->linesize);
                    sws_freeContext(img_convert_ctx);
                    pFrameYUV->width  = picref->width;
                    pFrameYUV->height = picref->height;
                    pFrameYUV->format = AV_PIX_FMT_YUV420P;
#else
            sws_scale(img_convert_ctx, (const uint8_t* const*)pframe->data,
                pframe->linesize, 0, pCodecCtx->height,
                pFrameYUV->data, pFrameYUV->linesize);
            pFrameYUV->width  = pframe->width;
            pFrameYUV->height = pframe->height;
            pFrameYUV->format = AV_PIX_FMT_YUV420P;
#endif
                    enc_pkt.data = NULL;
                    enc_pkt.size = 0;
                    av_init_packet(&enc_pkt);
                    ret = avcodec_encode_video2(pCodecCtx, &enc_pkt, pFrameYUV, &enc_got_frame);
                    av_frame_free(&pframe);
                    if (enc_got_frame == 1) {
                        // printf("Succeed to encode frame: %5d\tsize:%5d\n", framecnt, enc_pkt.size);
                        framecnt++;
                        enc_pkt.stream_index = video_st->index;
                        // Write PTS
                        AVRational time_base    = ofmt_ctx->streams[videoindex]->time_base;    // e.g. { 1, 1000 }
                        AVRational r_framerate1 = ifmt_ctx->streams[videoindex]->r_frame_rate; // e.g. { 50, 2 }
                        AVRational time_base_q  = { 1, AV_TIME_BASE };
                        // Duration between 2 frames (us), in the internal time base
                        int64_t calc_duration = (double)(AV_TIME_BASE) * (1 / av_q2d(r_framerate1));
                        enc_pkt.pts = av_rescale_q(framecnt * calc_duration, time_base_q, time_base);
                        enc_pkt.dts = enc_pkt.pts;
                        enc_pkt.duration = av_rescale_q(calc_duration, time_base_q, time_base);
                        enc_pkt.pos = -1;
                        // Delay: wait until the packet's timestamp catches up with the wall clock
                        int64_t pts_time = av_rescale_q(enc_pkt.dts, time_base, time_base_q);
                        int64_t now_time = av_gettime() - start_time;
                        if (pts_time > now_time)
                            av_usleep(pts_time - now_time);
                        ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
                        av_free_packet(&enc_pkt);
                    }
#if USEFILTER
                    av_frame_unref(picref);
                }
            }
#endif
        } else {
            av_frame_free(&pframe);
        }
        av_free_packet(dec_pkt);
    }

We can also let different key presses apply different filters, as follows. First a number of filter commands are written out; then a callback function running in a separate thread monitors the user's key presses and reinitializes filter_graph with the corresponding filter command. Note that "null" is itself a filter command, used to output the input video unchanged.
#if USEFILTER
int filter_change = 1;
const char *filter_descr = "null";
const char *filter_mirror = "crop=iw/2:ih:0:0,split[left][tmp];[tmp]hflip[right];"
                            "[left]pad=iw*2[a];[a][right]overlay=w";
const char *filter_watermark = "movie=test.jpg[wm];[in][wm]overlay=5:5[out]";
const char *filter_negate = "negate[out]";
const char *filter_edge = "edgedetect[out]";
const char *filter_split4 = "scale=iw/2:ih/2[in_tmp];[in_tmp]split=4[in_1][in_2][in_3][in_4];"
                            "[in_1]pad=iw*2:ih*2[a];[a][in_2]overlay=w[b];"
                            "[b][in_3]overlay=0:h[d];[d][in_4]overlay=w:h[out]";
const char *filter_vintage = "curves=vintage";
typedef enum {
    FILTER_NULL = 48,   // ASCII '0', so the keys '0'..'6' select the filters
    FILTER_MIRROR,
    FILTER_WATERMARK,
    FILTER_NEGATE,
    FILTER_EDGE,
    FILTER_SPLIT4,
    FILTER_VINTAGE
} FILTERS;
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
AVFilter *buffersrc;
AVFilter *buffersink;
AVFrame *picref;
#endif
DWORD WINAPI MyThreadFunction(LPVOID lpParam)
{
#if USEFILTER
    int ch = getchar();
    while (ch != '\n') {
        switch (ch) {
        case FILTER_NULL:
            printf("\nNow using null filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = "null";
            getchar();
            ch = getchar();
            break;
        case FILTER_MIRROR:
            printf("\nNow using mirror filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = filter_mirror;
            getchar();
            ch = getchar();
            break;
        case FILTER_WATERMARK:
            printf("\nNow using watermark filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = filter_watermark;
            getchar();
            ch = getchar();
            break;
        case FILTER_NEGATE:
            printf("\nNow using negate filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = filter_negate;
            getchar();
            ch = getchar();
            break;
        case FILTER_EDGE:
            printf("\nNow using edge filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = filter_edge;
            getchar();
            ch = getchar();
            break;
        case FILTER_SPLIT4:
            printf("\nNow using split4 filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = filter_split4;
            getchar();
            ch = getchar();
            break;
        case FILTER_VINTAGE:
            printf("\nNow using vintage filter\nPress other numbers for other filters:");
            filter_change = 1;
            filter_descr = filter_vintage;
            getchar();
            ch = getchar();
            break;
        default:
            getchar();
            ch = getchar();
            break;
        }
    }
#else
    while (getchar() != '\n')
        ;
#endif
    exit_thread = 1;
    return 0;
}
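The decode loop earlier calls apply_filters(ifmt_ctx) whenever filter_change is set. The original implementation is not reproduced here, but a plausible sketch simply tears down the old graph and rebuilds it from the current filter_descr; init_filters is a hypothetical helper wrapping the graph-construction code shown earlier:

    static int apply_filters(AVFormatContext *ifmt_ctx)
    {
        if (filter_graph)
            avfilter_graph_free(&filter_graph);  // frees the graph and all filter instances in it
        // Hypothetical helper: re-runs the buffer/buffersink setup and
        // avfilter_graph_parse_ptr with the current filter_descr
        return init_filters(ifmt_ctx, filter_descr);
    }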

Besides calling avfilter at the API level, you can also write a filter of your own to achieve the effect you want. For example, the negate filter used above simply subtracts each original pixel value from 255. A later article will describe in detail how to write your own filter.
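Purely as an illustration of that arithmetic (this is not FFmpeg API code), inverting one 8-bit sample looks like this:

    /* Invert one 8-bit pixel component, as the negate filter does per sample. */
    static inline uint8_t negate_sample(uint8_t v)
    {
        return 255 - v;
    }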
In addition, using filters with multiple inputs remains a tricky area; I look forward to exchanging ideas with everyone about it.
The source code of this project is available for download.
