FFmpeg getting started -- Document 8: Software Scaling

Tutorial 8: Software Scaling

The software scaling library: libswscale

FFmpeg recently added a new interface, libswscale, to handle image scaling.

Previously we used img_convert to convert from the decoder's pixel format to YUV420P; now we use the new interface instead. The new interface is more modular and faster, and I believe it contains MMX-optimized code. In other words, it is the preferred way to do scaling.

The basic function we use for scaling is sws_scale. But first we have to set up what is called an SwsContext. This lets us prepare the conversion we want once and then pass it to sws_scale later, much like a prepared statement in SQL or a compiled regular expression in Python. To prepare this context we use sws_getContext, which wants the source width and height, the destination width and height, the source pixel format and the desired pixel format, plus some other options and flags. Then we call sws_scale the same way we called img_convert, except that we also pass it our SwsContext:

#include <ffmpeg/swscale.h> // include the header!

int queue_picture(VideoState *is, AVFrame *pFrame, double pts) {

  static struct SwsContext *img_convert_ctx;
  ...

  if(vp->bmp) {

    SDL_LockYUVOverlay(vp->bmp);

    dst_pix_fmt = PIX_FMT_YUV420P;

    pict.data[0] = vp->bmp->pixels[0];
    pict.data[1] = vp->bmp->pixels[2];
    pict.data[2] = vp->bmp->pixels[1];

    pict.linesize[0] = vp->bmp->pitches[0];
    pict.linesize[1] = vp->bmp->pitches[2];
    pict.linesize[2] = vp->bmp->pitches[1];

    // Convert the image into YUV format that SDL uses
    if(img_convert_ctx == NULL) {
      int w = is->video_st->codec->width;
      int h = is->video_st->codec->height;
      img_convert_ctx = sws_getContext(w, h,
                                       is->video_st->codec->pix_fmt,
                                       w, h, dst_pix_fmt, SWS_BICUBIC,
                                       NULL, NULL, NULL);
      if(img_convert_ctx == NULL) {
        fprintf(stderr, "Cannot initialize the conversion context!\n");
        exit(1);
      }
    }
    sws_scale(img_convert_ctx, pFrame->data,
              pFrame->linesize, 0,
              is->video_st->codec->height,
              pict.data, pict.linesize);

We simply plug the new scaler in where the old conversion was. Hopefully this gives you a good idea of what libswscale can do.
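One detail the snippet above glosses over: because img_convert_ctx is a static pointer, the context is created once and reused for every frame. When the player shuts down (or if the frame size or pixel format ever changes) it can be released with sws_freeContext. A minimal sketch, assuming the same pointer is still reachable at cleanup time:

if(img_convert_ctx != NULL) {
  sws_freeContext(img_convert_ctx);  // free the cached scaler context
  img_convert_ctx = NULL;            // a later call would then re-create it
}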

That's it! We're done! Compile our player:

gcc -o tutorial08 tutorial08.c -lavutil -lavformat -lavcodec -lz -lm `sdl-config --cflags --libs`

Enjoy your movie player, written in less than 1000 lines of C!

Of course, there are still many things to do.

What's next?

We already have a working player, but it is certainly not as good as it could be. We glossed over a lot, and there are still many features we could add:

·Error handling. The error handling in our code is minimal, and it could be handled much better.

·Pausing. We can't pause the movie, which is a genuinely useful feature. We can do this with an internal pause variable in the big struct that we set when the user pauses; the audio, video, and decode threads then check it and stop outputting anything while it is set. We also use av_read_play for network support. It is easy to explain but not obvious to work out on your own, so consider it homework if you want to try it. For hints, see ffplay.c; a rough sketch also appears after this list.

·Support for video hardware. For an example, see the Frame Grabbing section in Martin's old tutorial: http://www.inb.uni-luebeck.de/~boehme/libavcodec_update.html

·Seeking by bytes. If we calculate the seek position in bytes rather than seconds, seeking is more accurate for video files with discontinuous timestamps, such as VOB files. A sketch of the relevant call appears after this list.

·Frame dropping. If the video falls too far behind, we should drop the next frame instead of just scheduling a shorter refresh.

·Network support. The player currently cannot play network streams.

·Support for raw video such as YUV files. If our player is to support raw video, we need to add options for setting the size and time base, since we cannot guess them from the stream.

·Full Screen.

·Various options. For example, support for different picture formats; see the command-line switches in ffplay.c.

·Other things. For example, the audio buffer in the struct should be declared aligned.
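To make the pausing suggestion concrete, here is a minimal sketch. It is not part of the tutorial code: it assumes a new int paused field added to the big VideoState struct, and that the format context is stored in is->pFormatCtx as in the earlier tutorials. av_read_pause and av_read_play are the libavformat calls that pause and resume network streams.

// Hypothetical pause toggle, called from the SDL event loop.
// 'paused' is an int field we assume was added to VideoState.
void stream_toggle_pause(VideoState *is) {
  is->paused = !is->paused;
  if(is->paused) {
    av_read_pause(is->pFormatCtx);  // ask network streams to stop sending
  } else {
    av_read_play(is->pFormatCtx);   // resume network streams
  }
}

The audio callback, the video refresh timer, and the decode threads would then check is->paused and simply reschedule themselves without producing any output while it is set.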
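For seeking by bytes, the relevant libavformat call is av_seek_frame with the AVSEEK_FLAG_BYTE flag. The snippet below is only an illustration; how you compute target_pos (for example, from the file size and a desired percentage) is left to the player:

// Illustrative only: jump to an absolute byte offset in the file.
int64_t target_pos = ...; /* byte offset you computed */
if(av_seek_frame(is->pFormatCtx, -1, target_pos, AVSEEK_FLAG_BYTE) < 0) {
  fprintf(stderr, "error while seeking by bytes\n");
}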

If you want to know more about FFmpeg, we have only covered a portion of it here. The next step is to learn how to encode multimedia; a good starting point is the output_example.c file in the FFmpeg distribution. I could write another guide for that, but I don't have enough time to do it.

Well, I hope this guide has been helpful and interesting. If you have any suggestions, questions, complaints, or compliments, please send an email to dranger@gmail.com.
