Chromium Source: Video Playback Process Analysis (WebMediaPlayerImpl)


When reprinting, please credit the source: http://www.cnblogs.com/fangkm/p/3797278.html

Continuing from the previous article: to play media you must specify a source. In HTML5 the video source is given as a URL, which can be an HTTP link or a local file (a local file cannot be specified directly; it has to go through a Blob of binary data). Playing a network file only adds a download step on top of local playback, so the analysis below follows the network playback path; once that is clear, the local path follows easily. First, the loading of network video resources. The related structure diagram is as follows:

[structure diagram omitted in this reprint]

The WebMediaPlayerImpl class has a BufferedDataSource member that manages the loading of URL-addressed network resources.

BufferedDataSource delegates the actual resource-loading work to BufferedResourceLoader.

BufferedResourceLoader holds an AssociatedURLLoader object, a class derived from the WebURLLoader interface. Unlike WebURLLoaderImpl in the webkit_glue layer, AssociatedURLLoader does not implement the WebURLLoader interface with real network logic of its own; instead it goes through the DocumentThreadableLoader class, which ultimately relies on WebURLLoaderImpl to send the URL request to the main (browser) process (for the WebURLLoaderImpl request flow, see http://www.cnblogs.com/fangkm/p/3784660.html).

An AssociatedURLLoader object is tied to its frame; the request is canceled when the WebFrame's stopLoading method is called.
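To make the ownership and delegation concrete, here is a minimal, self-contained C++ sketch of the chain. The classes below are simplified stand-ins named after the real Chromium classes; the method names and signatures are invented for illustration:

```cpp
#include <memory>
#include <string>

// Stand-in for the WebURLLoader interface.
struct WebURLLoader {
  virtual ~WebURLLoader() = default;
  virtual void Load(const std::string& url) = 0;
};

// Stand-in for AssociatedURLLoader: it derives from WebURLLoader but
// does not hit the network itself; conceptually it forwards through
// DocumentThreadableLoader, which relies on WebURLLoaderImpl to send
// the request on to the main (browser) process.
struct AssociatedURLLoader : WebURLLoader {
  void Load(const std::string& url) override {
    // DocumentThreadableLoader -> WebURLLoaderImpl -> main process.
  }
};

// BufferedResourceLoader drives the load through the loader object.
struct BufferedResourceLoader {
  void Start(const std::string& url) {
    loader_ = std::make_unique<AssociatedURLLoader>();
    loader_->Load(url);
  }
  std::unique_ptr<WebURLLoader> loader_;
};

// BufferedDataSource owns the loader; WebMediaPlayerImpl owns the source.
struct BufferedDataSource {
  void Initialize(const std::string& url) { loader_.Start(url); }
  BufferedResourceLoader loader_;
};
```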

Internally, BufferedResourceLoader maintains an extensible memory buffer to hold the downloaded video data.
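The buffer can be pictured roughly like this (a minimal sketch of the idea, not the real code; the class and method names are invented): append network data as it arrives, serve reads at arbitrary offsets.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal sketch of an extensible in-memory buffer for downloaded
// media bytes: append on receive, copy out on read.
class MediaByteBuffer {
 public:
  // Called as network data arrives.
  void Append(const uint8_t* data, size_t size) {
    bytes_.insert(bytes_.end(), data, data + size);
  }

  // Called by the reader (e.g. the demuxer); returns bytes copied.
  size_t Read(size_t offset, uint8_t* out, size_t size) const {
    if (offset >= bytes_.size()) return 0;
    const size_t n = std::min(size, bytes_.size() - offset);
    std::memcpy(out, bytes_.data() + offset, n);
    return n;
  }

 private:
  std::vector<uint8_t> bytes_;
};
```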

In my reading I never found a mechanism that pauses buffering, nor one that spills the buffer to disk. If the video file is large, keeping all of it in memory is a recipe for very poor resource consumption. Of course, my copy of the Chromium code is somewhat old; newer versions may well have improved this.

Once the video data is available, the next task is to parse the audio and video data. Before analyzing that part, a quick primer on the relevant concepts. A video file generally contains two parts, a video stream and an audio stream, and different container formats package them differently. Combining audio and video streams into one file is called muxing; conversely, separating the audio and video streams out of a media file is called demuxing. To play a video file, the two streams must be separated from the file stream and decoded independently. Decoded video frames can be rendered directly, while decoded audio frames are sent to the audio output device's buffer for playback; naturally, video rendering and audio playback must be kept synchronized by their timestamps.
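To illustrate the timestamp synchronization just mentioned (a generic sketch, not Chromium's actual clock code; all names are invented): a common scheme treats the audio clock as the master and displays a video frame only once the audio clock reaches that frame's presentation timestamp.

```cpp
#include <cstdint>

// Generic A/V sync sketch: audio playback drives the master clock;
// a video frame is displayed when the clock passes its timestamp.
struct VideoFramePkt {
  int64_t timestamp_us;  // presentation timestamp in microseconds
};

class AVSync {
 public:
  // Updated continuously from the audio output path.
  void OnAudioClock(int64_t now_us) { audio_clock_us_ = now_us; }

  // True once the frame is due (or overdue) for display.
  bool ShouldDisplay(const VideoFramePkt& frame) const {
    return frame.timestamp_us <= audio_clock_us_;
  }

 private:
  int64_t audio_clock_us_ = 0;
};
```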

The logical structure of the demuxer in WebMediaPlayerImpl is as follows:

[structure diagram omitted in this reprint]

WebMediaPlayerImpl creates a different Demuxer object depending on the source:

If the video source is binary data pushed in from JavaScript (the Media Source case), it creates a ChunkDemuxer object to separate the audio stream from the video stream;

If the video source is a network resource specified by URL, it creates an FFmpegDemuxer object, which relies on the BufferedDataSource object to access the media data loaded over the network. A minimal sketch of this choice follows.
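The sketch below uses simplified stand-in types; the factory function and its parameters are invented for illustration:

```cpp
#include <memory>

struct BufferedDataSource {};      // stands in for the data source above
struct Demuxer { virtual ~Demuxer() = default; };
struct ChunkDemuxer : Demuxer {};  // fed binary data from JavaScript
struct FFmpegDemuxer : Demuxer {   // reads via BufferedDataSource
  explicit FFmpegDemuxer(BufferedDataSource* src) : src_(src) {}
  BufferedDataSource* src_;
};

std::unique_ptr<Demuxer> CreateDemuxer(bool is_media_source,
                                       BufferedDataSource* source) {
  if (is_media_source)
    return std::make_unique<ChunkDemuxer>();
  return std::make_unique<FFmpegDemuxer>(source);
}
```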

The implementations of ChunkDemuxer and FFmpegDemuxer are not covered here; it is enough to know that their job is to split the media stream into a video stream and an audio stream. Let's first walk through the overall playback flow.

The WebMediaPlayerImpl class has a Pipeline object that is responsible for the video playback process; a pipeline is exactly that, the series of stages a video passes through during playback. Internally, Pipeline uses a state machine to manage the logic of each playback stage. When Pipeline initializes AudioRendererImpl, it calls the demuxer's GetStream method to obtain the audio stream and passes it into the AudioRendererImpl object; likewise, when VideoRendererBase is initialized, it is handed the video stream. Reads from the audio and video streams are abstracted behind the DemuxerStream interface.
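In sketch form, the initialization step wires each renderer to its stream. These are simplified stand-ins; the real Pipeline does this asynchronously and also manages state transitions and error handling:

```cpp
// Stand-in for the DemuxerStream abstraction described above.
struct DemuxerStream {
  enum Type { AUDIO, VIDEO };
  // In the real interface, reads asynchronously yield encoded buffers.
};

struct Demuxer {
  DemuxerStream audio_, video_;
  // GetStream hands out the stream of the requested type.
  DemuxerStream* GetStream(DemuxerStream::Type type) {
    return type == DemuxerStream::AUDIO ? &audio_ : &video_;
  }
};

struct AudioRendererImpl {
  void Initialize(DemuxerStream* stream) { stream_ = stream; }
  DemuxerStream* stream_ = nullptr;
};

struct VideoRendererBase {
  void Initialize(DemuxerStream* stream) { stream_ = stream; }
  DemuxerStream* stream_ = nullptr;
};

// Pipeline's (simplified) renderer-initialization step.
void InitializeRenderers(Demuxer& demuxer,
                         AudioRendererImpl& audio,
                         VideoRendererBase& video) {
  audio.Initialize(demuxer.GetStream(DemuxerStream::AUDIO));
  video.Initialize(demuxer.GetStream(DemuxerStream::VIDEO));
}
```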

Next, the VideoRendererBase flow. The name is a little odd: despite the word "renderer", what it actually does is video-stream decoding; the drawing itself is handed back to the WebMediaPlayerImpl class (see WebMediaPlayerImpl's Paint method for details). First, the structure:

[structure diagram omitted in this reprint]

VideoRendererBase maintains a list of VideoDecoders, but its main internal logic is delegated to VideoFrameStream. VideoFrameStream's main responsibilities are selecting a decoder and decoding the video stream read from DemuxerStream. The decoded result is a video frame structure, VideoFrame, which wraps YUV data; the YUV data can be used directly, or converted to RGB, for the rendering operation.
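For reference, the YUV-to-RGB conversion mentioned above is a fixed linear transform per pixel. Here is a sketch using the common BT.601 coefficients (the function name is invented; real players use optimized SIMD routines, or sample YUV textures directly on the GPU):

```cpp
#include <algorithm>
#include <cstdint>

// Clamp an intermediate value into the valid 8-bit range.
static uint8_t Clamp(int v) {
  return (uint8_t)std::min(255, std::max(0, v));
}

// Convert one YUV pixel (8-bit, BT.601 approximation) to RGB.
void YuvToRgb(uint8_t y, uint8_t u, uint8_t v,
              uint8_t* r, uint8_t* g, uint8_t* b) {
  const int c = y;
  const int d = u - 128;
  const int e = v - 128;
  *r = Clamp((int)(c + 1.402 * e));
  *g = Clamp((int)(c - 0.344136 * d - 0.714136 * e));
  *b = Clamp((int)(c + 1.772 * d));
}
```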

Here is a quick look at how the video decoders are created and selected:

In the WebMediaPlayerImpl class, a list of video decoders is created, in order (a sketch follows the list):

1. If the GPU supports video decoding, create a GpuVideoDecoder object

2. Create a VpxVideoDecoder object

3. Create an FFmpegVideoDecoder object
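A minimal sketch of the list construction (simplified stand-in types; the real code also wires in message loops and factory callbacks):

```cpp
#include <memory>
#include <vector>

struct VideoDecoder { virtual ~VideoDecoder() = default; };
struct GpuVideoDecoder : VideoDecoder {};
struct VpxVideoDecoder : VideoDecoder {};
struct FFmpegVideoDecoder : VideoDecoder {};

std::vector<std::unique_ptr<VideoDecoder>> CreateVideoDecoders(
    bool gpu_can_decode) {
  std::vector<std::unique_ptr<VideoDecoder>> decoders;
  if (gpu_can_decode)                                          // step 1
    decoders.push_back(std::make_unique<GpuVideoDecoder>());
  decoders.push_back(std::make_unique<VpxVideoDecoder>());     // step 2
  decoders.push_back(std::make_unique<FFmpegVideoDecoder>());  // step 3
  return decoders;
}
```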

After the decoder list is created, it is passed into the VideoRendererBase object, and the selection logic is ultimately handled by VideoFrameStream (a sketch follows the list):

1. If the video configuration contains an encryption option, create a DecryptingVideoDecoder as the decoder

2. If there is no encryption option, select the first decoder from the passed-in list.

3. If the selected decoder's Initialize call fails (the decoder does not support that format), try the next decoder in the list, in order.
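And a sketch of the selection with fallback. Initialize is synchronous here for brevity, while the real VideoFrameStream drives it through asynchronous callbacks; the function and type names are simplified stand-ins:

```cpp
#include <memory>
#include <vector>

struct VideoConfig { bool is_encrypted = false; };

struct VideoDecoder {
  virtual ~VideoDecoder() = default;
  // Returns false if this decoder cannot handle the configuration.
  virtual bool Initialize(const VideoConfig& config) = 0;
};

// Stand-in for the decoder used when the stream is encrypted.
struct DecryptingVideoDecoder : VideoDecoder {
  bool Initialize(const VideoConfig& config) override {
    return config.is_encrypted;
  }
};

// Picks the decoder per the three rules above.
VideoDecoder* SelectDecoder(
    const VideoConfig& config,
    std::vector<std::unique_ptr<VideoDecoder>>& decoders,
    DecryptingVideoDecoder* decrypting) {
  if (config.is_encrypted)        // rule 1: encrypted stream
    return decrypting;
  for (auto& decoder : decoders)  // rules 2 and 3: first that initializes
    if (decoder->Initialize(config))
      return decoder.get();
  return nullptr;                 // no decoder supports this format
}
```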

AudioRendererImpl is similar in structure to VideoRendererBase; audio rendering is slightly more complex than video rendering, because the audio data must also be delivered to the sound card device for playback. The related structure diagram:

[structure diagram omitted in this reprint]
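As a rough illustration of that extra step (a generic producer/consumer sketch, not Chromium's actual audio-sink interface; the class is invented): decoded audio is queued by the renderer and pulled by the audio device thread in fixed-size chunks.

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Decoded audio is pushed by the renderer thread and pulled by the
// audio device callback; underruns are padded with silence.
class AudioQueue {
 public:
  void Push(const std::vector<int16_t>& frames) {
    std::lock_guard<std::mutex> lock(mu_);
    samples_.insert(samples_.end(), frames.begin(), frames.end());
  }

  // Called from the audio device callback with a fixed-size buffer.
  void Pull(int16_t* out, size_t count) {
    std::lock_guard<std::mutex> lock(mu_);
    const size_t n = std::min(count, samples_.size());
    std::copy(samples_.begin(), samples_.begin() + n, out);
    samples_.erase(samples_.begin(), samples_.begin() + n);
    std::fill(out + n, out + count, 0);  // pad underrun with silence
  }

 private:
  std::mutex mu_;
  std::deque<int16_t> samples_;
};
```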

At this point, the overall flow of the web media player should be fairly clear. Of course, many details remain, such as how ChunkDemuxer splits a Media Source stream, how FFmpegDemuxer uses FFmpeg internally, and the implementations of the various decoders. Without hands-on development experience, studying these details is genuinely time-consuming; interested readers can dig into them on their own.
