Android Multimedia Framework Summary (10): The Stagefright audio and video output process


When reprinting, please include the source link. This article is from countercurrent fish yuiop: http://blog.csdn.net/hejjunlin/article/details/52560012

The previous article covered how decoded data flows into buffers. Today we analyze audio and video output in the Stagefright framework.
First, today's agenda:

    • One diagram to review the data processing flow
    • The video renderer construction process
    • The audio-data-to-buffer process
    • How AudioPlayer runs inside AwesomePlayer
    • Audio and video synchronization
    • Audio and video output
    • One diagram of the audio and video output
One diagram to review the data processing flow

The video renderer construction process

At construction time, AwesomePlayer creates a new AwesomeEvent bound to AwesomePlayer::onVideoEvent; posting this event to the timed event queue is what drives video output.
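A minimal sketch of the event wiring, paraphrased from AOSP's AwesomePlayer.cpp (names follow the framework source, but details vary across Android versions):

    struct AwesomeEvent : public TimedEventQueue::Event {
        AwesomeEvent(
                AwesomePlayer *player,
                void (AwesomePlayer::*method)())
            : mPlayer(player),
              mMethod(method) {
        }

    protected:
        virtual void fire(TimedEventQueue *queue, int64_t /* now_us */) {
            // Dispatch to the bound member, e.g. onVideoEvent().
            (mPlayer->*mMethod)();
        }

    private:
        AwesomePlayer *mPlayer;
        void (AwesomePlayer::*mMethod)();
    };

    // In AwesomePlayer's constructor:
    mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
    mVideoEventPending = false;

Whenever AwesomePlayer wants the next frame handled, it posts mVideoEvent to its TimedEventQueue, and fire dispatches to onVideoEvent.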

When the video event fires, onVideoEvent in due course calls the initRenderer_l function, sketched below.
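A simplified sketch of initRenderer_l, paraphrased from AOSP (here `component` stands for the decoder's name, obtained earlier in the real function; the real condition also excludes the OMX.google.* software codecs):

    void AwesomePlayer::initRenderer_l() {
        if (mNativeWindow == NULL) {
            return;
        }

        sp<MetaData> meta = mVideoSource->getFormat();

        int32_t rotationDegrees;
        if (!mVideoTrack->getFormat()->findInt32(
                kKeyRotation, &rotationDegrees)) {
            rotationDegrees = 0;
        }

        if (!strncmp(component, "OMX.", 4)) {
            // Hardware OMX decoders render straight into the native window.
            mVideoRenderer = new AwesomeNativeWindowRenderer(
                    mNativeWindow, rotationDegrees);
        } else {
            // Software decoders allocate buffers in local address space,
            // so a copying, local renderer is needed.
            mVideoRenderer = new AwesomeLocalRenderer(mNativeWindow, meta);
        }
    }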

From this code: in essence, a remote renderer is built through OMX::createRenderer. A hardware renderer is established first: mVideoRenderer = new AwesomeNativeWindowRenderer(mNativeWindow, rotationDegrees); if that path does not apply (a software decoder), a new AwesomeLocalRenderer(mNativeWindow, format) is created instead.
Next, look at AwesomeLocalRenderer. At construction time it creates a new SoftwareRenderer(nativeWindow), as the sketch below shows.
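A sketch of AwesomeLocalRenderer, paraphrased from AOSP (in older releases the SoftwareRenderer is created inside a separate init function rather than in the initializer list):

    struct AwesomeLocalRenderer : public AwesomeRenderer {
        AwesomeLocalRenderer(
                const sp<ANativeWindow> &nativeWindow,
                const sp<MetaData> &meta)
            : mTarget(new SoftwareRenderer(nativeWindow, meta)) {
        }

        virtual void render(MediaBuffer *buffer) {
            render((const uint8_t *)buffer->data() + buffer->range_offset(),
                   buffer->range_length());
        }

        void render(const void *data, size_t size) {
            // Color-convert and queue the frame to the native window.
            mTarget->render(data, size, NULL);
        }

    protected:
        virtual ~AwesomeLocalRenderer() {
            delete mTarget;
            mTarget = NULL;
        }

    private:
        SoftwareRenderer *mTarget;
    };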

AwesomeLocalRenderer essentially does locally what OMX::createRenderer does. If the video decoder is a software component, an AwesomeLocalRenderer is created as mVideoRenderer. Its constructor calls its own init function, which performs the same work as OMX::createRenderer: the data read from the decoder is handed over to the renderer for display.
Now that the renderer can draw frames to the screen, recall that MediaExtractor split the stream into separate audio and video tracks. Who keeps them in sync?

The audio-data-to-buffer process

Whether for audio or video, the buffered data always belongs to a stream that maintains its own timeline. An analogy: in a "double reed" comedy act, one performer speaks while the other mimes the actions. The mime cannot fall behind the speech, and the speech cannot outrun the mime; natural pauses and cues in the lines keep the two in step. In OpenCore, a master clock is configured, and audio and video both use it as the reference for output. In Stagefright, audio output is driven by callback functions that pull data, and video is synchronized against the audio timestamps. Before going further, we need to understand the audio playback flow:
In the Stagefright framework, the audio part is delegated to AudioPlayer, which is built in AwesomePlayer::play_l. Here is the code analyzed previously, this time following the AudioPlayer path:
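A minimal sketch of the relevant portion of AwesomePlayer::play_l, paraphrased from AOSP (constructor arguments and flag handling vary across Android versions):

    // Inside AwesomePlayer::play_l() -- creating the AudioPlayer:
    if (mAudioSource != NULL) {
        if (mAudioPlayer == NULL) {
            if (mAudioSink != NULL) {
                // Hand the MediaPlayerService sink to the AudioPlayer.
                mAudioPlayer = new AudioPlayer(mAudioSink, this);
                mAudioPlayer->setSource(mAudioSource);

                // The AudioPlayer becomes the reference clock for A/V sync.
                mTimeSource = mAudioPlayer;
            }
        }

        if (mVideoSource == NULL) {
            // Audio-only playback: start the AudioPlayer right away.
            status_t err = startAudioPlayer_l();
            if (err != OK) {
                return err;
            }
        }
    }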

Then look at the startAudioPlayer_l function:
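A sketch of startAudioPlayer_l, simplified from AOSP:

    status_t AwesomePlayer::startAudioPlayer_l() {
        CHECK(!(mFlags & AUDIO_RUNNING));

        if (mAudioSource == NULL || mAudioPlayer == NULL) {
            return OK;
        }

        if (!(mFlags & AUDIOPLAYER_STARTED)) {
            mFlags |= AUDIOPLAYER_STARTED;

            // The MediaSource was already started to let the prefetcher
            // read data, so pass sourceAlreadyStarted = true.
            status_t err = mAudioPlayer->start(
                    true /* sourceAlreadyStarted */);
            if (err != OK) {
                return err;
            }
        } else {
            // Returning from pause.
            mAudioPlayer->resume();
        }

        mFlags |= AUDIO_RUNNING;
        mWatchForAudioEOS = true;
        return OK;
    }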

Next, look at the audio start operation, mAudioPlayer->start(true). The procedure so far has been inside AwesomePlayer; we now move into the AudioPlayer.cpp class. (The start function itself is summarized, with a sketch, in the next section.)


Here we meet mAudioSink for the first time: when mAudioSink is not NULL, AwesomePlayer passes it into the AudioPlayer constructor, and AudioPlayer performs its play operations through this mAudioSink.
The mAudioSink here is the AudioOutput object registered from MediaPlayerService; the relevant code lives in MediaPlayerService:
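A sketch of the registration, paraphrased from MediaPlayerService::Client::setDataSource in MediaPlayerService.cpp (AOSP; details vary by version):

    // Inside MediaPlayerService::Client::setDataSource():
    sp<MediaPlayerBase> p = createPlayer(playerType);  // e.g. StagefrightPlayer

    if (p != NULL && !p->hardwareOutput()) {
        // Each client gets an AudioOutput -- a thin wrapper around
        // AudioTrack -- as its audio sink.
        mAudioOutput = new AudioOutput(mAudioSessionId);
        static_cast<MediaPlayerInterface *>(p.get())
                ->setAudioSink(mAudioOutput);
    }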

It is passed along indirectly through StagefrightPlayer->setAudioSink and eventually reaches AwesomePlayer, as follows:
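A sketch of the forwarding chain, paraphrased from AOSP:

    // StagefrightPlayer.cpp -- forwards the sink to AwesomePlayer:
    void StagefrightPlayer::setAudioSink(const sp<AudioSink> &audioSink) {
        MediaPlayerInterface::setAudioSink(audioSink);
        mPlayer->setAudioSink(audioSink);
    }

    // AwesomePlayer.cpp -- stores it in the mAudioSink member:
    void AwesomePlayer::setAudioSink(
            const sp<MediaPlayerBase::AudioSink> &audioSink) {
        Mutex::Autolock autoLock(mLock);
        mAudioSink = audioSink;
    }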

The mAudioSink member is then used when constructing the AudioPlayer, so whenever you see operations on the passed-in mAudioSink, remember that the actual object is the AudioOutput defined in MediaPlayerService.

How AudioPlayer runs inside AwesomePlayer

First, see the AudioPlayer constructor below.
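A sketch of the constructor, paraphrased from AudioPlayer.cpp in AOSP (later versions take additional flags):

    AudioPlayer::AudioPlayer(
            const sp<MediaPlayerBase::AudioSink> &audioSink,
            AwesomePlayer *observer)
        : mAudioTrack(NULL),
          mInputBuffer(NULL),
          mSampleRate(0),
          mLatencyUs(0),
          mFrameSize(0),
          mNumFramesPlayed(0),
          mPositionTimeMediaUs(-1),   // timestamp carried in the data
          mPositionTimeRealUs(-1),    // playback time from frames/rate
          mSeeking(false),
          mReachedEOS(false),
          mFinalStatus(OK),
          mStarted(false),
          mIsFirstBuffer(false),
          mFirstBufferResult(OK),
          mFirstBuffer(NULL),
          mAudioSink(audioSink),      // the AudioOutput from MediaPlayerService
          mObserver(observer) {
    }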

Its job is mainly initialization; the incoming audioSink is stored in the member mAudioSink.
Now go back to the start function mentioned above. It can be summarized as follows:

    • Call mSource->read to start decoding; decoding the first frame effectively kicks off the decoding loop.
    • Obtain the audio parameters: sample rate, channel count, and quantization bits (only pcm_16_bit is supported here).
    • Start the output: if mAudioSink is non-null, start mAudioSink for output; otherwise construct an AudioTrack for output. AudioTrack is the lower-level interface here; AudioOutput is a wrapper around AudioTrack.
    • The main code in the start method invokes mAudioSink to do the work, as the sketch after this list shows.
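A condensed sketch of AudioPlayer::start, paraphrased from AOSP (constants such as AudioSystem::PCM_16_BIT are from older releases; newer ones use audio_format_t):

    status_t AudioPlayer::start(bool sourceAlreadyStarted) {
        CHECK(!mStarted);
        CHECK(mSource != NULL);

        if (!sourceAlreadyStarted) {
            status_t err = mSource->start();
            if (err != OK) {
                return err;
            }
        }

        // 1. Decode the first frame; this kicks off the decoding loop.
        mFirstBufferResult = mSource->read(&mFirstBuffer);
        mIsFirstBuffer = (mFirstBuffer != NULL);

        // 2. Fetch the audio parameters (only 16-bit PCM is handled).
        sp<MetaData> format = mSource->getFormat();
        const char *mime;
        CHECK(format->findCString(kKeyMIMEType, &mime));
        CHECK(!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW));
        CHECK(format->findInt32(kKeySampleRate, &mSampleRate));
        int32_t numChannels;
        CHECK(format->findInt32(kKeyChannelCount, &numChannels));

        // 3. Start the output.
        if (mAudioSink.get() != NULL) {
            // Normal case: play through MediaPlayerService's AudioOutput,
            // registering AudioSinkCallback as the pull-mode data source.
            status_t err = mAudioSink->open(
                    mSampleRate, numChannels, AudioSystem::PCM_16_BIT,
                    DEFAULT_AUDIOSINK_BUFFERCOUNT,
                    &AudioPlayer::AudioSinkCallback, this);
            if (err != OK) {
                return err;
            }
            mLatencyUs = (int64_t)mAudioSink->latency() * 1000;
            mFrameSize = mAudioSink->frameSize();
            mAudioSink->start();
        } else {
            // No sink registered: drive an AudioTrack directly.
            mAudioTrack = new AudioTrack(
                    AudioSystem::MUSIC, mSampleRate, AudioSystem::PCM_16_BIT,
                    (numChannels == 2) ? AudioSystem::CHANNEL_OUT_STEREO
                                       : AudioSystem::CHANNEL_OUT_MONO,
                    0, 0, &AudioCallback, this, 0);
            mLatencyUs = (int64_t)mAudioTrack->latency() * 1000;
            mFrameSize = mAudioTrack->frameSize();
            mAudioTrack->start();
        }

        mStarted = true;
        return OK;
    }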

As just introduced, mAudioSink is the AudioOutput object; see the actual implementation (code in MediaPlayerService.cpp).

First of all, note that among the parameters passed to mAudioSink->open is a function pointer, AudioPlayer::AudioSinkCallback. Its purpose is to let AudioOutput call back periodically while playing the PCM so the buffer can be refilled with data. The implementation is as follows:
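A simplified sketch of AudioOutput::open, paraphrased from MediaPlayerService.cpp in AOSP:

    status_t MediaPlayerService::AudioOutput::open(
            uint32_t sampleRate, int channelCount, int format,
            int bufferCount, AudioCallback cb, void *cookie) {
        // 1. Save the callback (AudioPlayer::AudioSinkCallback) and its
        //    cookie, which is the AudioPlayer object pointer.
        mCallback = cb;
        mCallbackCookie = cookie;

        // 2. Compute frameCount from the hardware output parameters and
        //    the requested sample rate.
        int afSampleRate, afFrameCount;
        AudioSystem::getOutputFrameCount(&afFrameCount, mStreamType);
        AudioSystem::getOutputSamplingRate(&afSampleRate, mStreamType);
        int frameCount =
                (sampleRate * afFrameCount * bufferCount) / afSampleRate;

        // 3. Construct the AudioTrack in callback mode; CallbackWrapper
        //    will invoke mCallback whenever the track needs more PCM.
        AudioTrack *t = new AudioTrack(
                mStreamType, sampleRate, format,
                (channelCount == 2) ? AudioSystem::CHANNEL_OUT_STEREO
                                    : AudioSystem::CHANNEL_OUT_MONO,
                frameCount, 0 /* flags */, CallbackWrapper, this);

        if (t == NULL || t->initCheck() != NO_ERROR) {
            delete t;
            return NO_INIT;
        }

        // 4. Keep the AudioTrack in the mTrack member.
        mTrack = t;
        return NO_ERROR;
    }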

To summarize the code above:

    • 1. Process the incoming parameters: the callback function is saved in mCallback, and the cookie is the AudioPlayer object pointer; frameCount is then computed from the sample rate and channel count.
    • 2. Construct the AudioTrack object and assign it to t.
    • 3. Store the AudioTrack object in the mTrack member.
      With the above complete, continuing through the AudioPlayer::start function: an AudioTrack object has been instantiated and information such as frame size and bit depth obtained; mAudioSink->start is then called, finally reaching the MediaPlayerService audio output's start function.
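The start function on the output side is tiny; a sketch from MediaPlayerService.cpp:

    void MediaPlayerService::AudioOutput::start() {
        if (mTrack) {
            mTrack->setVolume(mLeftVolume, mRightVolume);
            // From this point on, the AudioTrack fires the callback
            // periodically to pull decoded PCM.
            mTrack->start();
        }
    }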

Once mTrack->start is called, the AudioTrack periodically invokes the callback function to pull data from the decoder.

Audio and video synchronization

Back to the question before us: how are audio and video kept in sync? The answer lies in fillBuffer, which fills the sink's buffer with decoded data.
The code is as follows:
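A sketch of the callback entry point plus the timestamp bookkeeping inside fillBuffer, paraphrased from AudioPlayer.cpp in AOSP:

    // The sink calls back into AudioPlayer whenever it needs more PCM:
    size_t AudioPlayer::AudioSinkCallback(
            MediaPlayerBase::AudioSink *audioSink,
            void *buffer, size_t size, void *cookie) {
        AudioPlayer *me = (AudioPlayer *)cookie;
        return me->fillBuffer(buffer, size);
    }

    // Excerpt from AudioPlayer::fillBuffer -- while copying decoded data
    // into the sink's buffer, the two timestamps are maintained:
    CHECK(mInputBuffer->meta_data()->findInt64(
            kKeyTime, &mPositionTimeMediaUs));      // timestamp in the data

    mPositionTimeRealUs =
            ((mNumFramesPlayed + size_done / mFrameSize) * 1000000)
                / mSampleRate;                      // frames played / rate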

To summarize the code above: when the callback fires and AudioPlayer reads the decoded data, AudioPlayer obtains two timestamps, mPositionTimeMediaUs and mPositionTimeRealUs. mPositionTimeMediaUs is the timestamp carried in the data itself, while mPositionTimeRealUs is the actual playback time of that data, computed from the frame count and the sample rate.

The synchronization code built on these timestamps (see the onVideoEvent sketch in the output section below) can be summarized as follows:

    • mTimeSource = mAudioPlayer is executed when the AudioPlayer is constructed, so the AudioPlayer serves as the reference clock.
    • The video frame timestamp timeUs in that code is obtained by: CHECK(mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs));
    • realTimeOffset = getRealTimeUsLocked() - mPositionTimeRealUs; at the moment the first frame is displayed, this expresses the difference between the current audio playback time and the first video frame.
    • The mapping itself is fetched through mAudioPlayer->getMediaTimeMapping(int64_t *realtime_us, int64_t *mediatime_us), which returns both timestamps under Mutex::Autolock autoLock(mLock); see the sketch after this list.
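Both functions are small; a sketch paraphrased from AudioPlayer.cpp in AOSP:

    int64_t AudioPlayer::getMediaTimeUs() {
        Mutex::Autolock autoLock(mLock);

        if (mPositionTimeMediaUs < 0 || mPositionTimeRealUs < 0) {
            return 0;
        }

        // How far real playback has advanced into the current PCM packet.
        int64_t realTimeOffset = getRealTimeUsLocked() - mPositionTimeRealUs;
        if (realTimeOffset < 0) {
            realTimeOffset = 0;
        }

        return mPositionTimeMediaUs + realTimeOffset;
    }

    bool AudioPlayer::getMediaTimeMapping(
            int64_t *realtime_us, int64_t *mediatime_us) {
        Mutex::Autolock autoLock(mLock);

        *realtime_us = mPositionTimeRealUs;
        *mediatime_us = mPositionTimeMediaUs;

        return mPositionTimeRealUs != -1 && mPositionTimeMediaUs != -1;
    }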

The difference between the two indicates how much of this packet of PCM data has been played. Video in Stagefright is synchronized against the difference between these two timestamps obtained from the AudioPlayer.

Audio and video output

Finally, back to the onVideoEvent method from the beginning of this article.
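A condensed sketch of the synchronization and rendering logic in onVideoEvent, paraphrased from AOSP (the 40 ms / 10 ms thresholds differ between versions):

    void AwesomePlayer::onVideoEvent() {
        // ... read the next decoded frame into mVideoBuffer ...

        int64_t timeUs;
        CHECK(mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs));

        // Map the audio clock onto the media timeline.
        int64_t realTimeUs, mediaTimeUs;
        if (mAudioPlayer != NULL
                && mAudioPlayer->getMediaTimeMapping(
                        &realTimeUs, &mediaTimeUs)) {
            mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
        }

        int64_t nowUs = mTimeSource->getRealTimeUs() - mTimeSourceDeltaUs;
        int64_t latenessUs = nowUs - timeUs;

        if (latenessUs > 40000) {
            // The frame is more than 40 ms late: drop it.
            mVideoBuffer->release();
            mVideoBuffer = NULL;
            postVideoEvent_l();
            return;
        }

        if (latenessUs < -10000) {
            // The frame is more than 10 ms early: wait and retry.
            postVideoEvent_l(10000);
            return;
        }

        if (mVideoRenderer != NULL) {
            mVideoRenderer->render(mVideoBuffer);   // frame becomes visible
        }

        postVideoEvent_l();   // schedule the next frame
    }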

In this way the final video data reaches the surface through the renderer, and you can see the picture and hear the sound.

One diagram of the audio and video output


