Maintaining the decoder data stream in Stagefright on Android 4.2.2


 

Android source code version: 4.2.2; hardware platform: A31

 

Preface:

In the previous blogs, we covered the control flow of stagefright: the creation of MediaExtractor, AwesomePlayer, StagefrightPlayer, and OMXCodec in the Android architecture, the creation of OMXNodeInstance at the underlying layer, the architecture of the OMX plug-in library and decoder components, and how to create our own OMXPlugin.

The other key to analyzing the source code architecture is data stream analysis. From here on, we analyze the codec buffers in stagefright:

1. Return to the source code of the OMXCodec creation process:

status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ...
    mVideoSource = OMXCodec::Create(
            mClient.interface(),           // mClient: the BpOMX proxy
            mVideoTrack->getFormat(),      // the video stream format
            false,                         // createEncoder = false: we want a decoder
            mVideoTrack,
            NULL, flags,
            USE_SURFACE_ALLOC ? mNativeWindow : NULL);  // create the decoder mVideoSource

    if (mVideoSource != NULL) {
        int64_t durationUs;
        if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }

        status_t err = mVideoSource->start();  // start the decoder OMXCodec, which runs its init()
        ...
    }
    ...
}

In the post on the OMX plug-in and codec component of the A31 in the Stagefright multimedia architecture under Android 4.2.2, we analyzed OMXCodec::Create in detail. Here we focus on mVideoSource->start(), i.e. the processing in OMXCodec::start:

status_t OMXCodec::start(MetaData *meta) {
    Mutex::Autolock autoLock(mLock);
    ...
    return init();  // initialize
}

The call to init() here allocates the buffers and lays the foundation for the subsequent stream operations:

status_t OMXCodec::init() {
    // mLock is held.
    ...
    err = allocateBuffers();  // allocate buffers on both ports
    if (err != (status_t)OK) {
        return err;
    }

    if (mQuirks & kRequiresLoadedToIdleAfterAllocation) {
        err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
        CHECK_EQ(err, (status_t)OK);

        setState(LOADED_TO_IDLE);
    }
    ...
}
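For orientation, the sendCommand above is one step of the standard OMX IL state ramp. A minimal sketch of the sequence init() is driving (this reflects general OMX IL semantics, not verbatim AOSP code; the helper name is made up):

// Sketch: the OMX IL state ramp behind OMXCodec::init().
// Buffers must be supplied before the Loaded->Idle transition can
// complete; Idle->Executing then enables data flow on the ports.
status_t rampToExecuting(const sp<IOMX> &omx, IOMX::node_id node) {
    status_t err = omx->sendCommand(node, OMX_CommandStateSet, OMX_StateIdle);
    if (err != OK) return err;
    // ... allocate all input/output buffers, wait for the command-complete
    // callback (state LOADED_TO_IDLE -> IDLE_TO_EXECUTING in OMXCodec) ...
    return omx->sendCommand(node, OMX_CommandStateSet, OMX_StateExecuting);
}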

Let's look at the implementation of allocateBuffers.

 

2. Focus on the Implementation of allocateBuffersOnPort

status_t OMXCodec::allocateBuffers() {
    status_t err = allocateBuffersOnPort(kPortIndexInput);   // input port allocation

    if (err != OK) {
        return err;
    }

    return allocateBuffersOnPort(kPortIndexOutput);          // output port allocation
}

Buffers are requested and allocated separately for the input and output ports. For a decoder, the input port must hold the source data to be decoded, and the decoded data is written to the output port, which matches the hardware logic. We take input buffer allocation as the example for analysis:

status_t OMXCodec::allocateBuffersOnPort(OMX_U32 portIndex) {
    ...
    OMX_PARAM_PORTDEFINITIONTYPE def;
    InitOMXParams(&def);
    def.nPortIndex = portIndex;  // e.g. the input port

    err = mOMX->getParameter(
            mNode, OMX_IndexParamPortDefinition, &def, sizeof(def));  // fetch the port parameters into def
    ...
        err = mOMX->allocateBuffer(
                mNode, portIndex, def.nBufferSize, &buffer, &info.mData);
    ...
        info.mBuffer = buffer;  // the returned buffer_id, identifying the underlying buffer
        info.mStatus = OWNED_BY_US;
        info.mMem = mem;
        info.mMediaBuffer = NULL;
    ...
        mPortBuffers[portIndex].push(info);  // record the buffer in mPortBuffers[portIndex]
    ...
}

The above process breaks down into three main steps:

Step 1: Query the current parameters of the underlying decoder component. These are generally the initial configuration completed when the OMXCodec was created, as described in the previous blog.

Step 2: allocateBuffer. This call is ultimately carried out by the underlying OMX component; its implementation will be analyzed together with the processing flow of the A31's underlying OMX codec component.

Step 3: Fill in the allocated buffer's bookkeeping structure, info, and record it on the port in mPortBuffers[portIndex].
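For reference, the per-buffer bookkeeping lives in OMXCodec::BufferInfo (abridged from the 4.2 OMXCodec.h; the exact field set may differ slightly between versions):

struct BufferInfo {
    IOMX::buffer_id mBuffer;    // handle returned by allocateBuffer
    BufferStatus mStatus;       // OWNED_BY_US / OWNED_BY_COMPONENT / OWNED_BY_CLIENT ...
    sp<IMemory> mMem;           // shared memory backing the buffer
    size_t mSize;
    void *mData;                // CPU-visible pointer used by drainInputBuffer
    MediaBuffer *mMediaBuffer;
};

Vector<BufferInfo> mPortBuffers[2];  // indexed by kPortIndexInput / kPortIndexOutput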

The above process completes the input and output buffer allocation, laying the foundation for the subsequent decoding operations.

 

3. MediaPlayer starts the player

The start API goes through MediaPlayerService::Client, then StagefrightPlayer and AwesomePlayer, where the video event that triggers playback is posted:

void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    ATRACE_CALL();

    if (mVideoEventPending) {
        return;
    }

    mVideoEventPending = true;
    mQueue.postEventWithDelay(mVideoEvent, delayUs < 0 ? 10000 : delayUs);
}

According to the analysis in the previous blog, the handler for this event is AwesomePlayer::onVideoEvent(). That function is large, so we extract the core read processing for analysis:

status_t err = mVideoSource->read(&mVideoBuffer, &options);  // the actual OMXCodec::read, called in a loop to read data into mVideoBuffer

The core of read is to obtain video data that can be rendered. In other words, read pulls compressed data from the video source and drives the decoder to turn it into displayable frames.
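A condensed sketch of where this read sits inside onVideoEvent (abridged; seek, EOS, and format-change handling are omitted):

// Condensed from AwesomePlayer::onVideoEvent() (sketch, not verbatim)
void AwesomePlayer::onVideoEvent() {
    mVideoEventPending = false;

    MediaSource::ReadOptions options;
    if (mSeeking != NO_SEEK) {
        options.setSeekTo(mSeekTimeUs,
                          MediaSource::ReadOptions::SEEK_CLOSEST_SYNC);
    }

    for (;;) {
        status_t err = mVideoSource->read(&mVideoBuffer, &options);
        options.clearSeekTo();  // only the first read after a seek carries the seek option
        // ... on INFO_FORMAT_CHANGED, retry; on EOS, stop; otherwise break
        //     out with a decoded mVideoBuffer ready for A/V sync and render
        break;
    }
    ...
}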

 

4. Implementation of the read Function

As you can imagine, read is a fairly involved process. Let's start with OMXCodec's read function:

status_t OMXCodec::read(MediaBuffer **buffer, const ReadOptions *options) {
    status_t err = OK;
    *buffer = NULL;

    Mutex::Autolock autoLock(mLock);
    ...
        drainInputBuffers();  // fill the input buffers with source data

        if (mState == EXECUTING) {
            // Otherwise mState == RECONFIGURING and this code will trigger
            // after the output port is reenabled.
            fillOutputBuffers();
        }
    ...
}

The core logic of read comes down to drainInputBuffers() and fillOutputBuffers(), which we analyze in turn.
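Before diving in, a distilled view of read's overall shape may help (a sketch of the control flow, not verbatim code):

// The shape of OMXCodec::read, schematically
status_t OMXCodec::read(MediaBuffer **buffer, const ReadOptions *options) {
    drainInputBuffers();   // 1. feed compressed data to the component
    fillOutputBuffers();   // 2. offer it empty output buffers

    // 3. block until the component reports a filled output buffer
    while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
        waitForBufferFilled_l();
    }

    // 4. hand the first decoded buffer to the caller (section 8 below)
    *buffer = ...;
    return OK;
}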

 

5. drainInputBuffers() reads the video data to be decoded into the decoder's input port

The processing code is complicated; it is posted here split into the following three parts for analysis:

(1)

bool OMXCodec::drainInputBuffer(BufferInfo *info) {
    if (mCodecSpecificDataIndex < mCodecSpecificData.size()) {
        CHECK(!(mFlags & kUseSecureInputBuffers));

        const CodecSpecificData *specific =
            mCodecSpecificData[mCodecSpecificDataIndex];

        size_t size = specific->mSize;

        if (!strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mMIME)
                && !(mQuirks & kWantsNALFragments)) {
            static const uint8_t kNALStartCode[4] =
                { 0x00, 0x00, 0x00, 0x01 };

            CHECK(info->mSize >= specific->mSize + 4);

            size += 4;

            memcpy(info->mData, kNALStartCode, 4);
            memcpy((uint8_t *)info->mData + 4,
                   specific->mData, specific->mSize);
        } else {
            CHECK(info->mSize >= specific->mSize);
            memcpy(info->mData, specific->mData, specific->mSize);  // copy the codec specific data
        }

        mNoMoreOutputData = false;

        CODEC_LOGV("calling emptyBuffer with codec specific data");

        status_t err = mOMX->emptyBuffer(
                mNode, info->mBuffer, 0, size,
                OMX_BUFFERFLAG_ENDOFFRAME | OMX_BUFFERFLAG_CODECCONFIG,
                0);  // hand the buffer to the component
        CHECK_EQ(err, (status_t)OK);

        info->mStatus = OWNED_BY_COMPONENT;

        ++mCodecSpecificDataIndex;
        return true;
    }
    ...                                                   // (1)

This part copies the codec-specific data into info->mData. The data depends on the container format, e.g. MP4: when OMXCodec is created, configureCodec records the special fields the decoder needs (for AVC, the SPS/PPS parameter sets) in mCodecSpecificData. Only after this configuration data has been delivered is the actual video source data read.
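As an illustration, this is roughly how configureCodec seeds mCodecSpecificData for AVC content (a sketch; in AOSP the real work of walking the avcC box is done by parseAVCCodecSpecificData, and the per-NAL call shown here is illustrative):

// Sketch based on OMXCodec::configureCodec for MEDIA_MIMETYPE_VIDEO_AVC
uint32_t type;
const void *data;
size_t size;
if (meta->findData(kKeyAVCC, &type, &data, &size)) {
    // Every SPS and PPS NAL unit found in the avcC box is queued as
    // codec specific data, to be sent via emptyBuffer with
    // OMX_BUFFERFLAG_CODECCONFIG before any real frame data.
    addCodecSpecificData(nalStart, nalSize);  // called once per SPS/PPS (illustrative)
}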

 

(2)

    for (;;) {
        MediaBuffer *srcBuffer;
        if (mSeekTimeUs >= 0) {
            if (mLeftOverBuffer) {
                mLeftOverBuffer->release();
                mLeftOverBuffer = NULL;
            }

            MediaSource::ReadOptions options;
            options.setSeekTo(mSeekTimeUs, mSeekMode);

            mSeekTimeUs = -1;
            mSeekMode = ReadOptions::SEEK_CLOSEST_SYNC;
            mBufferFilled.signal();

            err = mSource->read(&srcBuffer, &options);  // read real data from the video source; here it is MPEG4Source::read

            if (err == OK) {
                int64_t targetTimeUs;
                if (srcBuffer->meta_data()->findInt64(
                            kKeyTargetTime, &targetTimeUs)
                        && targetTimeUs >= 0) {
                    CODEC_LOGV("targetTimeUs = %lld us", targetTimeUs);
                    mTargetTimeUs = targetTimeUs;
                } else {
                    mTargetTimeUs = -1;
                }
            }
        } else if (mLeftOverBuffer) {
            srcBuffer = mLeftOverBuffer;
            mLeftOverBuffer = NULL;

            err = OK;
        } else {
            err = mSource->read(&srcBuffer);
        }

        if (err != OK) {
            signalEOS = true;
            mFinalStatus = err;
            mSignalledEOS = true;
            mBufferFilled.signal();
            break;
        }

        if (mFlags & kUseSecureInputBuffers) {
            info = findInputBufferByDataPointer(srcBuffer->data());
            CHECK(info != NULL);
        }

        size_t remainingBytes = info->mSize - offset;  // remaining input buffer space for video data

        if (srcBuffer->range_length() > remainingBytes) {  // the next chunk no longer fits
            if (offset == 0) {
                CODEC_LOGE(
                     "Codec's input buffers are too small to accomodate "
                     "buffer read from source (info->mSize = %d, srcLength = %d)",
                     info->mSize, srcBuffer->range_length());

                srcBuffer->release();
                srcBuffer = NULL;

                setState(ERROR);
                return false;
            }

            mLeftOverBuffer = srcBuffer;  // remember the unconsumed buffer
            break;
        }

        bool releaseBuffer = true;
        if (mFlags & kStoreMetaDataInVideoBuffers) {
            releaseBuffer = false;
            info->mMediaBuffer = srcBuffer;
        }

        if (mFlags & kUseSecureInputBuffers) {
            // Data in "info" is already provided at this time.
            releaseBuffer = false;
            CHECK(info->mMediaBuffer == NULL);
            info->mMediaBuffer = srcBuffer;
        } else {
            CHECK(srcBuffer->data() != NULL);
            memcpy((uint8_t *)info->mData + offset,
                   (const uint8_t *)srcBuffer->data() + srcBuffer->range_offset(),
                   srcBuffer->range_length());  // copy srcBuffer->range_length() bytes from the source into the input buffer
        }

        int64_t lastBufferTimeUs;
        CHECK(srcBuffer->meta_data()->findInt64(kKeyTime, &lastBufferTimeUs));
        CHECK(lastBufferTimeUs >= 0);
        if (mIsEncoder && mIsVideo) {
            mDecodingTimeList.push_back(lastBufferTimeUs);
        }

        if (offset == 0) {
            timestampUs = lastBufferTimeUs;
        }

        offset += srcBuffer->range_length();  // advance the write offset

        if (!strcasecmp(MEDIA_MIMETYPE_AUDIO_VORBIS, mMIME)) {
            CHECK(!(mQuirks & kSupportsMultipleFramesPerInputBuffer));
            CHECK_GE(info->mSize, offset + sizeof(int32_t));

            int32_t numPageSamples;
            if (!srcBuffer->meta_data()->findInt32(
                        kKeyValidSamples, &numPageSamples)) {
                numPageSamples = -1;
            }

            memcpy((uint8_t *)info->mData + offset,
                   &numPageSamples, sizeof(numPageSamples));

            offset += sizeof(numPageSamples);
        }

        if (releaseBuffer) {
            srcBuffer->release();
            srcBuffer = NULL;
        }

        ++n;

        if (!(mQuirks & kSupportsMultipleFramesPerInputBuffer)) {
            break;
        }

        int64_t coalescedDurationUs = lastBufferTimeUs - timestampUs;

        if (coalescedDurationUs > 250000ll) {
            // Don't coalesce more than 250ms worth of encoded data at once.
            break;
        }
    }
    ...

This part is the key to extracting the video source data, mainly via err = mSource->read(&srcBuffer, &options). mSource is passed in when the decoder is created; it is the track source produced by the container parser (MediaExtractor). For example, the MP4 parser MPEG4Extractor creates an MPEG4Source, so MPEG4Source::read is what actually supplies the raw video stream to be decoded.
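A minimal sketch of that producer side, i.e. the MediaSource contract OMXCodec relies on (assuming standard stagefright semantics; the track index variable is illustrative):

sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource);  // e.g. MPEG4Extractor for MP4
sp<MediaSource> track = extractor->getTrack(videoTrackIndex);       // e.g. an MPEG4Source
track->start();

MediaBuffer *srcBuffer;
while (track->read(&srcBuffer) == OK) {
    // one encoded access unit: bytes at srcBuffer->data() + range_offset(),
    // length srcBuffer->range_length(), timestamp in meta_data() (kKeyTime)
    srcBuffer->release();
}
track->stop();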

We can see that inside the for loop, the stream to be decoded is copied into the underlying buffer in order. Reading continues only while the next source chunk still fits into the remaining input buffer space; once srcBuffer->range_length() > remainingBytes, the loop breaks and the unconsumed buffer is saved in mLeftOverBuffer for the next round. The loop also breaks once more than 250 ms worth of encoded data has been coalesced into one buffer.

This batching is what gives the path its efficiency. At this point the compressed video data sits in the component's input buffer, info->mData.

 

(3)

    err = mOMX->emptyBuffer(
            mNode, info->mBuffer, 0, offset,
            flags, timestampUs);

emptyBuffer triggers the underlying decoder component to process the buffer. This part will be analyzed later together with the operations on the A31's underlying codec API.
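For context, the component side of emptyBuffer follows standard OMX IL semantics. Schematically (this is the IL contract, not the A31 code):

// A component's EmptyThisBuffer handler, schematically
OMX_ERRORTYPE EmptyThisBuffer(OMX_HANDLETYPE hComp,
                              OMX_BUFFERHEADERTYPE *pBuffer) {
    // Queue the buffer for the decode thread, which consumes the bytes at
    // pBuffer->pBuffer + pBuffer->nOffset (length pBuffer->nFilledLen)
    // and eventually invokes the EmptyBufferDone callback. That callback
    // is what surfaces as EMPTY_BUFFER_DONE in OMXCodec::on_message (section 7).
    return OMX_ErrorNone;
}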

6. fillOutputBuffers fills the output port buffers to drive the decoding process:

void OMXCodec::fillOutputBuffers() {
    CHECK_EQ((int)mState, (int)EXECUTING);
    ...
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];  // the output port
    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);
        if (info->mStatus == OWNED_BY_US) {
            fillOutputBuffer(&buffers->editItemAt(i));
        }
    }
}
 
void OMXCodec::fillOutputBuffer(BufferInfo *info) {
    CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);

    if (mNoMoreOutputData) {
        CODEC_LOGV("There is no more output data available, not "
                   "calling fillOutputBuffer");
        return;
    }

    CODEC_LOGV("Calling fillBuffer on buffer %p", info->mBuffer);
    status_t err = mOMX->fillBuffer(mNode, info->mBuffer);

    if (err != OK) {
        CODEC_LOGE("fillBuffer failed w/ error 0x%08x", err);
        setState(ERROR);
        return;
    }

    info->mStatus = OWNED_BY_COMPONENT;
}

From the code above, fillOutputBuffer is much simpler than drainInputBuffer, but the idea is the same: both ultimately hand control to the underlying decoder.

 

7. Wait for the decoded data to fill the output buffer; OMXCodecObserver completes the callback processing

Waiting for the decoded content is implemented in the read function with the following loop:

while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
    if ((err = waitForBufferFilled_l()) != OK) {  // wait for an output buffer to be filled
        return err;
    }
}
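waitForBufferFilled_l is essentially a timed condition wait (a sketch along the lines of the 4.2 implementation; the timeout constant's name may differ, and the encoder branch is omitted):

status_t OMXCodec::waitForBufferFilled_l() {
    // mBufferFilled is a Condition; waitRelative ends up in
    // pthread_cond_timedwait, with mLock held by the caller
    status_t err = mBufferFilled.waitRelative(mLock, kBufferFilledEventTimeOutNs);
    if (err != OK) {
        CODEC_LOGE("Timed out waiting for output buffers");
    }
    return err;
}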

So as long as mFilledBuffers is empty, the read thread waits (ultimately in pthread_cond_timedwait) for a buffer to be filled. The wake-up comes from a callback out of the underlying component; the callback is registered by the underlying decoder node, and the actual callback target is OMXCodecObserver:

struct OMXCodecObserver : public BnOMXObserver {
    OMXCodecObserver() {
    }

    void setCodec(const sp<OMXCodec> &target) {
        mTarget = target;
    }

    // from IOMXObserver
    virtual void onMessage(const omx_message &msg) {
        sp<OMXCodec> codec = mTarget.promote();

        if (codec.get() != NULL) {
            Mutex::Autolock autoLock(codec->mLock);
            codec->on_message(msg);  // dispatched to OMXCodec::on_message
            codec.clear();
        }
    }
    ...
};

In the end the message is handled by OMXCodec::on_message. The main message types are EMPTY_BUFFER_DONE and FILL_BUFFER_DONE; here we follow the callback for FILL_BUFFER_DONE:

void OMXCodec::on_message(const omx_message &msg) {
    if (mState == ERROR) {
        /*
         * only drop EVENT messages, EBD and FBD are still
         * processed for bookkeeping purposes
         */
        if (msg.type == omx_message::EVENT) {
            ALOGW("Dropping OMX EVENT message - we're in ERROR state.");
            return;
        }
    }

    switch (msg.type) {
        case omx_message::FILL_BUFFER_DONE:  // the underlying component reports a filled buffer
        {
            ..............
            mFilledBuffers.push_back(i);  // the filled output buffer is recorded in mFilledBuffers
            mBufferFilled.signal();       // signal so the frame can be rendered
            ...

We can see that the read thread is awakened here.

8. Extract an available decoded data frame

size_t index = *mFilledBuffers.begin();
mFilledBuffers.erase(mFilledBuffers.begin());

BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);  // the decoded video frame
CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
info->mStatus = OWNED_BY_CLIENT;

info->mMediaBuffer->add_ref();  // take a reference for the client
if (mSkipCutBuffer != NULL) {
    mSkipCutBuffer->submit(info->mMediaBuffer);
}
*buffer = info->mMediaBuffer;

After the thread wakes up, the buffer is fetched here: the BufferInfo for the output port is obtained and its MediaBuffer is returned as the final result of the read function.
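From the caller's perspective the contract looks like this (a sketch): render the buffer, then release() it, which drops the reference count and, via OMXCodec::signalBufferReturned, flips the BufferInfo back to OWNED_BY_US so the slot can be refilled:

MediaBuffer *mVideoBuffer;
if (mVideoSource->read(&mVideoBuffer, &options) == OK) {
    mVideoRenderer->render(mVideoBuffer);  // display the decoded frame
    mVideoBuffer->release();               // returns the buffer to the codec
    mVideoBuffer = NULL;
}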

9. Create the renderer and display the decoded frame

After steps 5, 6, 7, and 8, read finally returns an mVideoBuffer that can be used for display. The next question is how to deliver the video. The following code creates a renderer, mVideoRenderer, to display the decoded video:

if ((mNativeWindow != NULL)
        && (mVideoRendererIsPreview || mVideoRenderer == NULL)) {
    // (re)create the renderer on first use
    mVideoRendererIsPreview = false;

    initRenderer_l();  // initialize the renderer, e.g. create an AwesomeLocalRenderer
}

if (mVideoRenderer != NULL) {
    mSinceLastDropped++;
    mVideoRenderer->render(mVideoBuffer);  // start rendering, i.e. display the current buffer
    if (!mVideoRenderingStarted) {
        mVideoRenderingStarted = true;
        notifyListener_l(MEDIA_INFO, MEDIA_INFO_RENDERING_START);
    }
}

void AwesomePlayer::initRenderer_l() {
    ATRACE_CALL();

    if (mNativeWindow == NULL) {
        return;
    }

    sp<MetaData> meta = mVideoSource->getFormat();

    int32_t format;
    const char *component;
    int32_t decodedWidth, decodedHeight;
    CHECK(meta->findInt32(kKeyColorFormat, &format));
    CHECK(meta->findCString(kKeyDecoderComponent, &component));
    CHECK(meta->findInt32(kKeyWidth, &decodedWidth));
    CHECK(meta->findInt32(kKeyHeight, &decodedHeight));

    int32_t rotationDegrees;
    if (!mVideoTrack->getFormat()->findInt32(kKeyRotation, &rotationDegrees)) {
        rotationDegrees = 0;
    }

    mVideoRenderer.clear();

    // Must ensure that mVideoRenderer's destructor is actually executed
    // before creating a new one.
    IPCThreadState::self()->flushCommands();

    // Even if set scaling mode fails, we will continue anyway
    setVideoScalingMode_l(mVideoScalingMode);
    if (USE_SURFACE_ALLOC
            && !strncmp(component, "OMX.", 4)
            && strncmp(component, "OMX.google.", 11)
            && strcmp(component, "OMX.Nvidia.mpeg2v.decode")) {
        // Hardware decoders avoid the CPU color conversion by decoding
        // directly to ANativeBuffers, so we must use a renderer that
        // just pushes those buffers to the ANativeWindow.
        mVideoRenderer =
            new AwesomeNativeWindowRenderer(mNativeWindow, rotationDegrees);  // generally the hardware rendering path
    } else {
        // Other decoders are instantiated locally and as a consequence
        // allocate their buffers in local address space.  This renderer
        // then performs a color conversion and copy to get the data
        // into the ANativeBuffer.
        mVideoRenderer = new AwesomeLocalRenderer(mNativeWindow, meta);
    }
}

Here we can see two renderer creation branches. Component names beginning with OMX.google. denote software decoders, and the so-called local renderer, AwesomeLocalRenderer, is in effect a software renderer. For hardware components (names starting with OMX. but not OMX.google.), the AwesomeNativeWindowRenderer is used; its structure is as follows:

struct AwesomeNativeWindowRenderer : public AwesomeRenderer {
    AwesomeNativeWindowRenderer(
            const sp<ANativeWindow> &nativeWindow,
            int32_t rotationDegrees)
        : mNativeWindow(nativeWindow) {
        applyRotation(rotationDegrees);
    }

    virtual void render(MediaBuffer *buffer) {
        ATRACE_CALL();
        int64_t timeUs;
        CHECK(buffer->meta_data()->findInt64(kKeyTime, &timeUs));
        native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
        status_t err = mNativeWindow->queueBuffer(
                mNativeWindow.get(), buffer->graphicBuffer().get(), -1);  // queueBuffer renders and displays the frame directly
        if (err != 0) {
            ALOGE("queueBuffer failed with error %s (%d)", strerror(-err), -err);
            return;
        }

        sp<MetaData> metaData = buffer->meta_data();
        metaData->setInt32(kKeyRendered, 1);
    }
    ...
};

Not very complex, but it implements the AwesomeRenderer rendering interface, render, which is finally called to display the buffer. Here we meet the familiar queueBuffer; see my earlier post on SurfaceFlinger's queueBuffer implementation and VSYNC in Android 4.2.2. Through the application's native window, mNativeWindow (the player's VideoView inherits SurfaceView, and the Surface created by SurfaceView wraps the native window class), the current buffer is submitted to the SurfaceFlinger service for display. The details are not expanded here.
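Schematically, the producer-side window cycle looks like this (standard ANativeWindow semantics; on the hardware path the decoder writes straight into the dequeued buffer, so no copy is needed):

ANativeWindowBuffer *buf;
// get an empty buffer from the BufferQueue shared with SurfaceFlinger
native_window_dequeue_buffer_and_wait(nativeWindow.get(), &buf);
// ... the decoder (or a color-convert-and-copy in the local renderer) fills buf ...
nativeWindow->queueBuffer(nativeWindow.get(), buf, -1);  // submit for composition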

So far, we have walked the codec data stream under stagefright from end to end. The complexity of the flow is concentrated in emptyBuffer and fillBuffer. Of course, owing to limited space and ability, many details have not been analyzed in depth; discussion and corrections are welcome.
