1. AwesomeEvent is a class for scheduling deferred events. It works much like Looper and Handler at the framework layer. The player has time-consuming asynchronous operations, such as parsing files; performing them asynchronously and invoking a callback afterwards gives a better user experience.
struct AwesomeEvent : public TimedEventQueue::Event
It inherits from TimedEventQueue::Event, an inner class; refer to the previous blog post for details. TimedEventQueue starts a thread to service posted events.
2. AwesomeRemoteRenderer and AwesomeLocalRenderer
These two classes post data to the surface. The remote one is so called because it uses an OMX node (I have not dug further into it), while AwesomeLocalRenderer calls different methods depending on whether hardware acceleration is available.
3. The constructor initializes the events.
The general flow of MediaPlayer is:
setDataSource
prepareAsync
start
For now, only the FileSource path is analyzed.
setDataSource generally just records the file URI:
mUri = uri;  // record the URI
// The actual work will be done during preparation in the call to
// finishSetDataSource_l to avoid blocking the calling thread in
// setDataSource for any significant time.
Nothing else is done at this point, because doing more would take too long and could trigger an ANR in the upper layer; setDataSource itself is a synchronous call.
The most important work happens in prepareAsync. Look at this function:
if (!mQueueStarted) {
    mQueue.start();
    mQueueStarted = true;
} // Start the event queue so future events can be posted; a thread is created here.

mAsyncPrepareEvent = new AwesomeEvent(
        this, &AwesomePlayer::onPrepareAsyncEvent);

mQueue.postEvent(mAsyncPrepareEvent);

postEvent dispatches the event to onPrepareAsyncEvent():
if (mUri.size() > 0) {
    status_t err = finishSetDataSource_l();

    if (err != OK) {
        abortPrepare(err);
        return;
    }
}
finishSetDataSource_l() is the function mentioned in the comment above. It creates a different extractor depending on the source type.
For a file source: dataSource = DataSource::CreateFromURI(mUri.string(), &mUriHeaders); // the uriHeaders parameter is NULL here
Next it reaches source = new FileSource(uri + 7); (the + 7 skips the leading "file://" prefix), which creates the FileSource. Different extractors are registered, and each one sniffs the FileSource for its own MIME type.
Back in finishSetDataSource_l():
sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource, mime);
At this point the dataSource is sniffed by the registered sniffers, and the matching extractor is created according to the MIME type.
The last step is the setDataSource_l(extractor) call. At this point the audio track and video track are parsed out of the extractor, preparing for the codecs that follow.
if (mVideoTrack != NULL && mVideoSource == NULL) {
    // Use NPT timestamps if playing
    // RTSP streaming with video only content
    // (no audio to drive the clock for media time)
    uint32_t flags = 0;
    if (mRTSPController != NULL && mAudioTrack == NULL) {
        flags |= OMXCodec::kUseNptTimestamp;
    }

    status_t err = initVideoDecoder(flags);

    if (err != OK) {
        abortPrepare(err);
        return;
    }
}
if (mAudioTrack != NULL && mAudioSource == NULL) {
    status_t err = initAudioDecoder();

    if (err != OK) {
        abortPrepare(err);
        return;
    }
}
This is the key codec part. As shown above, both the audio track and the video track get their codecs initialized here:
initVideoDecoder(flags);
initAudioDecoder();
{
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false, // createEncoder
            mVideoTrack,
            NULL, flags);
OMXCodec is the key class here; it is the Stagefright code I wrote about a few days ago, and I will keep writing about it, so I won't repeat it here. Suffice it to say that a video source is returned: it produces the decoded data based on mVideoTrack.
    if (mVideoSource != NULL) {
        int64_t durationUs;
        if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }

        CHECK(mVideoTrack->getFormat()->findInt32(kKeyWidth, &mVideoWidth));
        CHECK(mVideoTrack->getFormat()->findInt32(kKeyHeight, &mVideoHeight));

        status_t err = mVideoSource->start();
        // At this point both the video source and the video track get
        // ready and request their buffers.

        if (err != OK) {
            mVideoSource.clear();
            return err;
        }
    }

    return mVideoSource != NULL ? OK : UNKNOWN_ERROR;
}
Once preparation finishes, the state is updated and the upper layer's prepareAsync callback is invoked. The data is now ready.
play_l():
if ((mVideoSource != NULL) && (!mVideoBuffer)) {
    // Changes to fix audio starting to play before video.
    // The first video frame is returned late, as it is referenced to
    // decode subsequent P and B frames.
    // For higher resolutions (e.g. 1080p) this return time is significant.
    // We need to trigger the video decoder earlier than audio so that
    // video catches up with audio in time.
    MediaSource::ReadOptions options;
    if (mSeeking) {
        LOGV("seeking to %lld us (%.2f secs)", mSeekTimeUs, mSeekTimeUs / 1E6);
        options.setSeekTo(
                mSeekTimeUs, MediaSource::ReadOptions::SEEK_CLOSEST_SYNC);
    }
    for (;;) {
        status_t err = mVideoSource->read(&mVideoBuffer, &options);
        options.clearSeekTo();

        if (err != OK) {
            CHECK_EQ(mVideoBuffer, NULL);

            if (err == INFO_FORMAT_CHANGED) {
                LOGV("VideoSource signalled format change.");

                if (mVideoRenderer != NULL) {
                    mVideoRendererIsPreview = false;
                    initRenderer_l();
                }
                continue;
            }
            break;
        }
        break;
    }
}
if (mVideoSource != NULL) {
    // Kick off video playback
    postVideoEvent_l();
}
The mVideoBuffer is then rendered to the surface. Audio, however, goes through AudioPlayer:
mAudioPlayer = new AudioPlayer(mAudioSink, this);
mAudioPlayer->setSource(mAudioSource);

// We've already started the MediaSource in order to enable
// the prefetcher to read its data.
status_t err = mAudioPlayer->start(
        true /* sourceAlreadyStarted */);
There are also the issues of audio/video synchronization and seeking, which I will cover next time.
PS: this article covers the 2.3 code. Things changed somewhat after 4.0, and the remaining topics are unfinished; I will also write something about 4.0.