Android 4.1 Audio System Changes

Android 4.1 goes by the English codename JB. To Chinese readers, the letters "jb" also happen to double as a rather rude bit of slang. Google has revised Android so frequently that it has finally shipped a version whose nickname everyone can keep on their lips all day long. From here on my articles will use JB for the version number, partly in the spirit of what Chairman Mao liked to call "strategic contempt"; please judge the mood behind the word jb from the context as I write. Today I will dig into the fairly radical changes JB 4.1 makes to the audio system. A few words up front: just as the post-80s generation often complains about being born a few years too late, plenty of code farmers are going to complain about getting into Android too late. Why? Compared with 4.0, 2.3 and 2.2, the JB audio system has become very, very much harder. In 99% of cases, if you have never seen this NB thing (no, that is not swearing: the 4.1 audio system has a class called NBAIO; basketball fans, do not misread it as NBA; it stands for Non-Blocking Audio I/O), that is, non-blocking I/O, then ask yourselves: how many people really have a deep understanding of it? Without that, there is little chance of comfortably reading a JB audio system that is built on top of such things. So my suggestion to the 99% of students who have not followed the history of audio's evolution: first study carefully (previously I only suggested you read it; now I am raising the requirement to careful study) the Audio chapters of "In-depth Understanding of Android, Volume I".

BTW, one section of that book specifically reminds everyone to study the various I/O models; I have no idea how many people paid me any attention. This article will be split into several parts and was written without a draft, so it may be a bit messy. Let's start with the Java-layer AudioTrack class.
1. AudioTrack Java class changes
- The number of channels: previously only mono (MONO) and stereo (STEREO) were supported; now this extends to a most impressive eight channels (7.1 HiFi, wow). The parameter name is CHANNEL_OUT_7POINT1_SURROUND. When I saw this parameter my jaw hit the floor with a clang. For the moment I still do not understand what it is actually for; any code farmer in the know is welcome to enlighten the rest of us. Of course, the final output is still two channels; multi-channel content (more than two channels) goes through a downmixer (down-conversion processing; students can search for the term). A naive downmix sketch follows after this list.
- Other changes exist, but they are minor. I am only picking out the eye-catching ones here. BTW, rest assured, this will not be like a certain starlet's debut that only showed you the big nostrils.
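To make the downmix idea concrete, here is a deliberately naive sketch of folding 5.1 content down to stereo. This is purely illustrative: the real path in JB goes through AudioMixer's downmix handling, and the channel order and coefficients below are my own assumptions for the example, not the platform's.

```cpp
#include <cstddef>
#include <cstdint>

// in:  interleaved 5.1 frames, assumed order L, R, C, LFE, Ls, Rs (16-bit PCM)
// out: interleaved stereo frames L, R
void downmix51ToStereo(const int16_t *in, int16_t *out, size_t frames) {
    for (size_t i = 0; i < frames; i++) {
        const int16_t *f = in + i * 6;
        // Crude fold-down: centre and surrounds at half gain, LFE dropped.
        int32_t l = f[0] + f[2] / 2 + f[4] / 2;
        int32_t r = f[1] + f[2] / 2 + f[5] / 2;
        // Clamp to the 16-bit range so loud passages do not wrap around.
        if (l > 32767) l = 32767; else if (l < -32768) l = -32768;
        if (r > 32767) r = 32767; else if (r < -32768) r = -32768;
        out[2 * i]     = (int16_t)l;
        out[2 * i + 1] = (int16_t)r;
    }
}
```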
2. AudioTrack JNI layer changes
This layer covers the JNI layer and the native AudioTrack itself.
- The JNI layer does not change much.
- The audio native core code has moved to frameworks/av. Yes, you read that right: it really is av. This is one of JB's bigger changes; the audio native core code has all been relocated to the frameworks/av directory.
- AudioTrack adds a variable that controls the scheduling priority of the process using it (to put it bluntly, it sets the nice value). When playback starts, the process scheduling priority is set to ANDROID_PRIORITY_AUDIO. That feels a bit like running into yet another mosaic. A few extra remarks here: on a single-core CPU, setting the priority like this is pointless bravado (ANDROID_PRIORITY_AUDIO is -16, a very high priority; set something that high on a single core and who knows how the other apps are supposed to get any CPU time. If you do not follow what I am saying, read this article first: http://blog.csdn.net/innost/article/details/6940136). But dual-core and quad-core are now common, so scheduling games are worth playing here. The real test for us code farmers: multi-core parallel programming and Linux OS fundamentals; go and study them, students. Audio will not let itself be abused by you so easily. Also, please do not port 4.1 to low-end phones; this really is not something low-end hardware can play with. A rough sketch of the priority boost follows this list.
- AudioTrack has been promoted to a parent class: JB inexplicably defines a TimedAudioTrack subclass for it. This class is used in the decoding side of aah_rtp (I do not know yet what AAH stands for; it appears to be short for Android@Home). From the comments, the class is an audio output interface that carries timestamps, so output can be synchronized against a timestamp. To understand it in detail you would have to analyze a concrete usage scenario (mainly the RTP side). Students working on decoding, hang in there!
- Another rather convoluted change: audio now defines several output flags (see the audio_output_flags_t enumeration in audio.h). According to the comments, this value serves two purposes. One, the user of AT can indicate what kind of output device it wants to use. Two, the device manufacturer can declare the output devices it supports (which presumably adds parameter reading and configuration work during device initialization). From the enum definition alone, however, I cannot see what it has to do with hardware. It defines the following values:
```cpp
typedef enum {
    AUDIO_OUTPUT_FLAG_NONE = 0x0,        // no attributes
    AUDIO_OUTPUT_FLAG_DIRECT = 0x1,      // this output directly connects a track
                                         // to one output stream: no software mixer
    AUDIO_OUTPUT_FLAG_PRIMARY = 0x2,     // this output is the primary output of
                                         // the device. It is unique and must be
                                         // present. It is opened by default and
                                         // receives routing, audio mode and volume
                                         // controls related to voice calls.
    AUDIO_OUTPUT_FLAG_FAST = 0x4,        // output supports "fast tracks",
                                         // defined elsewhere
    AUDIO_OUTPUT_FLAG_DEEP_BUFFER = 0x8  // use deep audio buffers
} audio_output_flags_t;
```
What is a "fast track"? From the comment alone it is really hard to tell! Currently the Java layer's AudioTrack only uses the first flag. And what is a "deep buffer"? Isn't this mosaic a bit too large? At this point it is completely unclear!
- Other AudioTrack changes are small. AudioTrack.cpp is only a bit over 1600 lines in total; easy!
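Here is the rough sketch of the priority boost mentioned above, expressed with the raw Linux setpriority() call. The real logic lives inside AudioTrack (which remembers and restores the previous value); the -16 constant matches ANDROID_PRIORITY_AUDIO, but treat the helper below as an illustration under those assumptions, not the actual framework code.

```cpp
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

static const int kAudioNice = -16;  // same value as ANDROID_PRIORITY_AUDIO

// Boost the calling thread while audio is playing, then put it back.
void playWithBoostedPriority(void (*doPlayback)()) {
    pid_t tid = (pid_t)syscall(SYS_gettid);          // current thread id
    int previous = getpriority(PRIO_PROCESS, tid);   // remember the old nice value
    setpriority(PRIO_PROCESS, tid, kAudioNice);      // raise priority (needs privilege)
    doPlayback();                                    // ... playback runs here ...
    setpriority(PRIO_PROCESS, tid, previous);        // restore afterwards
}
```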
OK, so there are several mosaics here. When watching a Japanese blockbuster you can let them slide, but when analyzing audio you cannot. Hopes of removing the mosaics are pinned on the AudioFlinger analysis next!
3. AudioFlinger changes
We will go through the changes following the main flow of AF's work:
- AF creation, including its onFirstRef function
- The openOutput function and the creation of the MixerThread object
- AudioTrack calling the createTrack function
- AudioTrack calling the start function
- AF mixing, then output
3.1 AF creation and onFirstRef
Well, not much changes here. Three points:
- The volume of the primary device is now controlled at a finer granularity. For example, some devices can have their master volume set and some cannot, so a master_volume_support enumeration (AudioFlinger.h) is defined to describe the volume-control capability of a primary device.
- The standby time used during playback (for power saving) used to be hard-coded; it can now be controlled via ro.audio.flinger_standbytime_ms, and defaults to 3 seconds if the property is not set. AF also adds a few more control variables, such as a gScreenState variable indicating whether the screen is on or off, which can be set through AudioSystem::setParameters. There is also a Bluetooth SCO related variable, mBtNrecIsOff, used during Bluetooth SCO recording to control whether the AEC and NS effects are disabled (the professional term is NREC; I do not know exactly what it is, so anyone who does, please tell the rest of us). Please refer to AudioParameter.cpp.
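For reference, here is a minimal sketch of how a key/value pair such as the screen state could be pushed down through AudioSystem::setParameters(). The exact key string and header locations are assumptions on my part; check AudioParameter.cpp and AudioFlinger's setParameters handling in your own tree.

```cpp
#include <media/AudioSystem.h>     // header paths vary a little across releases
#include <media/AudioParameter.h>

using namespace android;

// Tell AudioFlinger the screen just turned off (key name assumed here).
void notifyScreenOff() {
    AudioParameter param;
    param.add(String8("screen_state"), String8("off"));
    // An io handle of 0 means the parameters are global, not tied to one output.
    AudioSystem::setParameters(0, param.toString());
}
```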
3.2 The openOutput function
The openOutput function is key; this is where we meet old friends such as MixerThread and AudioStreamOutput. The whole flow involves loading the audio-related hardware .so libraries. Compared with 4.0 this part has not changed much, but the old friends themselves have changed a great deal. Look first at the MixerThread family. Figure 1: the PlaybackThread family. A few notes on Figure 1:
- ThreadBase derives from Thread, so it runs in its own thread (yes, I am being wordy: threads and objects really have nothing to do with each other; if you are not clear on that, please go and seriously study multithreaded programming). It defines an enumeration type_t used to indicate the subclass type, with values including MIXER, DIRECT, RECORD, DUPLICATING and so on. This should be easy enough to understand, right?
- ThreadBase's inner class TrackBase derives from ExtendedAudioBufferProvider, which should be a new addition. TrackBase is easiest to understand as a buffer container.
- ThreadBase's inner class PMDeathRecipient is used to listen for the death of PowerManagerService. This design seems a bit redundant: since PMS runs inside SystemServer, it only dies when SS dies; and when SS dies, MediaServer is also killed by the init.rc rules, so AudioFlinger dies too. Since everyone dies together, and very quickly at that, what is the point of having this PMDeathRecipient?
Now look at ThreadBase's important subclass PlaybackThread, which has had a major facelift.
- It defines an enumeration mixer_state that reflects the state of the current mixing work, with values MIXER_IDLE, MIXER_READY and MIXER_ENABLED.
- Several virtual functions are defined that subclasses must implement, including threadLoop_mix, prepareTracks_l and so on. The abstraction behind these functions is fine, but the change is big enough to be unsettling (a stripped-down sketch follows this list).
- The Track class now also derives from VolumeProvider, which is used to control volume. As mentioned earlier, volume management in JB is more fine-grained than before.
- TimedTrack is newly defined. Its role relates to the RTP AAH mentioned earlier. Once you finish this article you can go and do the corresponding research. Wipe them out!
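To make that structure easier to picture, here is a stripped-down sketch (not the AOSP code) of the template-method shape described above: the base class fixes the order of one loop iteration, and subclasses such as MixerThread fill in the virtual steps. The names follow this article's text; the real signatures and states differ in detail.

```cpp
// Sketch only: the real PlaybackThread in AudioFlinger is far richer than this.
class PlaybackThreadSketch {
public:
    virtual ~PlaybackThreadSketch() {}

    // Rough shape of one iteration of threadLoop().
    void loopOnce() {
        mixer_state state = prepareTracks_l();  // which tracks are ready to mix?
        if (state == MIXER_READY) {
            threadLoop_mix();                   // fill the mix buffer
        } else {
            threadLoop_sleepTime();             // nothing to do, decide how long to idle
        }
        threadLoop_write();                     // hand the buffer to the output sink
    }

protected:
    enum mixer_state { MIXER_IDLE, MIXER_READY, MIXER_ENABLED };  // names as in the text

    virtual mixer_state prepareTracks_l() = 0;
    virtual void threadLoop_mix() = 0;
    virtual void threadLoop_sleepTime() = 0;
    virtual void threadLoop_write() = 0;
};
```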
Next, look at Figure 2. Figure 2: MixerThread and its brethren. A brief introduction to Figure 2:
- MixerThread derives from PlaybackThread. This relationship will not change from beginning to end, and I believe it never will.
- MT's biggest changes lie in several important member variables. You all know AudioMixer, which is used for mixing.
- A new Soaker object has been added (controlled by a compile-time macro), and it is a thread. The most fitting sense of the word "soak" in Webster's dictionary (those who lived through the GRE days will know what Webster's is) is "to cause to pay an exorbitant amount". Still not clear why? Look at the code: it turns out Soaker is a thread whose full-time job is hammering the CPU. It keeps doing computations to drive CPU usage up. Its existence is presumably there to test how efficient the new AF framework is on multi-core CPUs, and so on. So once again: low-end smartphones, do not play with JB.
- More proof that low-end devices cannot play with JB: we see a new FastMixer in MT, which is also a thread. Get it? In JB, on multi-core devices, the mixing work can be handed to the FastMixer thread, which of course is faster and more efficient.
- FastMixer's workflow is rather complicated and involves multithreaded synchronization. So a FastMixerStateQueue is defined, which is simply a typedef of StateQueue<FastMixerState>. It is first of all a StateQueue (think of it simply as an array) whose elements are of type FastMixerState; a StateQueue holds four FastMixerState members through its mStates variable (a conceptual sketch of this kind of state handoff follows this list).
- FastMixerState is a bit like a state machine, with an enum Command used to control the state. FastMixerState also contains an eight-element FastTracks array; FastTrack is the helper class FastMixer uses to do its actual work.
- Each FastTrack has an mBufferProvider member, whose type is SourceAudioBufferProvider.
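The StateQueue idea is worth a quick illustration. Below is a conceptual sketch of one classic way for a control thread (MixerThread) to publish a small state struct that a real-time thread (FastMixer) picks up without ever blocking. The real StateQueue<FastMixerState> recycles a fixed pool of four slots and layers an acknowledge/wake protocol on top; this toy version simply leaks old snapshots to stay short and race-free, and all names are invented.

```cpp
#include <atomic>

// Toy single-writer / single-reader state handoff, for illustration only.
template <typename State>
class PublishedState {
public:
    PublishedState() : mCurrent(nullptr) {}

    // Writer side (MixerThread): build a fresh immutable snapshot, then publish it.
    void publish(const State &s) {
        const State *snapshot = new State(s);   // never freed in this toy version
        mCurrent.store(snapshot, std::memory_order_release);
    }

    // Reader side (FastMixer): grabs the latest snapshot, never blocks.
    const State *poll() const {
        return mCurrent.load(std::memory_order_acquire);
    }

private:
    std::atomic<const State *> mCurrent;
};
```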
The content above is already fairly complex. Next, let's look at some of the other things encountered while creating the MixerThread object:
3.3 MixerThread creation
As Figures 1 and 2 show, you should now recognize AF's several key members. Unfortunately, there is still that mOutputSink member in MixerThread above; did you spot it? It is closely related to the NBAIO (Non-Blocking Audio I/O) we mentioned earlier. NBAIO exists to enable non-blocking audio input and output operations. Here is the comment on this class:
```cpp
// This header file has the abstract interfaces only.  Concrete implementation classes are declared
// elsewhere.  Implementations _should_ be non-blocking for all methods, especially read() and
// write(), but this is not enforced.  In general, implementations do not need to be multi-thread
// safe, and any exceptions are noted in the particular implementation.
```
NBAIO only defines interfaces; concrete implementation classes are needed. It asks that the read/write functions be non-blocking, but whether the real implementation blocks or not is up to the implementation itself. Personally I feel this part of the framework is not yet fully mature, but with NBAIO introduced, students need to pay attention; relatively speaking, the difficulty here is high. Let's look at some of NBAIO's content in Figure 3. Figure 3: NBAIO-related classes. Notes on Figure 3:
- NBAIO consists of three main classes. The first is NBAIO_Port, which represents an I/O endpoint and defines a negotiate function used for parameter negotiation between the caller and the I/O endpoint. Note that this is not simply setting parameters on the I/O endpoint: because I/O endpoints tend to be tied to hardware, some hardware parameters cannot be changed at will the way software parameters can. For example, if the hardware only supports sample rates up to 44.1 kHz and the caller passes in 48 kHz, a negotiation and matching process is required. This function is rather tricky to use, mainly because the rules are numerous; students can consult its comments.
- NBAIO_Sink corresponds to the output endpoint; it defines the write and writeVia functions. writeVia takes a callback function, via, and internally calls that via function to fetch the data, rather like the push/pull distinction for data (a toy illustration of this pull style appears after the code block below).
- NBAIO_Source corresponds to the input endpoint; it defines the read and readVia functions, with meanings analogous to NBAIO_Sink.
- MonoPipe and MonoPipeReader are also defined. "Pipe" here means pipeline; MonoPipe has nothing to do with the pipe of Linux IPC, it only borrows the pipe's concept and idea. MonoPipe is a pipe that supports only a single reader (in AF, that reader is MonoPipeReader). These two pipes represent the output and input endpoints of audio.
- In MT, mOutputSink points to an AudioStreamOutSink, which derives from NBAIO_Sink and is used for normal mixer output. mPipeSink points to a MonoPipe, which is intended for FastMixer. In addition, there is a variable mNormalSink that will point to either mPipeSink or mOutputSink depending on the FastMixer situation. The logic controlling this is as follows:
```cpp
// kUseFastMixer has four possible values:
switch (kUseFastMixer) {
case FastMixer_Never:    // never use FastMixer; a debugging option, i.e. FastMixer disabled
case FastMixer_Dynamic:  // use it dynamically as needed; per the comments, not fully implemented yet
    mNormalSink = mOutputSink;
    break;
case FastMixer_Always:   // always use FastMixer, for debugging
    mNormalSink = mPipeSink;
    break;
case FastMixer_Static:   // static; this is the default, but whether mPipeSink is
                         // actually used is further controlled by initFastMixer
    mNormalSink = initFastMixer ? mPipeSink : mOutputSink;
    break;
}
```
As the code shows, kUseFastMixer defaults to FastMixer_Static, but whether mNormalSink points to mPipeSink is further controlled by initFastMixer. That variable in turn is decided by mFrameCount and mNormalFrameCount: only when mFrameCount is smaller than mNormalFrameCount is initFastMixer true. Dizzy yet... These two frame counts are obtained in PlaybackThread's readOutputParameters. Please study that code yourselves; it is just some simple arithmetic, but if you want to get it straight it is best to plug in real parameter values and work out the numbers. Well, that is it for MixerThread creation; it is worth spending more time on this code to figure out what those several brother objects are for.
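One more illustration before moving on: the writeVia idea mentioned for NBAIO_Sink above. In the push style the caller hands over a filled buffer; in the via (pull) style the sink calls back into the producer, which deposits data straight into the sink's own storage. The class and signatures below are invented for illustration only; the real interfaces are in NBAIO.h.

```cpp
#include <sys/types.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Callback the sink uses to pull frames from the producer.
typedef ssize_t (*via_t)(void *user, void *buffer, size_t frames);

class ToySink {
public:
    // Pull style: ask 'via' to deposit up to 'block' frames at a time directly
    // into the sink's storage, avoiding an intermediate copy.
    ssize_t writeVia(via_t via, size_t totalFrames, void *user, size_t block) {
        size_t done = 0;
        while (done < totalFrames) {
            size_t want = totalFrames - done;
            if (block != 0 && want > block) want = block;
            size_t offset = mSamples.size();
            mSamples.resize(offset + want * kChannels);          // reserve space
            ssize_t got = via(user, &mSamples[offset], want);    // producer fills it
            if (got <= 0) { mSamples.resize(offset); break; }
            mSamples.resize(offset + (size_t)got * kChannels);   // commit what arrived
            done += (size_t)got;
        }
        return (ssize_t)done;
    }

private:
    static const size_t kChannels = 2;   // this toy assumes stereo 16-bit frames
    std::vector<int16_t> mSamples;       // stands in for the sink's internal buffer
};
```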
3.4 createTrack and start notes
The biggest change in createTrack is the new handling of the MediaSyncEvent synchronization mechanism. The purpose of MediaSyncEvent is simple; its Java API documentation explains it as follows: "startRecording(MediaSyncEvent) is used to start capture only when the playback on a particular audio session is complete. The audio session ID is retrieved from a player (e.g. MediaPlayer, AudioTrack or ToneGenerator) by use of the getAudioSessionId() method." Simply put, you must wait until the previous player has finished before starting the next playback or recording. This mechanism addresses Android's long-standing problem of sounds frequently getting mixed together (an ugly but effective workaround was to add a sleep to stagger the out-of-sync players). Note that this problem does not exist on the iPhone. Another potential benefit of this mechanism is that it frees the students doing AudioPolicy and audio routing work from agonizing over where to add a sleep and for how long (personally I feel it can solve that problem). In AF, the MediaSyncEvent mechanism is represented by SyncEvent; go and look at it yourselves. The start function does not change much; it adds handling for SyncEvent. In addition, createTrack also involves FastMixer and TimedTrack handling. The core is in PlaybackThread's createTrack_l and the Track constructor, especially the relationship with FastMixer. Per Figure 2, FM's (FastMixer for short) internal data structure is FastTrack, while MT uses Track, so there is a one-to-one correspondence between them. FM's FastTracks are kept in an array, so a Track that uses FM points to its FastTrack via mFastIndex. For now, get the relationship between FastTrack and Track clear; how the data subsequently flows needs the discussion of MixerThread's workflow below. That part is the highlight!
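To make the idea concrete, here is a purely conceptual sketch, with invented names, of what a SyncEvent-style registry boils down to: the side that must wait (say, a recording armed with a MediaSyncEvent) registers a callback keyed by session, and the playback side fires it once the last track on that session completes. The real mechanism is the SyncEvent handling inside AF mentioned above, which of course also has to deal with locking and error cases.

```cpp
#include <functional>
#include <map>
#include <vector>

// Invented names, conceptual only.
class ToySyncRegistry {
public:
    using Callback = std::function<void()>;

    // Called by the side that has to wait (e.g. a recording armed with a sync event).
    void waitForSessionEnd(int sessionId, Callback onComplete) {
        mPending[sessionId].push_back(std::move(onComplete));
    }

    // Called by the playback side when the last track of that session finishes.
    void sessionEnded(int sessionId) {
        auto it = mPending.find(sessionId);
        if (it == mPending.end()) return;
        for (Callback &cb : it->second) cb();   // release the waiters
        mPending.erase(it);
    }

private:
    std::map<int, std::vector<Callback>> mPending;
};
```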
3.5 MixerThread workflow
The difficulty in this part is still the working principle of FastMixer.
But let me say up front: this feature is not finished yet, and the code is littered with FIXMEs... Don't celebrate too early, comrades; a next version with it done properly is surely coming soon. Looking at this immature thing now will ease the psychological pressure of looking at the mature thing later. MT is a thread whose work is mainly done in threadLoop, a function defined by its base class PlaybackThread. Broadly, the changes are as follows:
- PlaybackThread's threadLoop defines the overall flow of audio processing, with the details delegated to subclasses through several virtual functions (such as prepareTracks_l, threadLoop_mix, threadLoop_write).
- MT's first big change is in prepareTracks_l. The first thing handled is the fast-mix type of track; the criterion is whether the track has the TRACK_FAST flag set (amusingly, nothing in JB currently sets this flag). This part of the logic is fairly complicated. First, FastMixer maintains a state machine; also, FastMixer runs in its own thread, so thread synchronization is a must. The state is used here to control the FastMixer workflow. Because multiple threads are involved, audio underrun and overrun states (don't know what those are? see the reference book mentioned earlier!) are also thorny issues that have to be handled. In addition, MT owns an AudioMixer object, which does the mixing, down-conversion and other super-difficult digital audio processing work. In other words, for mixing, the preparatory work is still done by the MT thread, since that allows unified management (some tracks do not need FastMixer at all. But think about it carefully: everyone wants processing to be as fast as possible; on a multi-core CPU, spreading the mixing work across multiple threads is the best use of CPU resources. That should be the direction of future Android evolution, which is why I reckon this part of JB has not fully grown up yet...). Those interested in FastMixer should definitely study the prepareTracks_l function carefully.
- MT's next important function is threadLoop_mix. Because there is now a TimedTrack class, AudioMixer's process function takes a timestamp, the PTS (presentation timestamp). From the codec point of view there is also a DTS (decode timestamp). The difference between PTS and DTS: DTS is the decoding time, but when encoding, the current frame may be encoded with reference to future frames. So the decoder will decode a future frame first and only then the current frame, yet for playback you cannot present that future frame first. You can only honestly present the current frame in presentation order and then the future frame (even though the future frame was decoded first). For PTS/DTS, study the relevant knowledge of I/B/P frames (a tiny worked example follows this list). Back to MT: this PTS is taken from the hardware HAL object, so it should be a timestamp maintained inside the HAL, which in principle is more accurate.
- After mixing, effects are applied (similar to previous versions), and then threadLoop_write is called. The output endpoint of MT's threadLoop_write is that pesky mNormalSink from before; if it is set, its write function is called. Think of it as calling NBAIO_Sink's non-blocking write function. Per the analysis of Figure 2, this could be either the MonoPipe or the AudioStreamOutputSink; the latter wraps what used to be AudioStreamOutput. And MonoPipe's write only writes into an internal buffer, with no connection to the real Audio HAL output. So... how does anything get played at all?? (Bold hypothesis, careful verification: it must be FastMixer that takes the data out of this buffer and then writes it to the real Audio HAL. After all, in the MixerThread constructor, mOutputSink was saved for FastTrack use, and that is what is used to reach AudioStreamOutput.)
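Here is the tiny worked example promised above: with B-frames, decode order and presentation order diverge, so each frame carries both a DTS and a PTS (the numbers are illustrative).

```
Presentation (display) order:  I1  B2  B3  P4
Decode order:                  I1  P4  B2  B3   (P4 must be decoded before B2/B3 can reference it)

  frame   DTS   PTS
  I1       1     1
  P4       2     4
  B2       3     2
  B3       4     3
```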
In addition, DuplicatingThread and DirectOutputThread have not changed much.
4. A brief explanation of how FastMixer works
I used to think the mixing was done jointly by the FastMixer thread and the MixerThread thread, with the output still handled by MixerThread. Judging from the MonoPipe analysis above, that judgment may be wrong. It is possible that the output is also handed to FastMixer, while MixerThread only does part of the mixing and then passes its data through the MonoPipe to the FastMixer thread. The FastMixer thread mixes the result of its own FastTracks with MT's mix, and then FastMixer does the output. FM is defined in FastMixer.cpp, and its core is a threadLoop. Since the management of all of AF's tracks is done by the MT thread, FM's threadLoop basically just acts on the state it receives. The synchronization here uses the very low-level futex (Fast Userspace muTEX) from Linux. Whoa, futex is the foundation on which POSIX mutexes are implemented. Couldn't whoever wrote this have just used a mutex directly? (Presumably it is the efficiency concern again, but come on, how much worse can a mutex really be? Code is written for people to read; this is just showing off at our expense...) Playing with multithreading at this level, respect! If you do not understand multithreaded programming, please go and carefully study POSIX multithreaded programming (a minimal futex sketch follows this list).
- Inside FastMixer an AudioMixer is also used for its own mixing
- Then the result is written out...
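Since futex came up, here is a minimal sketch of the raw wait/wake pattern it provides. This is generic Linux usage, not the AOSP code, which wraps it with additional state (and uses the _PRIVATE variants).

```cpp
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>

// Sleep only while *addr still equals 'expected'; wakes when another thread
// changes the value and calls futexWake.
static long futexWait(volatile int32_t *addr, int32_t expected) {
    return syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

// Wake up to 'count' threads currently sleeping in futexWait on 'addr'.
static long futexWake(volatile int32_t *addr, int count) {
    return syscall(SYS_futex, addr, FUTEX_WAKE, count, NULL, NULL, 0);
}
```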
That is only a rough description of FM. As for the details, without a real 4.1 device in hand I cannot finish the job... Any generous brother willing to flash a 4.1 ROM and lend me the device to study is most welcome... (Personally I do not think this stuff is too hard; nothing can withstand sustained pondering, it will be cracked eventually.) For today, it is enough that we know the general workflow of FM and MT.
5. Other changes
Other changes include:
- A big emphasis on debugging: lots of XXXDump classes have been added. It seems Google ran into plenty of problems in its own development; if a feature were simple, who would bother writing a dump for it? (A sketch of the usual dump idiom follows this list.)
- An AudioWatchdog class has been added to monitor AF's performance, such as CPU usage.
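For those XXXDump helpers, the shape is usually the standard Android dump idiom: an object inside the service writes its state to the file descriptor supplied by dumpsys. The sketch below shows that generic pattern with made-up fields; it is not one of the actual JB classes.

```cpp
#include <stdint.h>
#include <unistd.h>
#include <utils/Errors.h>
#include <utils/String8.h>
#include <utils/String16.h>
#include <utils/Vector.h>

using namespace android;

// Generic dump(fd) pattern; the fields are invented for illustration.
class MixStatsDump {
public:
    status_t dump(int fd, const Vector<String16>& /*args*/) const {
        String8 out;
        out.appendFormat("underruns: %u\n", mUnderruns);
        out.appendFormat("frames written: %llu\n", (unsigned long long)mFramesWritten);
        write(fd, out.string(), out.size());   // dumpsys shows whatever we write here
        return NO_ERROR;
    }

private:
    uint32_t mUnderruns = 0;
    uint64_t mFramesWritten = 0;
};
```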
6. Summary
I remember when I studied AF in 2.2, AudioFlinger was only a bit over 3,000 lines; the JB version is already over 9,000 lines, and that is not counting the other helper classes. Overall, the trend of the JB changes is:
- Making full use of multi-core resources: the appearance of FastMixer is inevitable, as is the NBAIO interface. This feels like a big challenge for HAL writers.
- The addition of TimedTrack and SyncEvent will give a better user experience for synchronization in RTP scenarios or between multiple players.
- New notification interfaces from the native layer up to the Java layer have been added.