Android 4.1 Audio system change description

Android 4.1 is abbreviated JB (Jelly Bean). To Chinese eyes, "JB" also happens to be a crude slang word, and after revising Android so many times Google has finally shipped a version whose name we can keep on our lips all day long. From now on my articles will use JB for the version number; whether a given "JB" means the version or the thing Chairman Mao called "strategic contempt", please judge from context. Today we take a quick look at the earth-shaking changes JB has made to the Audio system. A few words first: just as those of us born in the 80s used to complain about being born a few years too late, many coders will immediately complain that they came to Android too late. Why? The JB Audio system is harder than 4.0, 2.3 and 2.2. With 99% certainty, you have never seen this NB thing before (that is not a swear word: the 4.1 Audio system has a class called NBAIO. Don't mistake it for basketball; the original intent is Non-Blocking Audio I/O. See? Non-blocking I/O. Ask yourself honestly how many people have a deep understanding of that). The JB Audio system is unlikely to be understood except through the evolution of the code, so students who have never followed the evolution of Audio are advised to carefully study the Audio system chapter of "In-depth Understanding of Android, Volume I" (in the past I only suggested taking a look; the requirement is now upgraded to careful study). BTW, in one chapter that book reminds everyone to study the various I/O models; I don't know how many people paid attention.

This article will come in several parts. There was no draft beforehand, so it may be a bit messy. Let's start with the Java-layer AudioTrack class.

I. AudioTrack Java class change description
  • In terms of channel count, only MONO and STEREO were available in the past; output is now extended to a full eight channels (7.1 HiFi, ah!). The parameter name is CHANNEL_OUT_7POINT1_SURROUND. When I first saw this parameter my jaw dropped, and for a good while I could not figure out what it meant. Let me share what I do know: the final output is still two channels, and when more than two channels are used, a downmixer handles the down-conversion (students can search for "downmix" for details; a toy sketch of the idea follows this list).
  • There are other changes, but none of them big. I will just pick the eye-catching ones here. BTW, rest assured: I won't spring any ugly surprises on you the way Rola Takizawa's debut sprang her nostrils on her fans.
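Since the fold-down idea comes up again later, here is a toy, self-contained C++ sketch of what a downmixer does conceptually. This is not AF's actual downmixer (which is a proper effect with per-channel gain tables); the equal-gain mapping below is purely illustrative:

    #include <cstddef>
    #include <vector>

    // Toy downmix: fold N interleaved input channels into 2 output channels.
    // Even-numbered source channels feed left, odd-numbered feed right, each
    // scaled by a naive equal gain so the sum cannot clip badly.
    static std::vector<float> downmixToStereo(const std::vector<float>& in,
                                              size_t channels) {
        std::vector<float> out;
        out.reserve(in.size() / channels * 2);
        const float gain = 1.0f / channels;  // naive equal-gain fold-down
        for (size_t i = 0; i + channels <= in.size(); i += channels) {
            float l = 0.0f, r = 0.0f;
            for (size_t c = 0; c < channels; ++c) {
                (c % 2 == 0 ? l : r) += in[i + c] * gain;
            }
            out.push_back(l);
            out.push_back(r);
        }
        return out;
    }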
II. AudioTrack JNI layer change description
This layer includes the JNI layer and AudioTrack itself.
  • The JNI layer does not change much.
  • The core Audio native code has moved to frameworks/av. Yes, you read that right: it really is av. This is a big change in JB Audio; all the core Audio native code now lives under the frameworks/av directory.
  • AudioTrack adds a variable to control the scheduling priority of the process using it (what actually gets set is the nice value; my earlier article was wrong about this). If playback is in progress, the process's scheduling priority is set to ANDROID_PRIORITY_AUDIO. Let me say a few words here. On a single-core CPU it is silly to set a priority like this (ANDROID_PRIORITY_AUDIO is -16, an extremely high priority; set a monster that high on a single core and who knows how other apps are supposed to run. If you don't know what I'm talking about, read this first: http://blog.csdn.net/innost/article/details/6940136). But dual-core and quad-core are quite common now, so scheduling becomes something we can really play with. The true test for us diaosi coders: multi-core parallel programming and Linux OS principles are things you must master. Audio is no longer so easy to abuse. Also, low-end phones please don't port 4.1; this really is not something low-end hardware can play with. (A minimal sketch of what the priority setting boils down to follows this list.)
  • AudioTrack has been promoted to fatherhood: JB defines a puzzling TimedAudioTrack subclass for it. This class is used in the aah_rtp codec directory (I don't know what aah stands for). Judging from the comments, it is an audio output interface that carries timestamps (and with timestamps, you can synchronize). A detailed understanding will require analyzing the concrete usage scenarios (mainly RTP). Students working on codecs should hurry up and study it!
  • Another rather opaque change: Audio defines several output flags (see the audio_output_flags_t enumeration in audio.h). According to the comments, this value serves two purposes: an AudioTrack user can specify what kind of output it wants, and a device manufacturer can declare the outputs it supports (apparently read from configuration during device initialization). From the enumeration's definition alone, however, I still cannot see its relationship to hardware. It defines the following values:
    typedef enum {
        AUDIO_OUTPUT_FLAG_NONE = 0x0,        // no attributes
        AUDIO_OUTPUT_FLAG_DIRECT = 0x1,      // this output directly connects a track
                                             // to one output stream: no software mixer
        AUDIO_OUTPUT_FLAG_PRIMARY = 0x2,     // this output is the primary output of
                                             // the device. It is unique and must be
                                             // present. It is opened by default and
                                             // receives routing, audio mode and volume
                                             // controls related to voice calls.
        AUDIO_OUTPUT_FLAG_FAST = 0x4,        // output supports "fast tracks"
                                             // <-- what is a fast track? Hard to say yet!
                                             // The Java-layer AudioTrack currently only
                                             // uses the first flag.
        // defined elsewhere
        AUDIO_OUTPUT_FLAG_DEEP_BUFFER = 0x8  // use deep audio buffers
                                             // <-- what is a deep buffer? This mosaic is
                                             // too big; I can't see it clearly yet??!
    } audio_output_flags_t;
  • Other changes in AudioTrack are not significant. AudioTrack.cpp is only a bit over 1600 lines in total. So easy!
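About the priority point above: here is a minimal sketch, assuming a Linux/Android environment, of what setting ANDROID_PRIORITY_AUDIO boils down to. The framework goes through its own helper rather than calling this directly, and the raw call below normally needs elevated privileges; it is for illustration only:

    #include <sys/resource.h>  // setpriority()
    #include <cstdio>

    int main() {
        const int ANDROID_PRIORITY_AUDIO = -16;  // the value quoted above
        // On Linux, PRIO_PROCESS with who == 0 targets the caller
        if (setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_AUDIO) != 0) {
            std::perror("setpriority");  // typically EACCES without privilege
            return 1;
        }
        std::printf("nice value set to %d\n", ANDROID_PRIORITY_AUDIO);
        return 0;
    }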
Okay, there were several mosaics above. Mosaics are fine when watching Japanese films, but they are no good for analyzing Audio. Let's pin our hopes for de-mosaicing on the analysis of AudioFlinger!

III. AudioFlinger change description
We will introduce the changes following AF's main workflow:
  • AF creation, including its onFirstRef function
  • The openOutput function and the creation of the MixerThread object
  • AudioTrack calling the createTrack function
  • AudioTrack calling the start function
  • AF mixing and output
3.1 AF creation and onFirstRef
Well, no big changes here. A few points:
  • Volume control of the Primary device is now more fine-grained. For example, some devices can have their master volume set and some cannot, so a master_volume_support enumeration (AudioFlinger.h) is defined to describe the Primary device's volume-control capability.
  • The standby time used during playback (for power saving) was previously hard-coded; it can now be controlled through the property ro.audio.flinger_standbytime_ms. If the property is absent, the default is 3 seconds, as sketched below. AF also adds other control variables, for example a gScreenState variable indicating whether the screen is on or off, which can be changed through AudioSystem::setParameters. A variable mBtNrecIsOff related to Bluetooth SCO is also defined; it controls disabling the AEC and NS effects when a Bluetooth SCO headset is used for recording (NREC is the specialized Bluetooth term here; ask me if you don't know it). See AudioParameter.cpp.
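For the standby-time property above, here is a minimal sketch, assuming the cutils property API, of how such a tunable is typically read, with the 3-second fallback just mentioned:

    #include <cutils/properties.h>  // property_get(), PROPERTY_VALUE_MAX
    #include <stdlib.h>             // atoi()

    // Read ro.audio.flinger_standbytime_ms, falling back to 3000 ms when the
    // property is unset. The real AF code does the equivalent at startup.
    static int getStandbyTimeMs() {
        char value[PROPERTY_VALUE_MAX];
        if (property_get("ro.audio.flinger_standbytime_ms", value, NULL) > 0) {
            return atoi(value);  // device/vendor supplied value
        }
        return 3000;             // default: 3 seconds
    }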
3.2 The openOutput function
The openOutput function is critical; it is where the old friends MixerThread and AudioStreamOutput appear. The whole flow also includes loading the audio hardware HAL library (.so). That part of the work appeared back in 4.0 and has not changed much. The old friends, however, have changed dramatically. Let's first look at the MixerThread family.

Figure 1 The PlaybackThread family

A few notes on Figure 1:
  • ThreadBase is derived from Thread, so it runs in its own thread (programmers who don't understand multi-threaded programming must study it carefully). It defines an enumeration type_t to represent the type of each subclass, including MIXER, DIRECT, RECORD, DUPLICATING and so on. That should be easy to understand.
  • ThreadBase's inner class TrackBase is derived from ExtendedAudioBufferProvider, which appears to be new. You can think of TrackBase as a buffer container.
  • ThreadBase's inner class PMDeathRecipient listens for death notifications from PowerManagerService. This design is a bit odd, because PMS runs inside system_server: PMS only dies if SS crashes, and when SS crashes, mediaserver is killed by the init.rc rules, which takes AudioFlinger down with it. Since everyone dies together, and quickly, what is the point of this PMDeathRecipient? (A sketch of the underlying death-notification pattern follows this list.)
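As promised, a sketch of the binder death-notification pattern that PMDeathRecipient follows. This is a simplified illustration assuming the AOSP binder headers, not the real class (which lives inside ThreadBase and clears the cached IPowerManager when notified):

    #include <binder/IBinder.h>
    #include <utils/RefBase.h>
    #include <utils/Log.h>

    using namespace android;

    // Called back by the binder driver when the remote process hosting the
    // watched service (here, PowerManagerService in system_server) dies.
    class MyDeathRecipient : public IBinder::DeathRecipient {
        virtual void binderDied(const wp<IBinder>& who) {
            ALOGW("remote service died; dropping cached binder");
            // ...clear the cached interface so it is re-fetched on next use...
        }
    };

    // Registration side, given some remote service binder 'pm':
    //   sp<MyDeathRecipient> recipient = new MyDeathRecipient();
    //   if (pm != 0) pm->linkToDeath(recipient);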
Now let's look at PlaybackThread, an important subclass of ThreadBase. This class is a big one.
  • It defines an enumeration mixer_state to reflect the current mixing state; values include MIXER_IDLE, MIXER_READY and MIXER_ENABLED.
  • Several virtual functions are defined that subclasses must implement, including threadLoop_mix and prepareTracks_l. The abstraction work behind these functions is decent, but the sweeping changes elsewhere are hard to guard against.
  • The Track class now also derives from VolumeProvider, which is used for volume control. As introduced above, volume management in JB is more fine-grained than before.
  • TimedTrack is added. Its role relates to the rtp/aah business mentioned earlier. After finishing this article, students can charge straight into the relevant research!
See Figure 2.

Figure 2 MixerThread and its relatives

Brief notes on Figure 2:
  • MixerThread is derived from PlaybackThread. That relationship has never changed, and I believe it won't in the future.
  • The biggest changes in MT are several important member variables. You certainly know AudioMixer, which does the mixing.
  • A Soaker object is added (controlled by a compile-time macro); it is a thread. For the word "soak", the most fitting sense in Webster's dictionary (anyone who survived the GRE years knows what Webster's is) is "to cause to pay an exorbitant amount". Why? Look at the code: the soaker turns out to be a thread dedicated to hammering the CPU, constantly doing computation to drive CPU usage up. Its existence is presumably to test the efficiency of the new AF framework on multi-core CPUs. So, again: stop trying to run JB on low-end smartphones.
  • Further proof that low-end machines can't play JB: MT now contains a FastMixer, which is also a thread. Get it? In JB, on a multi-core machine, mixing work can also be done in the FastMixer thread, which is of course faster and more efficient.
  • The FastMixer workflow is complicated and involves multi-thread synchronization, so a FastMixerStateQueue is defined, obtained by typedef StateQueue<FastMixerState>. It is first of all a StateQueue (think of it simply as an array); its element type is FastMixerState. One StateQueue holds four FastMixerState members in its mStates array. (A toy sketch of the state-queue idea follows this list.)
  • FastMixerState resembles a state machine, with an enum Command controlling the state. FastMixerState contains an eight-element FastTracks array; FastTrack is the functional unit FastMixer uses for each track.
  • Each FastTrack has an mBufferProvider member, of type SourceAudioBufferProvider.
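Here is the toy sketch of the state-queue idea promised above. None of this is the real StateQueue code; it is a self-contained illustration of the single-writer/single-reader, non-blocking publish/poll pattern:

    #include <atomic>
    #include <cstdint>

    struct FastMixerStateToy {
        uint32_t command;     // stand-in for FastMixerState's Command enum
        uint32_t generation;  // bumped on every push
    };

    // Single writer prepares the "next" state in a private slot, then
    // publishes it atomically; a single reader polls the latest published
    // state. Neither side ever blocks the other.
    template <typename T>
    class StateQueueToy {
    public:
        static const int kN = 4;  // the real StateQueue also keeps 4 states
        StateQueueToy() : mNext(0), mCurrent(&mStates[0]) {}
        // Writer side: obtain a private slot, fill it in, then push().
        T* begin() { mNext = (mNext + 1) % kN; return &mStates[mNext]; }
        void push(T* state) { mCurrent.store(state, std::memory_order_release); }
        // Reader side: non-blocking poll of the latest state.
        const T* poll() const { return mCurrent.load(std::memory_order_acquire); }
    private:
        // NOTE: the real StateQueue also tracks which slot the reader last
        // observed so an in-use slot is never recycled; that bookkeeping is
        // omitted from this toy.
        T mStates[kN];
        int mNext;
        std::atomic<T*> mCurrent;
    };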
The above content is already complex enough, but there is more. Next let's introduce what else is encountered during the creation of the MixerThread object.

3.3 Creating the MixerThread
Through Figures 1 and 2 we now have a picture of AF's main members. Unfortunately, MixerThread also carries an mOutputSink member. Didn't spot it? It is closely related to the NBAIO (Non-Blocking Audio I/O) mentioned earlier. NBAIO exists to enable non-blocking audio input and output operations. Here is the class's own annotation:

    // This header file has the abstract interfaces only. Concrete implementation classes are declared
    // elsewhere. Implementations _should_ be non-blocking for all methods, especially read() and
    // write(), but this is not enforced. In general, implementations do not need to be multi-thread
    // safe, and any exceptions are noted in the particular implementation.

So NBAIO only defines interfaces, which concrete classes must implement. It asks that read/write be non-blocking, but whether an implementation actually blocks is left to the implementer. I personally feel this part of the framework is not yet fully mature, but the introduction of NBAIO deserves attention, and it is relatively difficult. Figure 3 shows some NBAIO content.

Figure 3 NBAIO-related content

Figure 3 is explained as follows:
  • NBAIO consists of three main classes. The first is NBAIO_Port, which represents an I/O endpoint. It defines a negotiate function for parameter negotiation between the caller and the endpoint. Note that this is negotiation, not plain parameter setting: I/O endpoints are often tied to hardware, and hardware parameters cannot be changed at will the way software ones can. For example, if the hardware supports at most a 44.1 kHz sampling rate while the caller asks for 48 kHz, a negotiation and matching process is required. This function is not easy to use, mainly because the rules are numerous; students should read the comments in the code.
  • NBAIO_Sink corresponds to the output endpoint and defines write and writeVia functions. writeVia takes a callback function via, which the sink calls internally to fetch data. The pair is analogous to the two data-transfer modes, push and pull (see the toy sketch after this list).
  • NBAIO_Source corresponds to the input endpoint and defines read and readVia functions, with the same meaning as in NBAIO_Sink.
  • MonoPipe and MonoPipeReader are defined. Pipe here means a pipe, but it has nothing to do with Linux IPC pipes; it only borrows the pipe concept and idea. MonoPipe is a pipe that supports only a single reader (in AF, that is MonoPipeReader). These two represent Audio output and input endpoints respectively.
  • In MT, mOutputSink points to an AudioStreamOutSink, which derives from NBAIO_Sink and is used for normal mixer output. mPipeSink points to a MonoPipe and is intended for FastMixer. There is also a variable mNormalSink, which points to either mPipeSink or mOutputSink depending on how FastMixer is used.
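Before looking at how mNormalSink is chosen, here is the toy sketch of the push/pull distinction promised above. None of these names come from the real NBAIO headers; they are mine, for illustration only (the real interfaces count frames rather than bytes, and negotiate() is also in play):

    #include <sys/types.h>  // ssize_t
    #include <cstddef>

    typedef ssize_t (*writeVia_t)(void* user, void* buffer, size_t count);

    class ToySink {
    public:
        // Push model: the caller hands over a ready buffer.
        ssize_t write(const void* buffer, size_t count) {
            // ...copy 'count' bytes into the device/pipe...
            return (ssize_t)count;  // may accept less: non-blocking
        }
        // Pull model: the sink calls 'via' to fetch data whenever it has room.
        ssize_t writeVia(writeVia_t via, void* user, size_t total) {
            char scratch[256];
            size_t done = 0;
            while (done < total) {
                size_t chunk = total - done;
                if (chunk > sizeof(scratch)) chunk = sizeof(scratch);
                ssize_t got = via(user, scratch, chunk);  // pull from the caller
                if (got <= 0) break;                      // nothing more right now
                write(scratch, (size_t)got);              // then push downstream
                done += (size_t)got;
            }
            return (ssize_t)done;
        }
    };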
The logic that selects mNormalSink is as follows:

    switch (kUseFastMixer) {  // kUseFastMixer controls how FastMixer is used; four cases:
    case FastMixer_Never:     // never use FastMixer. This option is for debugging,
                              // i.e. for ruling FastMixer out when chasing problems
    case FastMixer_Dynamic:   // decide dynamically at run time. Per the comments,
                              // this mode is not fully implemented yet
        mNormalSink = mOutputSink;
        break;
    case FastMixer_Always:    // always use FastMixer
        mNormalSink = mPipeSink;
        break;
    case FastMixer_Static:    // static (the default). Whether mPipeSink is actually
                              // used is further controlled by initFastMixer
        mNormalSink = initFastMixer ? mPipeSink : mOutputSink;
        break;
    }

As the comments say, kUseFastMixer defaults to FastMixer_Static, but whether mNormalSink points to mPipeSink is still governed by initFastMixer. That variable is determined by the sizes of mFrameCount and mNormalFrameCount: initFastMixer is true only when mFrameCount is smaller than mNormalFrameCount. Dizzy... both frame counts are computed in PlaybackThread's readOutputParameters. Students should study that code themselves; it is simple arithmetic, but to really understand it you had best plug in actual parameters and work out the values. OK, the creation of MixerThread has been analyzed up to here. Study this code well, and know what each of these sibling objects is for....

3.4 createTrack and start
The biggest change in createTrack is the addition of the MediaSyncEvent synchronization mechanism. The purpose of MediaSyncEvent is simple; its Java API explains it as follows:

    startRecording(MediaSyncEvent) is used to start capture only when the playback on a particular
    audio session is complete. The audio session ID is retrieved from a player (e.g. MediaPlayer,
    AudioTrack or ToneGenerator) by use of the getAudioSessionId() method.

Simply put: the previous player must finish its work before the next playback or recording can start. This mechanism addresses the long-standing Android problem of sounds stepping on each other (the current disgusting but effective workaround is to add a sleep to stagger unsynchronized players). Note that this problem does not exist on the iPhone. A potential side benefit of this mechanism is the liberation of the students working on AudioPolicy and audio routing: it seems (I personally believe) it can solve the problem without anyone having to tune sleep durations again. In AF, the MediaSyncEvent mechanism is represented by SyncEvent. Take a look at it yourselves.

The start function does not change much; handling of SyncEvent is added to it.

Beyond that, createTrack also involves FastMixer and TimedTrack processing. The core is in PlaybackThread's createTrack_l and the Track constructor, especially the relationship with FastMixer. According to Figure 2, FM (short for FastMixer) internally uses the FastTrack data structure, while MT uses Track, so the two need a one-to-one correspondence. FM keeps its FastTracks in an array, so a Track that uses FM points to its FastTrack via mFastIndex. With the relationship between FastTrack and Track straightened out, the remaining data-flow details will be discussed along with MixerThread's workflow. That part is the most important!

3.5 MixerThread workflow
The difficulty lies in how FastMixer works.
I can tell you in advance, though, that this feature is not finished yet: the code is littered with FIXMEs.... But don't gloat too soon; it will probably be in decent shape by the very next release. For now, studying it while it is immature relieves the psychological pressure of facing it once it matures. MT is a thread whose work is mainly done in threadLoop, which is defined by its base class PlaybackThread. The general changes are as follows:
  • PlaybackThread's threadLoop defines the general flow of audio processing; the details are implemented by subclasses through several virtual functions (such as prepareTracks_l, threadLoop_mix and threadLoop_write).
  • MT's first major change is prepareTracks_l. The first step handles FastMix-type tracks by checking whether a track has the TRACK_FAST flag set (currently this flag is not used by the Java-layer AudioTrack). This judgment logic is complicated: FastMixer maintains a state machine, and since FastMixer runs in its own thread, thread synchronization is required; a state is used here to control FastMixer's workflow. Because of the multithreading, the audio underrun and overrun states (don't know what those are? See the reference book mentioned above!) are also a thorny issue to handle. In addition, an MT object owns an AudioMixer object, which does the actual mixing, down-conversion and other genuinely hard digital-audio-processing work. In other words, for mixing, the early prepare work is still done in the MT thread, because that allows unified management (some tracks don't need FastMixer at all. But think about it: everyone wants things processed faster and better, and on a multi-core CPU handing mixing work to multiple threads is the natural way to exploit CPU resources; this should be the direction of Android's future evolution. So, as I guessed, this JB hasn't fully grown up yet....). If you are interested in FastMixer, you must study the prepareTracks_l function carefully.
  • MT's next important function is threadLoop_mix. Because the TimedTrack class exists, AudioMixer's process function now carries a timestamp, the PTS (presentation timestamp). From the codec world there is also a DTS (decode timestamp), and the difference is worth a digression: DTS is the decoding time, but a frame may be encoded with reference to a future frame, so the decoder may have to parse and decode that future frame first and only then decode the current one. At playback time you cannot show future frames early: frames are presented strictly in playing order, with the future frame shown later (even though it was decoded first). For PTS/DTS, study I/B/P-frame material; a small worked example follows this list. Back in MT: this PTS is obtained from the hardware HAL object, so it should be a timestamp maintained by HAL and, in principle, accurate.
  • After mixing, the effects are processed (similar to the previous version) and then threadLoop_write is called. The output endpoint of MT's threadLoop_write is the mysterious mNormalSink; if it is not empty, its write function is called, which means calling an NBAIO_Sink non-blocking write. Per the analysis of Figure 2, that is either the MonoPipe or the AudioStreamOutSink (the latter wraps the old AudioStreamOutput). But MonoPipe's write only fills an internal buffer, with no connection to the actual audio HAL output. What then?? (Bold hypothesis, careful verification: the buffer must be drained by FastMixer and then written to the real audio HAL. After all, in the MixerThread constructor, mOutputSink, which connects to AudioStreamOutput, was saved for FastTrack's use.)
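A small worked example, as promised, to make PTS/DTS concrete. Suppose the display order of a group of frames is I1 B2 B3 P4. B2 and B3 reference both I1 and the future frame P4, so P4 must be decoded before them:

    Display (PTS) order: I1 B2 B3 P4
    Decode  (DTS) order: I1 P4 B2 B3    // P4 decoded early, presented late

Playback still presents frames in PTS order; the decoder simply holds the already-decoded P4 until its presentation time arrives.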
In addition, DuplicatingThread and DirectOutputThread have not changed much.

IV. A brief description of how FastMixer works
I previously assumed the mixing work was shared between the FastMixer thread and the MixerThread thread, with the output still done by MixerThread. From the MonoPipe analysis above, that judgment may be inaccurate. More likely, the output work is also done by FastMixer, while MixerThread only does part of the mixing and then ships its result to the FastMixer thread through the MonoPipe. The FastMixer thread re-mixes its own FastTrack results together with MT's mixing result, and then FastMixer performs the output. FM is defined in FastMixer.cpp, and its core is a threadLoop. Since the MT thread handles the preparation of all of AF's Tracks, FM's threadLoop basically just acts on the state it is handed. Notably, the synchronization here uses the low-level Linux futex (fast userspace mutex; see the sketch after the list below). Futex is the foundation on which POSIX mutexes are implemented, and I don't know why the author of this code didn't simply use a Mutex (probably still for efficiency; but how much worse could a Mutex really be? Code is written for people to read. He's really B4-ing, that is, looking down on, us ordinary coders...). Anyone playing with multithreading at this level has my admiration! (Homework: POSIX multithreaded programming.)
  • FastMixer also uses an AudioMixer of its own for mixing.
  • Then it writes the result out.....
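As for the futex mentioned above, here is a minimal sketch (Linux-specific, error handling omitted) of the two raw calls this kind of code builds on. A waiter sleeps only if *addr still equals expected, so no wakeup can be lost between the check and the sleep; pthread mutexes are themselves built on this same syscall:

    #include <linux/futex.h>   // FUTEX_WAIT / FUTEX_WAKE
    #include <sys/syscall.h>   // SYS_futex
    #include <unistd.h>        // syscall()
    #include <stdint.h>

    // Block the caller until woken, but only if *addr == expected at the
    // moment of the call; otherwise return immediately.
    static void futex_wait(int32_t* addr, int32_t expected) {
        syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
    }

    // Wake up to n threads currently waiting on addr.
    static void futex_wake(int32_t* addr, int n) {
        syscall(SYS_futex, addr, FUTEX_WAKE, n, NULL, NULL, 0);
    }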
That is a simple description of FM. As for the details: nobody has given me a real device, so I can't verify everything.... You are welcome to flash a 4.1 machine and lend it to me for research... (that is really not such a difficult thing; I can't quite bring myself to ask, but I can always dream). For today, knowing the general workflow of FM and MT is enough.

V. Other changes
Other changes include:
  • Debugging is clearly taken seriously: a large number of XXXDump classes have been added. Google evidently hit plenty of problems during its own development; who would bother writing dump support for a simple function otherwise?
  • An AudioWatchdog class has been added to monitor AF's performance, such as CPU usage.
VI. Summary
I remember that when I studied AF in 2.2, AudioFlinger was a bit over 3K lines; in JB it is over 9K lines, not counting the other helper classes. Overall, the trend of the JB changes is:
  • To make full use of multi-core resources, the emergence of FastMixer was inevitable, and the NBAIO interface came with it. I sense considerable new challenges for HAL writers.
  • TimedTrack and SyncEvent were added, giving RTP and multi-player scenarios a much better user experience.
  • Native-layer-to-Java-layer notification interfaces were added.
There are other things too... that's it for today. The tests facing us diaosi coders:
  1. You must become proficient in Linux OS programming and POSIX programming.
  2. The ability to analyze complex code must be leveled up as soon as possible; otherwise you won't be able to follow this stuff.