http://blog.csdn.net/xuesen_lin/article/details/8805108
1.1 AudioPolicyService
In the AudioFlinger section we repeatedly emphasized that AudioFlinger is only the executor of policy, while AudioPolicyService is the policy-maker. This separation effectively reduces coupling in the system as a whole, and lets each module evolve independently.
1.1.1 AudioPolicyService Overview
Many Chinese sayings touch on strategy, such as "suit measures to local conditions" and "analyze each specific problem concretely"; a commander who fights battles strictly by the book is mocked as an armchair strategist and a bookworm. All of this tells us that it is important to understand the environment in which a strategy executes: only by clearly defining what the problem is can we formulate the right policy to solve it.
There are many types of sound in the Android system; the specific categories are as follows:
/* AudioManager.java */
public static final int STREAM_VOICE_CALL = 0;   /* call sound */
public static final int STREAM_SYSTEM = 1;       /* system sound */
public static final int STREAM_RING = 2;         /* ringtones and SMS alerts */
public static final int STREAM_MUSIC = 3;        /* music playback */
public static final int STREAM_ALARM = 4;        /* alarm */
public static final int STREAM_NOTIFICATION = 5; /* notification sound */
/* The following are hidden types, not open to upper-layer applications */
public static final int STREAM_BLUETOOTH_SCO = 6;   /* call over a Bluetooth SCO connection */
public static final int STREAM_SYSTEM_ENFORCED = 7; /* enforced system sounds, e.g. some countries
                                                       require the camera shutter sound to prevent
                                                       covert photography */
public static final int STREAM_DTMF = 8; /* DTMF tones */
public static final int STREAM_TTS = 9;  /* Text To Speech (TTS) */
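In the policy code these stream types are not routed individually; AudioPolicyManagerBase groups them into a handful of routing strategies. The following standalone sketch mirrors that grouping (a simplified illustration; the real logic lives in AudioPolicyManagerBase::getStrategy(), and the enum values here are stand-ins, not the AOSP definitions):

```cpp
#include <cassert>

// Stream types, mirroring the AudioManager constants above.
enum stream_type {
    STREAM_VOICE_CALL = 0, STREAM_SYSTEM, STREAM_RING, STREAM_MUSIC,
    STREAM_ALARM, STREAM_NOTIFICATION, STREAM_BLUETOOTH_SCO,
    STREAM_SYSTEM_ENFORCED, STREAM_DTMF, STREAM_TTS
};

// Each strategy groups streams that are routed the same way.
enum routing_strategy {
    STRATEGY_MEDIA, STRATEGY_PHONE, STRATEGY_SONIFICATION,
    STRATEGY_ENFORCED_AUDIBLE, STRATEGY_DTMF
};

// Simplified sketch of AudioPolicyManagerBase::getStrategy().
routing_strategy getStrategy(stream_type stream) {
    switch (stream) {
    case STREAM_VOICE_CALL:
    case STREAM_BLUETOOTH_SCO:
        return STRATEGY_PHONE;
    case STREAM_RING:
    case STREAM_ALARM:
    case STREAM_NOTIFICATION:
        return STRATEGY_SONIFICATION;
    case STREAM_SYSTEM_ENFORCED:
        return STRATEGY_ENFORCED_AUDIBLE;
    case STREAM_DTMF:
        return STRATEGY_DTMF;
    default:  // STREAM_MUSIC, STREAM_SYSTEM, STREAM_TTS, ...
        return STRATEGY_MEDIA;
    }
}
```

Grouping streams this way means a routing decision (for example "headset plugged in") needs to be made once per strategy rather than once per stream type.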
For all these audio types, AudioPolicyService must answer at least the following questions:
· Which hardware device should output each type of sound?
For example, a typical phone has an earpiece, a speaker, a headphone jack, and possibly Bluetooth devices. Suppose music plays through the speaker by default; when the user plugs in a headset, the strategy changes and output switches from the speaker to the headset. Likewise, while a headset is plugged in, music should not come out of the speaker, but when a ringtone arrives it should be output from both the speaker and the headset. Formulating these "audio strategies" is the responsibility of AudioPolicyService.
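The headset-versus-speaker behavior just described can be sketched as a pair of tiny decision functions. This is a hypothetical, self-contained illustration (the names and bit values are invented for the example; the real decisions are made in AudioPolicyManagerBase from audio_devices_t bitmasks):

```cpp
#include <cstdint>
#include <cassert>

// Hypothetical output-device bitmask, in the spirit of audio_devices_t.
const uint32_t DEVICE_OUT_SPEAKER       = 0x1;
const uint32_t DEVICE_OUT_WIRED_HEADSET = 0x2;

// Media: the headset wins over the speaker whenever it is available.
uint32_t deviceForMedia(uint32_t availableDevices) {
    if (availableDevices & DEVICE_OUT_WIRED_HEADSET)
        return DEVICE_OUT_WIRED_HEADSET;
    return DEVICE_OUT_SPEAKER;
}

// Ringtones: ring on the speaker AND the headset so the user never misses a call.
uint32_t deviceForRing(uint32_t availableDevices) {
    return DEVICE_OUT_SPEAKER | (availableDevices & DEVICE_OUT_WIRED_HEADSET);
}
```

Plugging in a headset simply changes the availableDevices mask; re-running the decision functions then yields the new routing, which is exactly the kind of re-evaluation AudioPolicyService performs on a device-connection event.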
· What is the routing strategy for a sound?
If you compare a music playback instance (say, MediaPlayer playing a song from the SD card) to a source IP address, then the playback device found in the previous step is the destination IP. In the TCP/IP world, a packet usually passes through several routers on its way from source to destination, and each router uses some algorithm to pick the next hop, thereby forming an optimal route, as shown below:
Figure 13-16 Routers
The problem AudioPolicyService solves is similar to the router's. More than one audio interface may exist in the system, each audio interface contains several outputs, and each output may in turn support several audio devices. This means that getting from a playback instance to the final device requires choosing an audio interface and an output; we call this AudioPolicyService's routing function.
· How is the volume of each audio type adjusted?
Different audio types have different adjustable volume ranges (for example, some are 0-15 while others are 1-20), and their default values also differ. Look at the definitions in AudioManager:
public static final int[] DEFAULT_STREAM_VOLUME = new int[] {
    4,  // STREAM_VOICE_CALL
    7,  // STREAM_SYSTEM
    5,  // STREAM_RING
    11, // STREAM_MUSIC
    6,  // STREAM_ALARM
    5,  // STREAM_NOTIFICATION
    7,  // STREAM_BLUETOOTH_SCO
    7,  // STREAM_SYSTEM_ENFORCED
    11, // STREAM_DTMF
    11  // STREAM_TTS
};
We devote a dedicated section later to volume adjustment.
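To anticipate that discussion: a stream's integer volume index is ultimately converted into a linear amplitude via a decibel curve. The following is a deliberately simplified sketch of that idea, assuming a straight-line dB ramp (the real conversion, AudioPolicyManagerBase::volIndexToAmpl(), uses per-stream piecewise curves, and the -50 dB floor here is an invented illustration value):

```cpp
#include <cmath>
#include <cassert>

// Map a stream volume index (0..maxIndex) to a linear amplitude in [0.0, 1.0].
// Index 0 mutes; otherwise attenuation ramps linearly in dB from -50 dB up to 0 dB.
float indexToAmpl(int index, int maxIndex) {
    if (index <= 0) return 0.0f;         // muted
    if (index >= maxIndex) return 1.0f;  // full volume, 0 dB
    float db = -50.0f * (1.0f - (float)index / maxIndex);
    return powf(10.0f, db / 20.0f);      // dB -> linear amplitude
}
```

The key property such a curve provides is perceptual smoothness: equal index steps give equal dB steps, which the ear hears as roughly equal loudness changes.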
To give the reader an intuitive feel for AudioPolicyService, we depict its relationship with AudioTrack and AudioFlinger:
Figure 13-17 The relationship between AudioPolicyService, AudioTrack and AudioFlinger
The elements in this diagram include AudioPolicyService, AudioTrack, AudioFlinger, PlaybackThread, and two audio devices (a speaker and a headset). Their relationships are as follows (note in particular that this example only illustrates how the elements relate; it does not claim that the strategy in the diagram is the one the Android system actually uses):
· A PlaybackThread (output) corresponds to one device
There are two devices in the figure, and correspondingly two PlaybackThreads. The thread on the left mixes its tracks and outputs to the speaker, while the one on the right outputs to the headset.
· At any given moment, the output device for a given audio type is uniform
That is, if STREAM_MUSIC currently corresponds to the speaker, then all audio of that type is output to the speaker. Combining this with the previous point, we can also conclude that all audio of the same type corresponds to the same PlaybackThread.
· AudioPolicyService plays the role of the router
AudioPolicyService's role in the whole selection process is somewhat like that of a network router: it has the authority to decide which device the audio stream produced by an AudioTrack will ultimately reach, just as a router uses some algorithm to decide which node a sender's packet should be forwarded to next.
Next we study AudioPolicyService from three aspects.
First, we look at how AudioPolicyService works, starting from its startup process.
Second, combining the diagram above, we analyze in detail how AudioPolicyService implements the "routing function".
Finally, we analyze what Android's default "routing strategy" actually is.
1.1.2 AudioPolicyService Startup Process
Remember when we analyzed the startup of AudioFlinger earlier, did you catch a glimpse of AudioPolicyService? Yes, the two reside in the same process:
/* frameworks/av/media/mediaserver/main_mediaserver.cpp */
int main(int argc, char** argv)
{
    ...
    AudioFlinger::instantiate();
    ...
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
So in theory AudioFlinger and AudioPolicyService could invoke each other through direct function calls. In practice, however, they still communicate through standard Binder IPC.
AudioPolicyService starts in much the same way as AudioFlinger, so we do not repeat that here and go straight to its constructor:
AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(), mpAudioPolicyDev(NULL), mpAudioPolicy(NULL)
{
    char value[PROPERTY_VALUE_MAX];
    const struct hw_module_t *module;
    int forced_val;
    int rc;
    ...
    rc = hw_get_module(AUDIO_POLICY_HARDWARE_MODULE_ID, &module); // Step 1.
    ...
    rc = audio_policy_dev_open(module, &mpAudioPolicyDev); // Step 2.
    ...
    rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
                                               &mpAudioPolicy); // Step 3.
    ...
    rc = mpAudioPolicy->init_check(mpAudioPolicy); // Step 4.
    ...
    // Step 5.
    property_get("ro.camera.sound.forced", value, "0");
    forced_val = strtol(value, NULL, 0);
    mpAudioPolicy->set_can_mute_enforced_audible(mpAudioPolicy, !forced_val);
    // Step 6.
    if (access(AUDIO_EFFECT_VENDOR_CONFIG_FILE, R_OK) == 0) {
        loadPreProcessorConfig(AUDIO_EFFECT_VENDOR_CONFIG_FILE);
    } else if (access(AUDIO_EFFECT_DEFAULT_CONFIG_FILE, R_OK) == 0) {
        loadPreProcessorConfig(AUDIO_EFFECT_DEFAULT_CONFIG_FILE);
    }
}
We explain the code above in six steps.
Step 1@AudioPolicyService::AudioPolicyService. Obtain the hw_module_t of the audio policy. The stock code provides two policy implementations, audio_policy.c and audio_policy_hal.cpp; by default the system chooses the latter (the corresponding library is libaudiopolicy_legacy).
Step 2@AudioPolicyService::AudioPolicyService. Use the hw_module_t from the previous step to open an audio policy device (this is not a real hardware device; rather, the policy is virtualized as a device). This design gives audio hardware vendors a great deal of flexibility in implementing their own audio strategies. In the stock code, audio_policy_dev_open invokes the open routine in audio_policy_hal.cpp, and the resulting policy device implementation is legacy_ap_device.
Step 3@AudioPolicyService::AudioPolicyService. Use the policy device above to create a policy; the corresponding implementation is create_legacy_ap@audio_policy_hal.cpp. This function first allocates a legacy_audio_policy (defined in audio_policy_hal.cpp), and mpAudioPolicy then points to its legacy_audio_policy::policy member. Besides this, legacy_audio_policy contains the following important member variables:
struct legacy_audio_policy {
    struct audio_policy policy;
    void *service;
    struct audio_policy_service_ops *aps_ops;
    AudioPolicyCompatClient *service_client;
    AudioPolicyInterface *apm;
};
Here aps_ops is a set of function pointers supplied by AudioPolicyService; it is the interface through which AudioPolicyService communicates with the outside world, and we will run into it frequently later.
The last member, apm, is short for AudioPolicyManager. AudioPolicyInterface is its base class, and in the stock implementation apm is an AudioPolicyManagerDefault object, created in create_legacy_ap:
static int create_legacy_ap(const struct audio_policy_device *device,
                            struct audio_policy_service_ops *aps_ops,
                            void *service,
                            struct audio_policy **ap)
{
    struct legacy_audio_policy *lap;
    ...
    lap->apm = createAudioPolicyManager(lap->service_client);
    ...
}
By default the function createAudioPolicyManager corresponds to the implementation in AudioPolicyManagerDefault.cpp, so it returns an AudioPolicyManagerDefault.
Feeling dizzy from all these policy-related classes? Why are so many classes needed? Let's first look at the relationships between them:
Figure 13-18 The relationships among the audio-policy-related classes
It looks complicated, but it boils down to the following facts:
· AudioPolicyService holds only an object resembling an interface class, namely audio_policy. In other words, AudioPolicyService is a "shell", while audio_policy is a plug-in that satisfies its requirements. The interface between the plug-in and the shell is fixed, but the internal implementation can be tailored to each manufacturer's needs.
· We know that audio_policy is actually a C struct containing various function pointers, such as get_output, start_output, and so on. At initialization these function pointers must be made to point to concrete implementations, which are ap_get_output, ap_start_output, and so on in audio_policy_hal.
· The data types above are mostly "shells"; the real implementation is AudioPolicyManager. Three classes are involved here: AudioPolicyInterface is their base class, AudioPolicyManagerBase implements the basic strategies, and AudioPolicyManagerDefault is the final implementation class. Besides AudioPolicyService, the latter two classes are the focus of our study of audio policy.
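The "shell plus plug-in" arrangement can be shown with a self-contained miniature. The names below are invented for the example, but the wiring copies the pattern used in audio_policy_hal.cpp: a C struct of function pointers is embedded as the first member of a larger struct, and each C callback casts back to recover the C++ implementation:

```cpp
#include <cassert>

// The fixed interface the "shell" (AudioPolicyService) sees: a C struct of function pointers.
struct audio_policy {
    int (*get_output)(struct audio_policy *pol, int stream);
};

// Stand-in for the vendor's real implementation class (AudioPolicyManager).
struct PolicyManager {
    int getOutput(int stream) { return stream == 3 /* MUSIC */ ? 42 : 1; }  // toy decision
};

// The "plug-in" wraps the C++ implementation behind the C interface,
// the way legacy_audio_policy embeds audio_policy as its first member.
struct legacy_policy {
    audio_policy policy;  // must stay the first member so the cast below is valid
    PolicyManager *apm;
};

static int ap_get_output(audio_policy *pol, int stream) {
    legacy_policy *lp = (legacy_policy *)pol;  // recover the enclosing struct
    return lp->apm->getOutput(stream);
}

legacy_policy *create_policy(PolicyManager *apm) {
    legacy_policy *lp = new legacy_policy;
    lp->policy.get_output = ap_get_output;  // wire the pointer to the implementation
    lp->apm = apm;
    return lp;
}
```

Because the shell only ever touches the audio_policy struct, a vendor can swap in a completely different PolicyManager without AudioPolicyService noticing.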
Step 4@AudioPolicyService::AudioPolicyService. Initialization check; the stock implementation simply returns 0.
Step 5@AudioPolicyService::AudioPolicyService. Determine whether the camera shutter sound is enforced.
Step 6@AudioPolicyService::AudioPolicyService. Load the audio effects configuration file (if one exists); the file paths are as follows:
AUDIO_EFFECT_DEFAULT_CONFIG_FILE "/system/etc/audio_effects.conf"
AUDIO_EFFECT_VENDOR_CONFIG_FILE  "/vendor/etc/audio_effects.conf"
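For orientation, an audio_effects.conf roughly follows this shape (a trimmed illustration from memory, not a verbatim copy of any shipped file; the uuid is a placeholder, and real entries use the uuid published by each effect library):

```
libraries {
  bundle {
    path /system/lib/soundfx/libbundlewrapper.so
  }
}
effects {
  bass_boost {
    library bundle
    uuid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  }
}
```

The libraries block names the effect shared objects, and each entry in the effects block binds an effect name to a library plus the uuid used to look it up.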
With this, AudioPolicyService completes its construction; its name registered with ServiceManager is "media.audio_policy". The mpAudioPolicy variable it holds is the actual policy-maker, and it is created by the HAL layer; in other words, the policy is executed according to the hardware vendor's own "will".
1.1.3 AudioPolicyService Loads the Audio Devices
In the "Device Management" part of the AudioFlinger section we briefly mentioned that AudioPolicyService loads the audio devices present in the system by parsing a configuration file. Specifically, an audio policy device (mpAudioPolicyDev) is opened when AudioPolicyService is constructed, and an audio policy (mpAudioPolicy) is created from it; the default implementation of this policy is legacy_audio_policy::policy (of type audio_policy). legacy_audio_policy also contains an AudioPolicyInterface member variable, initialized to an AudioPolicyManagerDefault, as we analyzed in the previous section.
So when does AudioPolicyService actually load the audio devices?
Apart from later dynamic additions, the other important path is the constructor of AudioPolicyManagerBase, the parent class of AudioPolicyManagerDefault.
AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface) ...
{
    mpClientInterface = clientInterface;
    ...
    if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {
        if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {
            defaultAudioPolicyConfig();
        }
    }
    for (size_t i = 0; i < mHwModules.size(); i++) {
        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
        if (mHwModules[i]->mHandle == 0) {
            continue;
        }
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
        {
            const IOProfile *outProfile = mHwModules[i]->mOutputProfiles[j];
            if (outProfile->mSupportedDevices & mAttachedOutputDevices) {
                AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(outProfile);
                outputDesc->mDevice = (audio_devices_t)(mDefaultOutputDevice &
                                                        outProfile->mSupportedDevices);
                audio_io_handle_t output = mpClientInterface->openOutput(...);
                ...
            }
        }
    }
    ...
Different Android products often differ in their audio design. Using a configuration file (audio_policy.conf) lets a manufacturer conveniently describe the audio devices its product contains. The file has two possible locations:
#define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"
#define AUDIO_POLICY_CONFIG_FILE "/system/etc/audio_policy.conf"
If audio_policy.conf does not exist, the system uses a default configuration, implemented in defaultAudioPolicyConfig. The following information can be read from the configuration file:
· Which audio interfaces exist, e.g. whether there is a "primary", "a2dp", or "usb" interface
· The properties of each audio interface, such as the supported sampling_rates and formats and which devices it supports. These properties are read while the configuration is parsed (see loadAudioPolicyConfig) and stored in hwModule->mOutputProfiles. Each audio interface may contain several outputs and inputs, and each output/input in turn carries several supported attributes, as the following diagram shows:
Figure 13-19 The elements of audio_policy.conf and their relationships
You can open an audio_policy.conf yourself to see the format requirements of this file; we do not explain it in depth here.
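As a rough illustration (abridged and from memory, so treat it as a sketch rather than a reference; actual files ship per device), a primary interface entry in audio_policy.conf looks like this:

```
global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}
audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADSET
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
  }
}
```

Each block under audio_hw_modules becomes an HwModule, and each named block under outputs becomes one of the output profiles stored in mOutputProfiles, matching the diagram above.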
After reading the configuration, the next thing to do is open these devices. AudioPolicyService is only the policy-maker, not the performer, so who does the actual work? Yes, it must be AudioFlinger. Notice the mpClientInterface variable in the code above: is it related to AudioFlinger? Let's first work out where this variable comes from.
Clearly mpClientInterface is initialized in the first line of the AudioPolicyManagerBase constructor. Tracing back, its origin is found in the AudioPolicyService constructor, in the following statement:
rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this, &mpAudioPolicy);
In this scenario create_audio_policy corresponds to create_legacy_ap, and the aps_ops passed in is wrapped into an AudioPolicyCompatClient object; that object is what mpClientInterface points to.
In other words, mpClientInterface->loadHwModule actually invokes the corresponding callback in aps_ops, namely:
static audio_module_handle_t aps_load_hw_module(void *service, const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    ...
    return af->loadHwModule(name);
}
AudioFlinger finally makes its appearance. The same applies to mpClientInterface->openOutput; the code is as follows:
static audio_io_handle_t aps_open_output(...)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    ...
    return af->openOutput((audio_module_handle_t)0, pDevices, pSamplingRate, pFormat,
                          pChannelMask, pLatencyMs, flags);
}
Back in the AudioPolicyManagerBase constructor, the for loop has two goals:
· Use loadHwModule to load the audio interfaces parsed from audio_policy.conf, i.e. the elements of the mHwModules array
· Use openOutput to open every output contained in each audio interface
We analyzed the AudioFlinger-side implementations of these two functions in the previous section, and here everything finally comes together: through AudioPolicyManagerBase, AudioPolicyService parses the audio configuration of the device and then deploys the whole audio system using the interfaces AudioFlinger provides. This supplies the underlying support that upper-layer applications need in order to use audio devices. In the next section we look at how an upper-layer application actually uses this framework to play audio.
The AudioPolicyService of the Android Audio System