Over the past few days we have substantially reworked our ALSA audio driver, and I have come to a deeper appreciation of why mechanism and policy must be separated.
To put it bluntly, I have read Linux Device Drivers countless times, yet I kept overlooking one sentence in its Introduction: the distinction between mechanism and policy is one of the best ideas behind Unix design.
"What capabilities to provide" is the mechanism; "how those capabilities are used" is the policy. More than a year ago, when I first took over audio driver development, I confused mechanism with policy, mainly in the handling of audio paths.
The most typical example: when the codec detected that a headset was plugged in, the driver turned off the speaker and routed sound to the headset output; when it detected that the headset was unplugged, it closed the headset path and routed sound back to the speaker.
I thought this behavior was correct. However, it fails in some special scenarios: for example, with a headset plugged in, an alarm or an incoming-call ring should still come out of the speaker.
Application scenarios are so varied that the driver cannot assume any one policy is universally correct, and it should not impose one on its own. The driver only needs to provide the relevant capabilities; the upper layers call these interfaces as needed.
Take the audio path above as an example again:
1. The codec driver implements snd_kcontrol (mixer controls), DAPM widgets, audio_map routes, and so on, for example:
SOC_SINGLE("Headset Switch", xxx_HEADSET_REG, 0, 1, 0),
SOC_SINGLE("Headfree Switch", xxx_HEADFREE_REG, 0, 1, 0),
These two mixer controls switch the headset and headfree paths on and off.
2. When the Android system is running, you can use the tinymix command (Android 4.0; on earlier versions, use the alsa_amixer command) to see:
17 BOOL 1 Headset Switch Off
18 BOOL 1 Headfree Switch Off
Since no sound needs to be output at the moment, both paths are off.
3. In Android's audio hardware abstraction layer (audio_hw.c), routes can be defined as follows:
struct route_setting hf_output[] = {
    { .ctl_name = "Headfree Switch", .intval = 1, },
    /* end of the route_setting */
    { .ctl_name = NULL, },
};

struct route_setting hs_output[] = {
    { .ctl_name = "Headset Switch", .intval = 1, },
    /* end of the route_setting */
    { .ctl_name = NULL, },
};
In this way, the HAL can drive the underlying audio paths according to the audio policy delivered by AudioPolicyService:
headset_on   = adev->devices & AUDIO_DEVICE_OUT_WIRED_HEADSET;
headphone_on = adev->devices & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
speaker_on   = adev->devices & AUDIO_DEVICE_OUT_SPEAKER;
earpiece_on  = adev->devices & AUDIO_DEVICE_OUT_EARPIECE;
bt_on        = adev->devices & AUDIO_DEVICE_OUT_ALL_SCO;

/* select output stage */
set_route_by_array(adev->mixer, hs_output, headset_on | headphone_on);
set_route_by_array(adev->mixer, hf_output, speaker_on);