I previously reposted an article, "An Overview of the Smartphone Audio System", which describes the design of a mobile phone audio system. That article is really just a simple exercise, though, and it has serious limitations in practice. So what should a proper audio framework look like? Over the past two days, following some clues in the Android 4.0 source code, I dug up the relevant hardware documentation and summarized it here.
Note: Samsung's tuna platform (the Galaxy Nexus) is used as the example throughout.
Audio_hw
This section covers the HAL of the Android 4.0 audio system; Samsung's tuna platform is what ships on the well-known Galaxy Nexus.
android-4.0.3_r1/device/samsung/tuna/audio/audio_hw.c is the tuna audio HAL. From it we can see that the corresponding PCM devices are opened and closed according to the audio policy decided by the upper layers:
struct pcm_config pcm_config_mm = {
    .channels = 2,
    .rate = MM_FULL_POWER_SAMPLING_RATE,
    .period_size = LONG_PERIOD_SIZE,
    .period_count = PLAYBACK_LONG_PERIOD_COUNT,
    .format = PCM_FORMAT_S16_LE,
};

struct pcm_config pcm_config_mm_ul = {
    .channels = 2,
    .rate = MM_FULL_POWER_SAMPLING_RATE,
    .period_size = SHORT_PERIOD_SIZE,
    .period_count = CAPTURE_PERIOD_COUNT,
    .format = PCM_FORMAT_S16_LE,
};

struct pcm_config pcm_config_vx = {
    .channels = 2,
    .rate = VX_NB_SAMPLING_RATE,
    .period_size = 160,
    .period_count = 2,
    .format = PCM_FORMAT_S16_LE,
};
1. mm: the media playback device, i.e. the audio downlink;
2. mm_ul: the audio record device, i.e. the audio uplink;
3. vx: the voice device; voice-call audio from the modem passes through this device.
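The period_size, period_count, and rate fields above determine each device's buffer depth, and therefore its latency. A minimal sketch of that arithmetic (buffer_latency_ms() is my own illustrative helper, not part of the tuna HAL; 8 kHz is assumed for VX_NB_SAMPLING_RATE, i.e. narrowband voice):

```c
/* Hypothetical helper showing how pcm_config fields translate into
 * buffer latency; this function does not exist in the tuna HAL. */
static unsigned buffer_latency_ms(unsigned period_size,
                                  unsigned period_count,
                                  unsigned rate)
{
    /* total frames buffered = period_size * period_count;
     * each frame lasts 1000 / rate milliseconds */
    return period_size * period_count * 1000u / rate;
}
```

For pcm_config_vx above (160 frames per period, 2 periods, narrowband 8 kHz assumed), this works out to a 40 ms buffer.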
Different PCM devices are opened depending on the upper-layer sound mode (audio_mode_t); see the select_mode() function for details.
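The idea behind that mode-based switching can be sketched as follows. This is only an illustration, not the real select_mode(): the AUDIO_MODE_* names mirror AOSP's audio_mode_t, but the DEV_* flags and devices_for_mode() are hypothetical names introduced for this example.

```c
/* Sketch only: which PCM devices a HAL might keep open per audio mode.
 * The enum values mirror AOSP's audio_mode_t; everything else here is
 * a simplified stand-in for the tuna HAL's select_mode() logic. */
typedef enum {
    AUDIO_MODE_NORMAL,
    AUDIO_MODE_RINGTONE,
    AUDIO_MODE_IN_CALL,
    AUDIO_MODE_IN_COMMUNICATION,
} audio_mode_t;

enum {
    DEV_MM    = 1 << 0,  /* media downlink  (pcm_config_mm)    */
    DEV_MM_UL = 1 << 1,  /* record uplink   (pcm_config_mm_ul) */
    DEV_VX    = 1 << 2,  /* voice-call path (pcm_config_vx)    */
};

/* Return the set of PCM devices that should be open in this mode. */
static int devices_for_mode(audio_mode_t mode)
{
    switch (mode) {
    case AUDIO_MODE_IN_CALL:
        /* in a call, route audio through the voice PCM device */
        return DEV_VX;
    default:
        /* otherwise use the ordinary playback and capture devices */
        return DEV_MM | DEV_MM_UL;
    }
}
```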
audio_hw.c also defines the various audio paths (for the concept of an audio path, see "DAPM part 2: audio paths and DAPM kcontrols"):
struct route_setting hf_output[] = {
    {
        .ctl_name = MIXER_HF_LEFT_PLAYBACK,
        .strval = MIXER_PLAYBACK_HF_DAC,
    },
    {
        .ctl_name = MIXER_HF_RIGHT_PLAYBACK,
        .strval = MIXER_PLAYBACK_HF_DAC,
    },
    {
        .ctl_name = NULL,
    },
};

struct route_setting hs_output[] = {
    {
        .ctl_name = MIXER_HS_LEFT_PLAYBACK,
        .strval = MIXER_PLAYBACK_HS_DAC,
    },
    {
        .ctl_name = MIXER_HS_RIGHT_PLAYBACK,
        .strval = MIXER_PLAYBACK_HS_DAC,
    },
    {
        .ctl_name = NULL,
    },
};

// ......
1. hf_output: the hands-free (speaker) output path;
2. hs_output: the headset output path;
3. ...
The corresponding audio path components are then enabled or disabled according to the upper-layer audio_devices_t, for example:
// ...
headset_on = adev->devices & AUDIO_DEVICE_OUT_WIRED_HEADSET;
headphone_on = adev->devices & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
// ...
set_route_by_array(adev->mixer, hs_output, headset_on | headphone_on);
set_route_by_array(adev->mixer, hf_output, speaker_on);
// ...
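The set_route_by_array() loop itself is easy to picture: walk the NULL-terminated route_setting table and write each control. A sketch of that logic against a stubbed mixer follows; the real tuna HAL drives tinyalsa (mixer_get_ctl_by_name(), mixer_ctl_set_enum_by_string()), and everything below struct route_setting here is a mock I introduce purely so the control-walking logic can be shown in isolation.

```c
#include <string.h>

struct route_setting {
    const char *ctl_name;   /* mixer control name; NULL terminates the list */
    int intval;             /* value for integer controls */
    const char *strval;     /* value for enumerated controls */
};

/* --- mock mixer: just records every (control, value) write --- */
#define MAX_WRITES 16
struct mock_mixer {
    const char *name[MAX_WRITES];
    const char *val[MAX_WRITES];
    int n;
};

static void mock_set(struct mock_mixer *m, const char *name, const char *val)
{
    if (m->n < MAX_WRITES) {
        m->name[m->n] = name;
        m->val[m->n] = val;
        m->n++;
    }
}

/* Walk the route table; on enable, write each control's value; on
 * disable, write "Off" for enumerated controls (the real HAL also
 * writes 0 for integer controls, omitted in this sketch). */
static void set_route_by_array(struct mock_mixer *m,
                               const struct route_setting *route, int enable)
{
    for (int i = 0; route[i].ctl_name != NULL; i++) {
        if (route[i].strval != NULL)
            mock_set(m, route[i].ctl_name, enable ? route[i].strval : "Off");
    }
}
```

The NULL ctl_name sentinel is why every route array in audio_hw.c ends with an empty entry.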
The mode and device above are decided by Android's upper-layer audio policy, which is beyond the scope of this article. At this level we only need to open the correct audio path as the audio policy dictates.
One thing puzzles me, though: tuna's audio_hw.c uses a 48 kHz sampling rate, while the Android framework layer first resamples (SRC) everything to 44.1 kHz. That repeats Qualcomm's mistake of double SRC (48 kHz -> 44.1 kHz -> 48 kHz), which seriously degrades sound quality. Why not use a 44.1 kHz sampling rate directly?
Kernel
As we know, the Galaxy Nexus uses the OMAP4460, and its audio codec is the twl6040. So we download the OMAP kernel code:
$ git clone https://android.googlesource.com/kernel/omap.git
See: http://source.android.com/source/downloading.html
For the purposes of this article, we only need to look at the following source files:
1. sound/soc/codecs/twl6040.c
2. sound/soc/omap/omap-abe-dsp.c and sound/soc/omap/omap-abe.c
twl6040.c is the driver code for the twl6040 audio codec; it is generic, platform-independent code.
omap-abe is the OMAP4460 audio back end (ABE) driver code, which is platform-specific. From the hardware diagram below, we will see that audio_hw.c largely controls the ABE directly.
Note that the OMAP DSP code itself is provided as firmware; omap-abe-dsp.c is responsible for calling the DSP's interface functions.
Hardware Diagram
The omap4460 Data Manual and design documents are as follows:
Datasheet: http://www.ti.com/general/docs/wtbu/wtbuproductcontent.tsp?templateId=6123&navigationId=12843&contentId=53243
ABE: http://focus.ti.com/pdfs/wtbu/OMAP4430_ES2.x_4460_ES1.0_PUBLIC_TRM_Addendum_ABE_HAL_vC.pdf
The audio system diagram is as follows:
On the left is the OMAP ABE, and on the right is the codec, the twl6040. As the diagram shows, audio is first processed by the OMAP ABE and then sent to the codec for output.
Downlink call path: ABE [vx_dl -> dl_mixer -> ...] -> pdm_dl -> codec [DAC -> earpiece / hands-free / headset]
The ABE side also includes EQ, gain, SRC, and echo components, which are not listed here. Clearly the OMAP ABE is a very complex module: the sound is fully processed there before being sent to the codec. By comparison, the codec side's job is much simpler.