Author:
Cpuwolf
Reprinted from: http://blog.csdn.net/cpuwolf/article/details/4686830
There is very little Chinese-language material about ALSA (Advanced Linux Sound Architecture), probably because few Chinese developers have done codec driver development from scratch; vendors such as Wolfson and Realtek do most of that development abroad. ALSA's SoC support applies ALSA to the embedded field; it was added to ALSA later, and documentation for it is even scarcer. I spent a week finding almost nothing, and only gradually got a feel for ALSA by piecing together scattered clues. I record it here in the hope that it helps future study.
ALSA is generally divided into three layers: the ALSA driver, ALSA lib, and ALSA applications.
ALSA applications include aplay and arecord, which are part of alsa-utils. These tools are very handy for testing drivers.
ALSA lib provides the library functions for opening, closing, and playing audio devices.
The ALSA driver later gained SoC support (ASoC). It abstracts control of the hardware audio codec into the sound/soc/codecs directory, while code tied to a particular SoC is abstracted into its own directory (for OMAP, sound/soc/omap). This separation lets a single codec driver serve many different SoCs without modification. (The goal of driver architecture design is to abstract out general code as much as possible.)
One of the major components of ASoC is DAPM (Dynamic Audio Power Management), which divides power management into four domains:
- Codec domain - VREF, VMID (core codec and audio power). Usually controlled at codec probe/remove and suspend/resume, although it can be set at stream time if power is not needed for sidetone, etc.
- Platform/machine domain - physically connected inputs and outputs. This is platform/machine and user-action specific; it is configured by the machine driver and responds to asynchronous events, e.g. when headphones are inserted.
- Path domain - audio subsystem signal paths. Automatically set when mixer and mux settings are changed by the user, e.g. via alsamixer or amixer.
- Stream domain - DACs and ADCs. Enabled and disabled when stream playback/capture is started and stopped respectively, e.g. by aplay and arecord.
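As a concrete illustration of the platform/machine domain, a machine driver typically enables or disables DAPM endpoint pins when a jack event arrives and then resyncs DAPM. The sketch below is minimal and modeled on in-tree machine drivers; the pin names and the handler itself are invented for illustration, and the exact signatures vary across kernel versions (older kernels pass a struct snd_soc_codec *, as here):

```c
#include <sound/soc.h>
#include <sound/soc-dapm.h>

/* Hypothetical headphone-insertion handler in a machine driver.
 * Pin names ("Headphone Jack", "Ext Spk") are examples only. */
static void my_machine_hp_event(struct snd_soc_codec *codec, int inserted)
{
	if (inserted) {
		/* Route audio to the headphones, mute the speaker. */
		snd_soc_dapm_enable_pin(codec, "Headphone Jack");
		snd_soc_dapm_disable_pin(codec, "Ext Spk");
	} else {
		snd_soc_dapm_disable_pin(codec, "Headphone Jack");
		snd_soc_dapm_enable_pin(codec, "Ext Spk");
	}
	/* Walk the DAPM graph and apply the resulting power changes. */
	snd_soc_dapm_sync(codec);
}
```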
The codec domain code lives in the sound/soc/codecs directory, for example wm9713.c.
The platform/machine domain code is placed in a directory named after the machine, for example neo1973_wm8753.c under sound/soc/s3c24xx; the file name combines the machine name and the codec name.
You need this background when defining a codec's audio paths. Conceptually, DAPM divides a codec into the following widget types:
- Mixer - mixes several analog signals into a single analog signal.
- Mux - an analog switch that outputs only one of its inputs.
- PGA - a programmable gain amplifier or attenuation widget.
- ADC - analog to digital converter.
- DAC - digital to analog converter.
- Switch - an analog switch.
- Input - a codec input pin.
- Output - a codec output pin.
- Headphone - headphone (and optional jack).
- Mic - mic (and optional jack).
- Line - line input/output (and optional jack).
- Speaker - speaker.
- Pre - special PRE widget (executed before all others).
- Post - special POST widget (executed after all others).
As you can see, these widgets are abstractions of the hardware components inside the codec. An audio path is a route through these widgets, so it has the notions of input and output: apart from the endpoints of a path, every widget has an input port and an output port.
The input endpoint at the head of a path only outputs to the next widget; likewise, the output endpoint at the tail of a path only receives input from the previous widget. The SND_SOC_DAPM_INPUT and SND_SOC_DAPM_OUTPUT macros define the head and tail endpoints of a path.
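For example, a codec driver declares its external pins like this (a minimal sketch in the style of in-tree codec drivers; the pin names are illustrative):

```c
#include <sound/soc.h>
#include <sound/soc-dapm.h>

/* Path endpoints: physical codec pins. Names are illustrative. */
static const struct snd_soc_dapm_widget my_endpoint_widgets[] = {
	SND_SOC_DAPM_INPUT("MIC1"),     /* head of a capture path */
	SND_SOC_DAPM_INPUT("LINEIN"),
	SND_SOC_DAPM_OUTPUT("HPOUTL"),  /* tail of a playback path */
	SND_SOC_DAPM_OUTPUT("HPOUTR"),
};
```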
The codec contains mixers: a mixer sums several input signals into one output signal, and is defined with SND_SOC_DAPM_MIXER.
Similarly, the codec has muxes: a mux has several inputs, but only one of them can be connected to the output at a time. Such widgets are defined with SND_SOC_DAPM_MUX.
A PGA in hardware is just what its name says: generally one input and one output. It is defined with SND_SOC_DAPM_PGA.
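Putting the pieces together, here is a minimal sketch of how a codec driver might define a mux and a PGA and wire them into paths. All register addresses, bit positions, and widget names below are invented for illustration; the macros themselves (SND_SOC_DAPM_MUX, SND_SOC_DAPM_PGA, SOC_DAPM_ENUM, struct snd_soc_dapm_route) are the real ASoC ones:

```c
/* Mux control: selects which input feeds the output.
 * Register 0x10, bits [1:0] -- invented for this sketch. */
static const char *my_mux_texts[] = { "MIC1", "Line In" };
static const struct soc_enum my_mux_enum =
	SOC_ENUM_SINGLE(0x10, 0, 2, my_mux_texts);
static const struct snd_kcontrol_new my_mux_control =
	SOC_DAPM_ENUM("Route", my_mux_enum);

static const struct snd_soc_dapm_widget my_widgets[] = {
	/* Mux widget: no power bit of its own here (SND_SOC_NOPM). */
	SND_SOC_DAPM_MUX("Input Mux", SND_SOC_NOPM, 0, 0, &my_mux_control),
	/* PGA widget: powered by bit 3 of register 0x11 (invented). */
	SND_SOC_DAPM_PGA("Output PGA", 0x11, 3, 0, NULL, 0),
};

/* Routes: {sink, control, source}. A NULL control is a direct wire;
 * for a mux, the control string is one of the enum item names. */
static const struct snd_soc_dapm_route my_routes[] = {
	{ "Input Mux",  "MIC1",    "MIC1" },
	{ "Input Mux",  "Line In", "LINEIN" },
	{ "Output PGA", NULL,      "Input Mux" },
	{ "HPOUTL",     NULL,      "Output PGA" },
};
```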
OK. In your codec code, all possible paths are wired up. The problem is that while every audio path has a head and a tail, there are many possible routes in between, and not all of them should be active. In a mobile phone, for example, the possible paths include the speaker path, headphone path, A2DP path, GSM call path, and mic path, each of which must be opened or closed as needed. This question confused me for a long time: which part of ALSA is responsible for it? At first I suspected that the codec's audio paths were not fully defined, or that ALSA had a function for switching between these scenarios, or that these paths were defined in the machine driver code.
None of those guesses was right. An audio codec has many widgets, which can have arbitrary names; how could ALSA know how to operate them to switch to the desired path? In fact, ALSA lib does not concern itself with this: these individual switches are the job of the ALSA application layer and above. ALSA lib only provides the functions for operating mixer and mux controls; the actual switching logic belongs to your application.
The function for operating these controls is snd_mixer_selem_set_enum_item(); alsamixer also performs switching through this function.
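As a user-space sketch, the following shows how an application could flip a mux control through alsa-lib. The control name "Input Mux" and the item names are assumptions carried over from the sketch above; the alsa-lib calls themselves (snd_mixer_open, snd_mixer_find_selem, snd_mixer_selem_set_enum_item, etc.) are the standard API:

```c
#include <string.h>
#include <alsa/asoundlib.h>

/* Set the enumerated control "Input Mux" (name assumed for this
 * sketch) to the item whose name matches 'wanted'. */
static int select_mux_item(const char *wanted)
{
	snd_mixer_t *handle;
	snd_mixer_selem_id_t *sid;
	snd_mixer_elem_t *elem;
	int i, items, err = -1;
	char name[32];

	snd_mixer_open(&handle, 0);
	snd_mixer_attach(handle, "default");
	snd_mixer_selem_register(handle, NULL, NULL);
	snd_mixer_load(handle);

	snd_mixer_selem_id_alloca(&sid);
	snd_mixer_selem_id_set_index(sid, 0);
	snd_mixer_selem_id_set_name(sid, "Input Mux");
	elem = snd_mixer_find_selem(handle, sid);
	if (elem) {
		items = snd_mixer_selem_get_enum_items(elem);
		for (i = 0; i < items; i++) {
			snd_mixer_selem_get_enum_item_name(elem, i,
							   sizeof(name), name);
			if (!strcmp(name, wanted)) {
				/* This is the call alsamixer uses, too. */
				err = snd_mixer_selem_set_enum_item(elem,
						SND_MIXER_SCHN_MONO, i);
				break;
			}
		}
	}
	snd_mixer_close(handle);
	return err;
}
```

Calling select_mux_item("Line In") would then switch the (hypothetical) input mux from the mic to the line input, which is exactly the kind of path switching described above.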