Brief Introduction to Linux Audio Drivers

I. Digital Audio

An audio signal is a continuous analog signal, but a computer can only process and record binary digital signals. Audio obtained from a natural sound source must therefore undergo a conversion before the resulting digital audio signal can be sent to the computer for further processing.

A digital audio system converts the sound wave into a series of binary data that can later be used to reproduce the original sound; the device that performs this step is usually called an analog-to-digital (A/D) converter. The A/D converter samples the sound wave tens of thousands of times per second. Each sampling point records the state of the original analog waveform at a particular instant and is usually called a sample; the number of samples taken per second is called the sampling frequency. By stringing a series of consecutive samples together, a piece of sound can be described inside the computer. For each sample, the digital audio system allocates a certain amount of storage to record the amplitude of the sound wave; this is generally called the sampling resolution or sampling accuracy. The higher the sampling accuracy, the more faithfully the sound can be reproduced.

Many concepts are involved in digital audio. For programmers doing audio work on Linux, the most important thing is to understand the two key steps of audio digitization: sampling and quantization. Sampling means reading the amplitude of the sound signal at a given moment, while quantization means converting the sampled amplitude into a digital value. In essence, sampling digitizes time, and quantization digitizes amplitude.

The following describes several technical indicators that are frequently used in audio programming:

Sampling frequency

The sampling frequency is the number of amplitude samples taken per second when an analog sound waveform is digitized. Its choice should follow the Nyquist sampling theorem: the highest signal frequency that can be recovered after sampling is only half of the sampling frequency; in other words, as long as the sampling frequency is at least twice the highest frequency in the input signal, the original signal can be reconstructed from the sample series. Normal human hearing ranges from 20 Hz to 20 kHz, so according to the Nyquist theorem the sampling frequency should be around 40 kHz to avoid audible distortion. Common audio sampling frequencies are 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, and 48 kHz; the higher rates can achieve DVD-quality sound.

Quantization bits

The number of quantization bits is the number of bits used to represent the amplitude of the analog audio signal, and it determines the dynamic range of the digitized signal. Commonly used values are 8, 12, and 16 bits. The more quantization bits, the larger the dynamic range of the signal and the closer the digital audio is to the original, but the more storage space it requires.

Channels

The number of channels is another important factor in digital audio quality. Audio can be single-channel (mono) or dual-channel. Dual-channel, also known as stereo, uses two lines in hardware; both sound quality and timbre are better than mono, but the digitized data occupies twice the storage space.
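Taken together, the sampling frequency, quantization bits, and channel count determine how much data a stream of uncompressed PCM audio produces. The small sketch below (not from the original article) illustrates the arithmetic for CD-quality audio:

#include <stdio.h>

int main(void)
{
    unsigned int rate = 44100;   /* sampling frequency in Hz     */
    unsigned int bits = 16;      /* quantization bits per sample */
    unsigned int channels = 2;   /* 2 = stereo, 1 = mono         */

    /* bytes of PCM data produced per second of sound */
    unsigned long bytes_per_second = (unsigned long)rate * (bits / 8) * channels;

    printf("%lu bytes per second (about %.2f MB per minute)\n",
           bytes_per_second, bytes_per_second * 60 / (1024.0 * 1024.0));
    return 0;
}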

II. Sound Card Drivers

For security reasons, Linux applications cannot operate hardware devices such as sound cards directly; they must go through the driver provided by the kernel. The essence of audio programming on Linux is using the driver to perform operations on the sound card. Controlling the hardware involves manipulating individual bits in its registers, an operation that is device-specific and has strict timing requirements. If application programmers had to handle these operations themselves, sound card programming would become very complex and difficult; the role of the driver is to hide these low-level hardware details and thereby simplify application development.

Currently, two types of sound card drivers are commonly used in Linux: OSS and ALSA.

The biggest difference between ALSA and OSS is that ALSA is a free project maintained by volunteers, while OSS is a commercial product. OSS is therefore better than ALSA in terms of hardware support and covers more sound card models. Although ALSA is not as widely used as OSS, it offers a friendlier programming interface and is fully compatible with OSS, so it is undoubtedly the better choice for application programmers.

III. Linux OSS Audio Device Driver

3.1 Composition of the OSS Driver

The OSS standard defines two basic audio devices: the mixer and the DSP (digital signal processor).

In the sound card's hardware circuitry, the mixer is a very important component: its function is to combine or mix multiple signals. The role of the mixer may differ between sound cards. In the OSS driver, the device file /dev/mixer is the software interface through which applications operate the mixer.

The mixer circuitry consists of two parts: the input mixer and the output mixer. The input mixer receives analog signals from several different sources, which are sometimes called mixing channels or mixing devices. Each analog signal first passes through a software-controlled gain controller and volume regulator in its own mixing channel, and is then fed into the input mixer for mixing. An electronic switch on the mixer controls which channels actually have signals connected to it; some sound cards allow only one mixing channel to be used as the recording source, while others allow arbitrary combinations of mixing channels. The signal produced by the input mixer is still analog and is sent to the A/D converter for digitization.

The output mixer works similarly to the input mixer: several signal sources are connected to it, and their gains are adjusted beforehand. When the output mixer combines all of the analog signals, a master gain regulator usually controls the overall output volume, and additional tone controls may adjust the tone of the output sound. The signal produced by the output mixer is also analog and is finally sent to the speakers or other analog output devices. Programming the mixer means setting gain controller levels and switching between audio sources. These operations are not continuous and do not consume large amounts of computer resources the way recording or playback do. Because mixer operations do not fit the typical read/write model, almost everything other than open() and close() is done through the ioctl() system call. Unlike /dev/dsp, /dev/mixer allows simultaneous access by multiple applications, and mixer settings persist until the corresponding device file is closed.

The DSP, also known as the codec, implements both recording and playback; its device file is /dev/dsp or /dev/sound/dsp. The /dev/dsp provided by the OSS sound card driver is the device file used for digital sampling and digital playback. Writing data to this device activates the D/A converter on the sound card to produce sound; reading data from it activates the A/D converter on the sound card for recording.

When data is read from the DSP device, the analog signal coming into the sound card is converted by the A/D converter into digital samples, which are stored in a kernel buffer maintained by the sound card driver. When the application reads from the sound card with the read() system call, the samples stored in the kernel buffer are copied into the user buffer specified by the application. Note that the sampling frequency is determined by the driver in the kernel, not by how fast the application reads from the sound card. If the application reads too slowly, below the card's sampling frequency, the excess data is discarded (overflow); if it reads too fast, the driver blocks the application until new data arrives.

When data is written to the DSP device, the digital samples are converted into an analog signal by the D/A converter, which then produces sound. The application must write data at a rate at least equal to the sound card's sampling frequency; if it writes too slowly, the sound will pause or stutter (underflow). If it writes too fast, the application is blocked by the sound card driver in the kernel until the hardware is able to process new data.

Unlike most other devices, sound cards usually do not support non-blocking I/O well. Even if the kernel OSS driver provides non-blocking I/O support, user space should not rely on it.

Whether data is being read from or written to the sound card, it always has a specific format, for example unsigned 8-bit, single-channel, 8 kHz sampling rate. If the default format does not meet your needs, you can change it with the ioctl() system call. In general, after opening /dev/dsp, an application should set an appropriate format before reading or writing any data.
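As a small illustration of this (a sketch, not code from the original article), the OSS format ioctls pass an int by address and the driver writes back the value it actually selected, so the application should check the result:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int set_dsp_format(int fd)
{
    int format = AFMT_S16_LE;                 /* requested format */

    if (ioctl(fd, SNDCTL_DSP_SETFMT, &format) == -1) {
        perror("SNDCTL_DSP_SETFMT");
        return -1;
    }
    if (format != AFMT_S16_LE) {              /* driver chose something else */
        fprintf(stderr, "driver selected format %d instead\n", format);
        return -1;
    }
    return 0;
}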

3.2 Mixer Interface

int register_sound_mixer(struct file_operations *fops, int dev);

The function above registers a mixer device. The fops parameter is the file operation interface, and dev is the device number; if -1 is passed, the system assigns a device number automatically. A mixer is a typical character device, so the main coding task is implementing functions such as open() and ioctl() in file_operations.

The most important function in the mixer's file_operations interface is ioctl(), which implements the various I/O control commands for the mixer.
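As a rough illustration (a sketch only, not code from the original article, assuming a 2.6-series kernel in which file_operations still has the legacy .ioctl member; my_mixer_volume is a hypothetical placeholder for driver state), a mixer ioctl handler might look like this:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/soundcard.h>
#include <linux/uaccess.h>

static int my_mixer_volume = 0x4b4b;   /* hypothetical stored volume, left/right packed as in OSS */

static int my_mixer_ioctl(struct inode *inode, struct file *file,
                          unsigned int cmd, unsigned long arg)
{
    int val;

    switch (cmd) {
    case SOUND_MIXER_WRITE_VOLUME:
        if (get_user(val, (int __user *)arg))
            return -EFAULT;
        my_mixer_volume = val;         /* a real driver would also program the hardware here */
        return put_user(my_mixer_volume, (int __user *)arg);
    case SOUND_MIXER_READ_VOLUME:
        return put_user(my_mixer_volume, (int __user *)arg);
    default:
        return -EINVAL;
    }
}

static struct file_operations my_mixer_fops = {
    .owner = THIS_MODULE,
    .ioctl = my_mixer_ioctl,           /* on kernels >= 2.6.36 this would be .unlocked_ioctl */
};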

3.3 DSP Interface

int register_sound_dsp(struct file_operations *fops, int dev);

The function above is similar to register_sound_mixer(); it registers a DSP device. The fops parameter is the file operation interface, and dev is the device number; if -1 is passed, the system assigns a device number automatically. The DSP is also a typical character device, so the main coding task is implementing read(), write(), ioctl(), and related functions.

The read() and write() functions in the DSP interface's file_operations are very important: read() obtains recorded data from the audio controller into a buffer and copies it to user space, while write() copies audio data from user space into a kernel buffer and finally sends it to the audio controller.

The ioctl() function in the DSP interface's file_operations handles the I/O control commands that set parameters such as the sampling rate, quantization precision, and DMA buffer block size.
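A very rough skeleton of the read() and write() entry points is sketched below (again not from the original article; my_pcm_buf and MY_PCM_BUF_SIZE are hypothetical placeholders for the driver's buffering and DMA handling, which is omitted here):

#include <linux/fs.h>
#include <linux/uaccess.h>

#define MY_PCM_BUF_SIZE 4096

static char my_pcm_buf[MY_PCM_BUF_SIZE];    /* hypothetical kernel ring/DMA buffer */

static ssize_t my_dsp_read(struct file *file, char __user *buf,
                           size_t count, loff_t *ppos)
{
    if (count > MY_PCM_BUF_SIZE)
        count = MY_PCM_BUF_SIZE;
    /* a real driver would sleep here until the A/D converter has
       filled a fragment of the DMA buffer with recorded samples */
    if (copy_to_user(buf, my_pcm_buf, count))
        return -EFAULT;
    return count;
}

static ssize_t my_dsp_write(struct file *file, const char __user *buf,
                            size_t count, loff_t *ppos)
{
    if (count > MY_PCM_BUF_SIZE)
        count = MY_PCM_BUF_SIZE;
    if (copy_from_user(my_pcm_buf, buf, count))
        return -EFAULT;
    /* a real driver would now start (or continue) DMA so that the
       audio controller plays the samples through the D/A converter */
    return count;
}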

Copying data between the buffer and the audio controller usually relies on DMA, which is very important for sound cards. For playback, for example, the driver programs the DMA controller with the source address (the DMA buffer in memory), the destination address (the audio controller's FIFO), and the transfer length; the DMA controller then automatically moves data from the buffer into the FIFO until the requested length has been transferred.

In an OSS driver it is recommended to create a ring buffer for the audio data. In addition, the OSS driver normally divides a large DMA buffer into several blocks of equal size, called fragments; each time, the driver uses DMA to move one fragment between the buffer and the sound card. From user space, the fragment size and count can be adjusted with an ioctl() system call.
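From the application side this looks roughly as follows (a sketch under the usual OSS convention that the argument packs the fragment count in the upper 16 bits and the fragment size, as a power of two, in the lower 16 bits):

#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* ask the driver for 4 fragments of 2^12 = 4096 bytes each */
int request_fragments(int dsp_fd)
{
    int arg = (4 << 16) | 12;     /* count << 16 | log2(fragment size) */

    /* the driver treats this as a hint and may round the values */
    return ioctl(dsp_fd, SNDCTL_DSP_SETFRAGMENT, &arg);
}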

Besides read(), write(), and ioctl(), the DSP interface usually also needs to implement poll(), so that user space can find out whether the DMA buffer is currently readable or writable.

During initialization, the OSS driver calls register_sound_dsp() and register_sound_mixer() to register the DSP and mixer devices; when the module is unloaded, it calls unregister_sound_dsp(audio_dev_dsp) and unregister_sound_mixer(audio_dev_mix).
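Putting the two registration calls together, the module entry and exit paths might look like the sketch below (hypothetical names; my_dsp_fops and my_mixer_fops stand for file_operations structures like the ones sketched in sections 3.2 and 3.3 and are assumed to be defined elsewhere in the driver):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/sound.h>

extern struct file_operations my_dsp_fops;    /* hypothetical, defined elsewhere in the driver */
extern struct file_operations my_mixer_fops;  /* hypothetical, defined elsewhere in the driver */

static int audio_dev_dsp;     /* minor numbers returned by the registration calls */
static int audio_dev_mix;

static int __init my_oss_init(void)
{
    audio_dev_dsp = register_sound_dsp(&my_dsp_fops, -1);   /* -1: let the core pick a minor */
    if (audio_dev_dsp < 0)
        return audio_dev_dsp;

    audio_dev_mix = register_sound_mixer(&my_mixer_fops, -1);
    if (audio_dev_mix < 0) {
        unregister_sound_dsp(audio_dev_dsp);
        return audio_dev_mix;
    }
    return 0;
}

static void __exit my_oss_exit(void)
{
    unregister_sound_mixer(audio_dev_mix);
    unregister_sound_dsp(audio_dev_dsp);
}

module_init(my_oss_init);
module_exit(my_oss_exit);
MODULE_LICENSE("GPL");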

Figure: Linux OSS driver structure.

3.4 OSS User-Space Programming

1. DSP Programming

Generally, operating the DSP interface involves the following steps (a small playback sketch combining them follows the list):

① Open the device file /dev/dsp

The mode in which the sound card will be used must be specified when the device is opened. Sound cards that do not support full duplex should be opened read-only or write-only; only full-duplex cards can be opened in read/write mode, and this depends on the specific driver implementation. Linux allows an application to open and close the sound card's device file repeatedly, which makes it easy to switch between the playback state and the recording state.

② If necessary, set the buffer size

The sound card driver in the Linux kernel maintains a buffer whose size affects playback and recording quality; it can be adjusted with the ioctl() system call. Adjusting the driver's buffer size is not mandatory; if there are no special requirements, the default size can be used. If you do want to set it, this should normally be done right after opening the device file, because other operations on the sound card may prevent the driver from changing the buffer size afterwards.

③ Set the number of channels

Depending on the hardware and driver, this can be set to mono (single-channel) or stereo.

④ Set the sampling format and sampling frequency

Available sample formats include AFMT_U8 (unsigned 8-bit), AFMT_S8 (signed 8-bit), AFMT_U16_LE (unsigned 16-bit, little-endian), AFMT_U16_BE (unsigned 16-bit, big-endian), AFMT_MPEG, AFMT_AC3, and so on. The sample format is set with the SNDCTL_DSP_SETFMT I/O control command.
For most sound cards the supported sampling frequencies range from 5 kHz to 44.1 kHz or 48 kHz, but that does not mean every frequency in this range is supported by the hardware. The sampling frequencies most commonly used in Linux audio programming are 11025 Hz, 16000 Hz, 22050 Hz, 32000 Hz, and 44100 Hz. The sampling frequency is set with the SNDCTL_DSP_SPEED I/O control command.

⑤ Read from or write to /dev/dsp to record or play sound
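The sketch below ties these steps together for playback (a minimal example under common OSS assumptions, not code from the original article; it writes one second of silence at 16-bit, stereo, 44.1 kHz):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);          /* step 1: open for playback only */
    if (fd < 0) { perror("open /dev/dsp"); return 1; }

    int format = AFMT_S16_LE;                     /* step 4a: sample format         */
    int channels = 2;                             /* step 3: stereo                 */
    int rate = 44100;                             /* step 4b: sampling frequency    */

    if (ioctl(fd, SNDCTL_DSP_SETFMT, &format) == -1 ||
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels) == -1 ||
        ioctl(fd, SNDCTL_DSP_SPEED, &rate) == -1) {
        perror("ioctl");
        close(fd);
        return 1;
    }

    /* step 5: write one second of samples (silence here); write() blocks
       whenever the driver's DMA buffer is full */
    int bytes = rate * channels * 2;              /* 2 bytes per 16-bit sample */
    char *buf = calloc(1, bytes);
    if (buf)
        write(fd, buf, bytes);

    free(buf);
    close(fd);
    return 0;
}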

2. Mixer Programming

The sound card mixer consists of multiple sound mixing channels, which can be programmed through the device file /dev/mixer provided by the driver.

Adjusting the sound card's input and output gain is one of the mixer's main functions. Most sound cards currently use 8-bit or 16-bit gain controllers, but the sound card driver converts these to percentages, so the values always range from 0 to 100.
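As an illustration (a minimal sketch, not from the original article), the playback volume channel can be read and set through /dev/mixer like this; the left and right channel percentages are packed into the two low bytes of the int:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int main(void)
{
    int fd = open("/dev/mixer", O_RDWR);
    if (fd < 0) { perror("open /dev/mixer"); return 1; }

    int vol;
    if (ioctl(fd, SOUND_MIXER_READ_VOLUME, &vol) == 0)
        printf("current volume: left=%d%% right=%d%%\n",
               vol & 0xff, (vol >> 8) & 0xff);

    vol = 75 | (75 << 8);                     /* 75% on both channels */
    if (ioctl(fd, SOUND_MIXER_WRITE_VOLUME, &vol) == -1)
        perror("SOUND_MIXER_WRITE_VOLUME");

    close(fd);
    return 0;
}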
