Linux Audio Programming Guide

Transferred from: http://www.ibm.com/developerworks/cn/linux/l-audio/

Although Linux is known mainly for its strength in network services, it also offers rich multimedia capabilities. Taking sound, the most basic element of multimedia applications, as its subject, this article explains how to develop practical audio applications on the Linux platform and presents some common audio programming frameworks.

Xiao Wenpeng ([email protected]), free software enthusiast

February 01, 2004

1. Digital Audio

An audio signal is a continuously varying analog signal, but a computer can only process and record binary digital signals. An audio signal from a natural sound source must therefore be converted into a digital audio signal before it can be sent to the computer for further processing.

A digital audio system reproduces the original sound by converting the waveform of the sound wave into a sequence of binary data; the device that performs this conversion is commonly called an analog-to-digital (A/D) converter. The A/D converter samples the sound wave tens of thousands of times per second. Each sample records the state of the original analog sound at a particular instant and is called a sample, and the number of samples taken per second is called the sampling frequency; by connecting a series of consecutive samples, the computer can describe a sound. For each sample, the digital audio system allocates a certain number of storage bits to record the amplitude of the sound wave; this is commonly referred to as the sampling resolution or sampling precision, and the higher the precision, the more faithfully the sound is reproduced.

Digital audio involves many concepts, but the most important thing for a programmer doing audio programming under Linux is to understand the two key steps of sound digitization: sampling and quantization. Sampling takes the amplitude of the sound signal at a given moment, while quantization converts the sampled amplitude into a digital value; in essence, sampling digitizes time and quantization digitizes amplitude. The following technical indicators come up frequently in audio programming:

    1. Sampling frequency
      The sampling frequency is the number of times per second that the amplitude of the analog sound waveform is sampled during digitization. The choice of sampling frequency follows the Nyquist (Harry Nyquist) sampling theorem: when an analog signal is sampled, the highest signal frequency that can be reconstructed afterward is only half of the sampling frequency; in other words, as long as the sampling frequency is at least twice the highest frequency in the input signal, the original signal can be reconstructed from the series of samples. The range of normal human hearing is roughly 20 Hz to 20 kHz, so according to the Nyquist theorem the sampling frequency should be around 40 kHz to keep the sound free of distortion. Commonly used audio sampling frequencies are 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, and so on; using a higher sampling frequency can reach DVD sound quality.
    2. Quantization bits
      The number of quantization bits is the resolution used to digitize the amplitude of the analog audio signal; it determines the dynamic range of the digitized signal, with 8-bit, 12-bit, and 16-bit being common. The more quantization bits, the larger the dynamic range of the signal and the closer the digitized audio is to the original, but the more storage space is required.
    3. Number of channels
      The number of channels is another important factor in digital audio quality; the usual choices are mono and two-channel. Two-channel, also known as stereo, uses two lines in hardware and offers better sound quality and timbre than mono, but it doubles the amount of storage space needed after digitization. (A short calculation of the resulting data rate follows this list.)
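
To make these figures concrete, the data rate of uncompressed digital audio can be computed directly from the three indicators above. The small helper below is purely illustrative and is not part of the original article:

/* Illustrative helper: bytes of audio data produced per second.
   bytes per second = sampling frequency * (quantization bits / 8) * channels */
unsigned int bytes_per_second(unsigned int rate, unsigned int bits, unsigned int channels)
{
    return rate * (bits / 8) * channels;
}
/* Example: CD quality is 44100 Hz, 16 bits, 2 channels:
   44100 * 2 * 2 = 176400 bytes per second, roughly 10 MB per minute. */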

2. The Sound Card Driver

For security reasons, applications under Linux cannot operate on a hardware device such as the sound card directly; all access must go through a driver supplied by the kernel. In essence, audio programming on Linux means using a driver to perform the various operations on the sound card.

Controlling hardware involves manipulating individual bits in registers. This work is device-specific and has strict timing requirements, so leaving it to application programmers would make sound card programming extremely complex and difficult. The role of the driver is to hide these low-level hardware details and thereby simplify application development. There are two main families of sound card drivers in common use under Linux: OSS and ALSA.

The first audio programming interface to appear on Linux was OSS (Open Sound System). It consists of a complete set of kernel driver modules that provide a unified programming interface for most sound cards. OSS has a relatively long history: some of its kernel modules (OSS/Free) are distributed free of charge with the Linux kernel sources, while others are provided in binary form by 4Front Technologies. With this commercial backing, OSS became the de facto standard for audio programming under Linux, and applications written for OSS work well on most sound cards.

Although OSS is very mature, it is a commercial product whose source code is not fully open. ALSA (Advanced Linux Sound Architecture) fills this gap and is the other driver available for audio programming under Linux. In addition to a set of kernel driver modules like those of OSS, ALSA provides a function library designed specifically to simplify application development; compared with the raw ioctl-based interface offered by OSS, the ALSA library is more convenient to use. The main features of ALSA are:

    • Supports multiple sound card devices
    • Modular Kernel Drivers
    • Supports SMP and multithreading
    • Provides library of application development functions
    • Compatible with OSS applications

The biggest difference between ALSA and OSS is that ALSA is a free project maintained by volunteers, while OSS is a commercial product. OSS therefore adapts to hardware somewhat better and supports more kinds of sound cards. ALSA, although not yet as widely used as OSS, offers a friendlier programming interface and is fully compatible with OSS applications, which undoubtedly makes it the better choice for application programmers.
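
As a point of comparison with the ioctl-based OSS interface used in the rest of this article, the sketch below shows roughly what playback setup looks like through the ALSA library (alsa-lib). The device name "default" and the parameters are illustrative, error handling is minimal, and the convenience call snd_pcm_set_params() belongs to more recent versions of the library:

#include <alsa/asoundlib.h>   /* link with -lasound */

/* Minimal ALSA playback sketch (illustrative only). */
int play_with_alsa(const unsigned char *buf, size_t frames)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return -1;
    /* 8-bit unsigned, mono, 8000 Hz, up to 0.5 s of buffering */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_U8,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, 8000, 1, 500000) < 0) {
        snd_pcm_close(pcm);
        return -1;
    }
    snd_pcm_writei(pcm, buf, frames);   /* write interleaved sample frames */
    snd_pcm_drain(pcm);                 /* wait until playback has finished */
    snd_pcm_close(pcm);
    return 0;
}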

3. The Programming Interface

Knowing how to operate the various audio devices is the key to audio programming on Linux. Through a small set of system calls provided by the kernel, an application can access the audio device interfaces exposed by the sound card driver; this is the simplest and most direct way to do audio programming under Linux.

3.1 Accessing Audio Devices

Both OSS and ALSA run as drivers in Linux kernel space, so an application that wants to reach the sound card hardware must rely on system calls provided by the Linux kernel. From the programmer's point of view, operating the sound card is largely the same as operating a disk file: first use the open system call to establish a connection with the hardware, and the returned file descriptor identifies the device in all subsequent operations; then use the read system call to receive data from the device, or the write system call to send data to it; any operation that does not fit the basic read/write pattern is performed through the ioctl system call; finally, the close system call tells the Linux kernel that the device needs no further handling. (A minimal playback sketch that strings these calls together follows the list below.)

  • The open system call
    An application gains access to the sound card by calling open, which also prepares for the subsequent system calls. Its prototype is:
    int open (const char *pathname, int flags, int mode);

    The parameter pathname is the name of the device file to open, generally /dev/dsp for the sound card. The parameter flags indicates how the device file should be opened and can be O_RDONLY, O_WRONLY, or O_RDWR, which open the file read-only, write-only, or read-write respectively. The parameter mode is normally optional; it is used only when the specified device file does not exist, to indicate what permissions the newly created file should have.
    If the open system call succeeds, it returns a positive integer as the file descriptor, which is needed by all subsequent system calls. If it fails, it returns -1 and sets the global variable errno to indicate the cause of the error.
  • The read system call
    The read system call reads data from the sound card. Its prototype is:
    int read (int fd, char *buf, size_t count);

    The parameter fd is the descriptor of the device file, obtained from the earlier open call; buf is a pointer to the buffer that will hold the data obtained from the sound card; and count limits the maximum number of bytes to read. If the read system call succeeds, it returns the number of bytes actually read from the sound card, which is usually no more than count; if it fails, it returns -1 and sets the global variable errno to indicate the cause of the error.
  • The write system call
    The write system call writes data to the sound card. Its prototype is:
    size_t write (int fd, const char *buf, size_t count);

    write is largely similar to read, except that it transfers data to the sound card rather than from it. The parameter fd is again the descriptor of the device file obtained from open; buf points to the buffer holding the data about to be written; and count limits the maximum number of bytes written to the sound card.
    If the write system call succeeds, it returns the number of bytes actually written to the sound card; if it fails, it returns -1 and sets the global variable errno to indicate the cause of the error. Whether reading or writing, once the call is made the Linux kernel blocks the application until the data has been read from or written to the sound card.
  • The ioctl system call
    The ioctl system call controls the sound card. In general, any operation on a device file that does not fit the basic read/write pattern is performed through ioctl; it can change the behavior of the device or report its state. Its prototype is:
    int ioctl (int fd, int request, ...);

    The parameter fd is the descriptor of the device file, obtained when the device was opened. For a complex device there may be many different control requests, and the purpose of the parameter request is to distinguish among them. Controlling a device generally requires additional arguments, which depend on the particular control request and may be directly related to the hardware.
  • The close system call
    When the application has finished using the sound card, it must close the device with the close system call in order to release the hardware resources it occupies. Its prototype is:
    int close (int fd);

    The parameter fd is the descriptor of the device file, obtained when the device was opened. Once the application calls close, the Linux kernel releases the resources associated with the device, so it is recommended to close an open device as soon as it is no longer needed.
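
Putting these five calls together, a minimal playback skeleton might look like the following. This is only a sketch: it assumes the data in buf is already in the sound card's current sample format and it omits the format-setting ioctl calls described in section 4.

#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/soundcard.h>

/* Sketch: open /dev/dsp, write raw samples, wait for playback, close. */
int play_raw(const unsigned char *buf, size_t len)
{
    int fd = open("/dev/dsp", O_WRONLY);   /* playback only */
    if (fd == -1)
        return -1;
    ssize_t n = write(fd, buf, len);       /* blocks until the data is accepted */
    ioctl(fd, SNDCTL_DSP_SYNC, 0);         /* wait for playback to drain */
    close(fd);                             /* release the sound card */
    return (n == (ssize_t)len) ? 0 : -1;
}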
3.2 Audio Device files

For Linux application programmers, the audio programming interface is in fact a set of audio device files, through which data can be read from or written to the sound card and through which the sound card can be controlled, for example to set the sampling frequency and the number of channels.

  • /dev/sndstat
    The device file /dev/sndstat is the simplest interface provided by the sound card driver. It is usually read-only and is limited to reporting the current state of the sound card. Generally speaking, /dev/sndstat is intended for end users to check the sound card and is of little use to programs, because all of its information can also be obtained through ioctl system calls. With the cat command it is easy to print the current state of the sound card:
    $ cat /dev/sndstat
  • /dev/dsp

    The /dev/dsp device file provided by the sound card driver is used for digital sampling and digital recording, and it is of central importance for audio programming under Linux: writing data to this device activates the D/A converter on the sound card for playback, while reading data from the device activates the A/D converter for recording. Many sound cards now provide several digital sampling devices, which can be accessed under Linux through device files such as /dev/dsp1.

    DSP is short for digital signal processor, a special-purpose chip for processing digital signals; the sound card uses it to convert between analog and digital signals. The DSP device on the sound card really consists of two parts: opened read-only it uses the A/D converter for sound input, and opened write-only it uses the D/A converter for sound output. Strictly speaking, an application under Linux should open /dev/dsp either read-only for sound input or write-only for sound output, but in practice some sound card drivers also allow /dev/dsp to be opened read-write for simultaneous input and output, which is critical for applications such as IP telephony.

    When reading from the DSP device, the analog signal entering the sound card is converted into digital samples and stored in a kernel buffer inside the sound card driver; when the application reads data from the sound card through the read system call, the samples held in the kernel buffer are copied into the user buffer specified by the application. Note that the sampling frequency of the sound card is determined by the driver in the kernel, not by how fast the application reads data from it. If the application reads data too slowly, the sampling frequency is not lowered; the excess data is simply discarded. If the application reads data faster than the sound card can sample it, the driver blocks the application until new data arrives.

    When writing data to the DSP device, the digital samples pass through the D/A converter, become an analog signal, and produce sound. The speed at which the application writes data should likewise match the sampling frequency of the sound card: if it writes too slowly there will be pauses or gaps in the sound, and if it writes too quickly it is blocked by the driver in the kernel until the hardware is able to process new data. Unlike most other devices, the sound card usually does not support non-blocking I/O.

    Whether data is read from the sound card or written to it, it always has a specific format. The default is 8-bit unsigned, mono, at an 8 kHz sampling rate; if this does not meet the application's needs, it can be changed through ioctl system calls. In general, after opening /dev/dsp an application should set it to the appropriate format before reading data from or writing data to the sound card. (A short recording sketch follows this entry.)
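
    As a sketch of the read direction, the loop below records raw samples from /dev/dsp into a file. The file name, chunk size, and duration are arbitrary, and the device is assumed to still be at its default format of 8-bit unsigned, mono, 8 kHz (so 8000 bytes correspond to one second):

    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>

    /* Illustrative recording loop: append raw samples from /dev/dsp to a file. */
    int record_raw(const char *path, int seconds)
    {
        unsigned char chunk[4096];
        int dsp = open("/dev/dsp", O_RDONLY);        /* read side: A/D converter */
        FILE *out = fopen(path, "wb");
        if (dsp == -1 || out == NULL) {
            if (dsp != -1) close(dsp);
            return -1;
        }
        long remaining = (long)seconds * 8000;       /* default rate: 8000 bytes/s */
        while (remaining > 0) {
            ssize_t n = read(dsp, chunk, sizeof(chunk));   /* blocks until data arrives */
            if (n <= 0)
                break;
            fwrite(chunk, 1, (size_t)n, out);
            remaining -= n;
        }
        fclose(out);
        close(dsp);
        return 0;
    }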

  • /dev/audio
    /dev/audio is similar to /dev/dsp; it exists for compatibility with the audio devices on Sun workstations and uses mu-law encoding. If the sound card driver supports /dev/audio, the cat command can be used on Linux to play an audio file that was mu-law encoded on a Sun workstation:
    $ cat audio.au > /dev/audio

    Because the device file /dev/audio exists mainly for compatibility reasons, it is best not to use it in newly developed applications; use /dev/dsp instead. An application may use only one of /dev/audio and /dev/dsp at a time, because they are different software interfaces to the same hardware.
  • /dev/mixer
    In the hardware of the sound card, the mixer is a very important component; its role is to combine or superimpose several signals, and what exactly the mixer does may vary from one sound card to another. A sound card driver running in the Linux kernel usually provides the /dev/mixer device file, which is the software interface through which applications operate on the mixer. A mixer circuit usually has two parts: the input mixer and the output mixer.
    The input mixer is responsible for receiving analog signals from several different sources, sometimes called mixing channels or mixing devices. The analog signals are level-adjusted in their individual mixing channels by software-controlled gain controllers and volume regulators before being sent to the input mixer for mixing. Electronic switches on the mixer control which channels actually feed a signal into it; some sound cards allow only one mixing channel at a time to serve as the recording source, while others allow any combination of channels to be connected. The signals processed by the input mixer are still analog; they are then sent to the A/D converter for digitization.
    The output mixer works in the same way as the input mixer: several signal sources are connected to it, each with its gain adjusted beforehand. After the output mixer has combined all the analog signals, there is usually a master gain control that sets the overall output volume, along with tone controls that adjust the tone of the output. The signals processed by the output mixer are also analog; they are finally sent to the loudspeakers or to another analog output device. Programming the mixer means setting the levels of the gain controllers and switching between the different sources; these operations are intermittent and, unlike recording or playback, do not consume large amounts of computer resources. Because mixer operations do not fit the typical read/write pattern, apart from open and close almost everything is done through the ioctl system call. Unlike /dev/dsp, /dev/mixer allows several applications to access it at the same time, and the mixer's settings remain in effect until the corresponding device file is closed.
    To simplify application design, most sound card drivers under Linux also accept the mixer ioctl operations on the sound device itself: if /dev/dsp is already open, there is no need to open /dev/mixer as well, and the file descriptor obtained when opening /dev/dsp can be used to set the mixer directly, as in the sketch following this entry.
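
    A short sketch of that shortcut, assuming dspfd is the descriptor returned when /dev/dsp was opened and that the driver supports mixer ioctl calls on it:

    int vol;
    if (ioctl(dspfd, SOUND_MIXER_READ(SOUND_MIXER_VOLUME), &vol) == 0)
        printf("master volume: left %d%%, right %d%%\n",
               vol & 0xff, (vol >> 8) & 0xff);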
  • /dev/sequencer
    Most sound card drivers currently also provide /dev/sequencer. This device file is used to operate the wavetable synthesizer built into the sound card, or to control instruments on the MIDI bus, and it is generally needed only by computer music software.

4. The Application Framework

When programming audio under Linux, the emphasis is on how to use the various device files provided by the sound card driver correctly. Because quite a few concepts and factors are involved, following a common framework will undoubtedly help simplify the design of the application.

4.1 DSP Programming

The first thing to do when programming the sound card is to open the corresponding hardware device. This is done with the open system call, normally on the /dev/dsp device file. The mode in which the sound card will be used must also be specified when the device is opened: a sound card that does not support full duplex should be opened read-only or write-only, and only a full-duplex sound card can be opened read-write, which also depends on the specific implementation of the driver. Linux lets applications open and close the sound card device files any number of times, making it easy to switch between playback and recording. It is recommended to open the device file read-only or write-only whenever possible in audio programming, since this not only makes full use of the sound card hardware but also makes it easier for the driver to be optimized. The following code demonstrates how to open the sound card write-only for playback:

int handle = open("/dev/dsp", O_WRONLY);
if (handle == -1) {
    perror("open /dev/dsp");
    return -1;
}

The sound card driver running in the Linux kernel maintains a buffer whose size affects playback and recording quality; the ioctl system call can be used to set this size appropriately. Adjusting the driver's buffer size is not mandatory, and if there is no special requirement the default size will do. Note, however, that setting the buffer size should normally be done immediately after the device file has been opened, because other operations on the sound card may leave the driver unable to change its buffer size any more. The following code demonstrates how to set the size of the kernel buffer in the sound card driver:

int setting = 0xNNNNSSSS;   /* placeholder: see the encoding described below */
int result = ioctl(handle, SNDCTL_DSP_SETFRAGMENT, &setting);
if (result == -1) {
    perror("ioctl buffer size");
    return -1;
}
/* check the value actually set by the driver */

When setting the buffer size, the argument setting actually consists of two parts. The low 16 bits give the size of each fragment according to the formula buffer_size = 2^SSSS; if, for example, the low 16 bits have the value 16, the fragment size is set to 65536 bytes. The high 16 bits of setting give the maximum number of fragments, ranging from 2 up to 0x7fff, where 0x7fff means no limit at all.
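
As a worked example of this encoding (illustrative, not from the original article), requesting fragments of 4096 bytes (2^12) and at most two of them gives SSSS = 12 and a high word of 2:

/* Request fragments of 2^12 = 4096 bytes, with at most 2 fragments. */
int setting = (2 << 16) | 12;   /* equivalent to 0x0002000C */
if (ioctl(handle, SNDCTL_DSP_SETFRAGMENT, &setting) == -1)
    perror("ioctl SNDCTL_DSP_SETFRAGMENT");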

The next thing to set is the number of channels the sound card will use. Depending on the hardware and the driver, this can be 0 (mono) or 1 (stereo). The following code demonstrates how to set the number of channels:

int channels = 0;   /* 0 = mono, 1 = stereo */
int result = ioctl(handle, SNDCTL_DSP_STEREO, &channels);
if (result == -1) {
    perror("ioctl channel number");
    return -1;
}
if (channels != 0) {
    /* only stereo is supported */
}

The sample format and sampling frequency are the next issues to consider in audio programming. All of the sample formats supported by the sound card are listed in the header file soundcard.h, and the current format can easily be changed through the ioctl system call. The following code demonstrates how to set the sample format of the sound card:

int format = AFMT_U8;
int result = ioctl(handle, SNDCTL_DSP_SETFMT, &format);
if (result == -1) {
    perror("ioctl sample format");
    return -1;
}
/* check the value actually set by the driver */

Setting the sampling frequency is just as easy: simply call ioctl with SNDCTL_DSP_SPEED as the second argument and pass the desired sampling frequency in the third. For most sound cards the usable range is roughly 5 kHz to 44.1 kHz or 48 kHz, but that does not mean every frequency within the range is supported by the hardware; the sampling frequencies most commonly used in audio programming under Linux are 11025 Hz, 16000 Hz, 22050 Hz, 32000 Hz, and 44100 Hz. The following code demonstrates how to set the sampling frequency of the sound card:

int rate = 22050;
int result = ioctl(handle, SNDCTL_DSP_SPEED, &rate);
if (result == -1) {
    perror("ioctl sample rate");
    return -1;
}
/* check the value actually set by the driver */
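
Because the driver may round the requested frequency to the nearest value the hardware supports, it is prudent to compare the value written back into rate with the value that was requested; a small sketch:

int requested = 22050;
int rate = requested;
if (ioctl(handle, SNDCTL_DSP_SPEED, &rate) == -1) {
    perror("ioctl sample rate");
    return -1;
}
if (rate != requested)
    fprintf(stderr, "driver chose %d Hz instead of %d Hz\n", rate, requested);
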
4.2 Mixer Programming

The mixer on the sound card contains multiple mixing channels, which are programmed through the /dev/mixer device file provided by the driver. Mixer operations are carried out with the ioctl system call, and all of the control commands begin with SOUND_MIXER or MIXER. Table 1 lists several mixer control commands in common use:

Name                   Function
SOUND_MIXER_VOLUME     Master volume
SOUND_MIXER_BASS       Bass control
SOUND_MIXER_TREBLE     Treble control
SOUND_MIXER_SYNTH      FM synthesizer
SOUND_MIXER_PCM        Main D/A converter
SOUND_MIXER_SPEAKER    PC speaker
SOUND_MIXER_LINE       Audio line input
SOUND_MIXER_MIC        Microphone input
SOUND_MIXER_CD         CD input
SOUND_MIXER_IMIX       Playback volume
SOUND_MIXER_ALTPCM     Secondary D/A converter
SOUND_MIXER_RECLEV     Recording level
SOUND_MIXER_IGAIN      Input gain
SOUND_MIXER_OGAIN      Output gain
SOUND_MIXER_LINE1      First input of the sound card
SOUND_MIXER_LINE2      Second input of the sound card
SOUND_MIXER_LINE3      Third input of the sound card

Table 1. Commonly used mixer control commands

Adjusting the input and output gains of the sound card is one of the mixer's main functions. Most current sound cards use 8-bit or 16-bit gain controllers, but the programmer does not need to care about this, because the sound card driver converts everything to percentages; that is, both input gain and output gain range from 0 to 100. When programming the mixer, the SOUND_MIXER_READ macro can be used to read the gain of a mixing channel; for example, to get the input gain of the microphone:

int vol;
ioctl(fd, SOUND_MIXER_READ(SOUND_MIXER_MIC), &vol);
printf("Mic gain is at %d%%\n", vol);

For a mono device with a single mixing channel, the returned gain is held in the low byte. For a two-channel device that supports multiple mixing channels, the returned value actually consists of two parts representing the left and right channels: the low byte holds the volume of the left channel and the high byte holds the volume of the right channel. The following code extracts the gains of the left and right channels from the returned value:

int left, right;
left  = vol & 0xff;
right = (vol & 0xff00) >> 8;
printf("Left gain is %d%%, right gain is %d%%\n", left, right);

Similarly, setting the gain of a mixing channel is done with the SOUND_MIXER_WRITE macro, which is used in essentially the same way as when reading the gain; for example, the following statement sets the microphone's input gain:

vol = (right << 8) + left;
ioctl(fd, SOUND_MIXER_WRITE(SOUND_MIXER_MIC), &vol);

When writing a practical audio program, the mixer deserves attention for compatibility reasons, because the mixer resources offered by different sound cards differ. The sound card driver provides several ioctl calls for obtaining information about the mixer. They usually return an integer bitmask in which each bit represents one mixing channel; if the corresponding bit is 1, that mixing channel is available. For example, the bitmask returned by SOUND_MIXER_READ_DEVMASK tells which mixing channels the sound card supports, and the bitmask returned by SOUND_MIXER_READ_RECMASK tells which channels can be used as recording sources. The following code checks whether the CD input is a valid mixing channel:

ioctl(fd, SOUND_MIXER_READ_DEVMASK, &devmask);
if (devmask & (1 << SOUND_MIXER_CD))
    printf("The CD input is supported\n");

If you further want to know whether it is a valid recording source, you can use the following statement:

ioctl(fd, SOUND_MIXER_READ_RECMASK, &recmask);
if (recmask & (1 << SOUND_MIXER_CD))
    printf("The CD input can be a recording source\n");

Most sound cards today provide several recording sources. SOUND_MIXER_READ_RECSRC reports which recording sources are currently active, and how many may be active at the same time is determined by the sound card hardware. Similarly, SOUND_MIXER_WRITE_RECSRC selects the recording source the sound card will use; for example, the following code makes the CD input the recording source of the sound card:

int recsrc = 1 << SOUND_MIXER_CD;
ioctl(fd, SOUND_MIXER_WRITE_RECSRC, &recsrc);

In addition, mixing channels can be either mono or stereo. If you need to know which mixing channels support stereo, the bitmask returned by SOUND_MIXER_READ_STEREODEVS will tell you.
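
A short sketch of that check, again using the CD channel as the example:

int stereodevs;
ioctl(fd, SOUND_MIXER_READ_STEREODEVS, &stereodevs);
if (stereodevs & (1 << SOUND_MIXER_CD))
    printf("The CD channel supports stereo\n");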

4.3 Audio Recording and Playback Framework

Below is a basic framework for recording and playing back sound with the sound card's DSP device. It records a few seconds of audio data into a memory buffer and then plays it back, and all of its work is done by reading and writing the /dev/dsp device file:

/*
 * sound.c
 */
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <stdlib.h>
#include <stdio.h>
#include <linux/soundcard.h>

#define LENGTH   3      /* seconds of audio to store */
#define RATE     8000   /* sampling frequency */
#define SIZE     8      /* quantization bits */
#define CHANNELS 1      /* number of channels */

/* memory buffer holding the digitized audio data */
unsigned char buf[LENGTH*RATE*SIZE*CHANNELS/8];

int main()
{
  int fd;        /* file descriptor of the sound device */
  int arg;       /* argument for ioctl calls */
  int status;    /* return value of system calls */

  /* open the sound device */
  fd = open("/dev/dsp", O_RDWR);
  if (fd < 0) {
    perror("open of /dev/dsp failed");
    exit(1);
  }

  /* set the number of quantization bits used when sampling */
  arg = SIZE;
  status = ioctl(fd, SOUND_PCM_WRITE_BITS, &arg);
  if (status == -1)
    perror("SOUND_PCM_WRITE_BITS ioctl failed");
  if (arg != SIZE)
    perror("unable to set sample size");

  /* set the number of channels used when sampling */
  arg = CHANNELS;
  status = ioctl(fd, SOUND_PCM_WRITE_CHANNELS, &arg);
  if (status == -1)
    perror("SOUND_PCM_WRITE_CHANNELS ioctl failed");
  if (arg != CHANNELS)
    perror("unable to set number of channels");

  /* set the sampling frequency */
  arg = RATE;
  status = ioctl(fd, SOUND_PCM_WRITE_RATE, &arg);
  if (status == -1)
    perror("SOUND_PCM_WRITE_RATE ioctl failed");

  /* loop until Control-C is pressed */
  while (1) {
    printf("Say something:\n");
    status = read(fd, buf, sizeof(buf));      /* record */
    if (status != sizeof(buf))
      perror("read wrong number of bytes");
    printf("You said:\n");
    status = write(fd, buf, sizeof(buf));     /* play back */
    if (status != sizeof(buf))
      perror("wrote wrong number of bytes");
    /* wait for playback to finish before recording again */
    status = ioctl(fd, SOUND_PCM_SYNC, 0);
    if (status == -1)
      perror("SOUND_PCM_SYNC ioctl failed");
  }
}
4.4 Mixer Framework

Below is a basic framework for programming the mixer. It can be used to adjust the gain of the various mixing channels, and all of its work is done by reading and writing the /dev/mixer device file:

/*
 * mixer.c
 */
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>              /* for strcmp() */
#include <sys/ioctl.h>
#include <fcntl.h>
#include <linux/soundcard.h>

/* names of all the available mixer devices */
const char *sound_device_names[] = SOUND_DEVICE_NAMES;

int fd;                          /* file descriptor of the mixer device */
int devmask, stereodevs;         /* bit masks describing the mixer */
char *name;

/* show how the command is used and list all available mixer devices */
void usage()
{
  int i;

  fprintf(stderr, "usage: %s <device> <left-gain%%> <right-gain%%>\n"
                  "       %s <device> <gain%%>\n\n"
                  "where <device> is one of:\n", name, name);
  for (i = 0; i < SOUND_MIXER_NRDEVICES; i++)
    if ((1 << i) & devmask)      /* only show valid mixer devices */
      fprintf(stderr, "%s ", sound_device_names[i]);
  fprintf(stderr, "\n");
  exit(1);
}

int main(int argc, char *argv[])
{
  int left, right, level;        /* gain settings */
  int status;                    /* return value of system calls */
  int device;                    /* the mixer device selected */
  char *dev;                     /* name of the mixer device */
  int i;

  name = argv[0];

  /* open the mixer device read-only */
  fd = open("/dev/mixer", O_RDONLY);
  if (fd == -1) {
    perror("unable to open /dev/mixer");
    exit(1);
  }

  /* get the information we need from the mixer */
  status = ioctl(fd, SOUND_MIXER_READ_DEVMASK, &devmask);
  if (status == -1)
    perror("SOUND_MIXER_READ_DEVMASK ioctl failed");
  status = ioctl(fd, SOUND_MIXER_READ_STEREODEVS, &stereodevs);
  if (status == -1)
    perror("SOUND_MIXER_READ_STEREODEVS ioctl failed");

  /* check the user's input */
  if (argc != 3 && argc != 4)
    usage();

  /* save the mixer device name given by the user */
  dev = argv[1];

  /* determine which mixer device to use */
  for (i = 0; i < SOUND_MIXER_NRDEVICES; i++)
    if (((1 << i) & devmask) && !strcmp(dev, sound_device_names[i]))
      break;
  if (i == SOUND_MIXER_NRDEVICES) {     /* no match was found */
    fprintf(stderr, "%s is not a valid mixer device\n", dev);
    usage();
  }

  /* a valid mixer device was found */
  device = i;

  /* get the gain values */
  if (argc == 4) {
    /* both left and right gains were given */
    left  = atoi(argv[2]);
    right = atoi(argv[3]);
  } else {
    /* the same gain is used for left and right */
    left  = atoi(argv[2]);
    right = atoi(argv[2]);
  }

  /* warn the user if the device is not a stereo device */
  if ((left != right) && !((1 << i) & stereodevs)) {
    fprintf(stderr, "warning: %s is not a stereo device\n", dev);
  }

  /* pack the two channel values into the same variable */
  level = (right << 8) + left;

  /* set the gain */
  status = ioctl(fd, MIXER_WRITE(device), &level);
  if (status == -1) {
    perror("MIXER_WRITE ioctl failed");
    exit(1);
  }

  /* unpack the left and right gains returned by the driver */
  left  = level & 0xff;
  right = (level & 0xff00) >> 8;

  /* display the gain that was actually set */
  fprintf(stderr, "%s gain set to %d%% / %d%%\n", dev, left, right);

  /* close the mixer device */
  close(fd);
  return 0;
}

After compiling the program above, run it without arguments to list all of the mixing channels available on the sound card:

$ ./mixer
usage: ./mixer <device> <left-gain%> <right-gain%>
       ./mixer <device> <gain%>
where <device> is one of:
vol pcm speaker line mic cd igain line1 phin video

The gain of each mixing channel can then be set easily; for example, the following command sets the gains of the left and right channels of the CD input to 80% and 90% respectively:

$ ./mixer cd 80 90
cd gain set to 80% / 90%

5. Summary

As multimedia applications on the Linux platform continue to mature, digital audio will be used more and more widely. Although digital audio involves many concepts, basic audio programming under Linux is not particularly complex; the key is to learn how to interact with sound card drivers such as OSS or ALSA and how to make full use of the features they provide. Becoming familiar with a few basic audio programming frameworks and patterns, as shown in this article, is very helpful for beginners.

Resources
    • 1. OSS was the first sound card driver on Linux; http://www.opensound.com is its home site, where much OSS-related information can be found.
    • 2. ALSA is a widely used Linux sound card driver that also provides library functions to simplify the writing of audio programs; its official website, http://www.alsa-project.org/, offers a wealth of ALSA information as well as the latest drivers and tools.
    • 3. Ken C. Pohlmann (Sufi, trans.), Principles and Applications of Digital Audio (4th edition), Beijing: Electronic Industry Press, 2002.
    • 4. Zhongyu, Multimedia Technology and Its Applications, Beijing: Machinery Industry Press, 2003.
