A brief analysis of Android audio

1 Android Audio System framework


• Multimedia framework:

The multimedia framework is responsible for encapsulating the playback/recording classes, hooking in the Android audio decoding libraries for software decoding, and connecting to the Qualcomm OpenMAX IL interface. Upper-layer apps call the multimedia framework interfaces directly for audio playback and recording.

• AudioService

AudioService registers for Android broadcast events, receives events such as Bluetooth headset connect/disconnect and USB audio device plug/unplug, and calls the AudioSystem interfaces to apply the corresponding sound control.

• AudioSystem

This class contains the low-level audio definitions, including the audio stream (stream) types, routing types, audio device (device) definitions, audio device state (device states) declarations, and a number of audio interface functions.

• AudioFlinger (AF):

AudioFlinger is one of the two core Android audio services, the other being AudioPolicyService; both are instantiated by mediaserver (main_mediaserver.cpp). AudioFlinger communicates directly with the HAL layer, and all audio operations ultimately go through AudioFlinger. Its main responsibilities are:

1. Managing all audio input/output devices through the interfaces of the libaudio library;

2. Mixing, input, and output of PCM data;

3. Volume control during playback.

• AudioPolicyManager (APM):

APM mainly implements the audio stream routing strategy and handles user scenario switching; in the end it communicates with AF, and AF in turn talks to the HAL to apply the control. AudioPolicyService obtains the AF instance through an interface in AudioSystem, such as AudioSystem::get_audio_flinger(). Volume control likewise goes through AudioSystem interfaces (for example AudioSystem::setVoiceVolume(data->mVolume)), which ultimately call into AF to do the actual work.

• AudioTrack (AT):

The class used to play audio; it passes audio data to AF through shared memory managed by the audio_track_cblk_t structure.

• AudioRecord:

The class used for recording.
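As a rough illustration of how an application typically drives this class, here is a minimal capture sketch. The sample rate, channel configuration, and encoding are assumptions made for illustration only (they are not taken from this article), and the RECORD_AUDIO permission must be granted:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RecordSketch {
    // Minimal one-shot capture; the format values are illustrative assumptions.
    public static byte[] captureOnce() {
        int sampleRate = 44100;                           // assumed sample rate
        int channel = AudioFormat.CHANNEL_IN_MONO;        // assumed channel config
        int encoding = AudioFormat.ENCODING_PCM_16BIT;    // assumed sample format

        int minBuf = AudioRecord.getMinBufferSize(sampleRate, channel, encoding);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC, sampleRate, channel, encoding, minBuf);

        byte[] pcm = new byte[minBuf];
        recorder.startRecording();
        int read = recorder.read(pcm, 0, pcm.length);     // blocking read of raw PCM
        recorder.stop();
        recorder.release();                               // free the native resources
        return read > 0 ? pcm : new byte[0];              // raw PCM captured so far
    }
}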

2 Specific Analysis

2.1 AudioTrack (AT) analysis

Java layer: frameworks/base/media/java/android/media/AudioTrack.java

JNI layer: frameworks/base/core/jni/android_media_AudioTrack.cpp

Native library: frameworks/av/media/libmedia/AudioTrack.cpp

Relationship between the three files: the Java-layer AudioTrack.java calls native functions in the JNI-layer android_media_AudioTrack.cpp, and those JNI functions in turn call into the library AudioTrack.cpp to do the actual work.

In short, AudioTrack.java provides the class that applications use to play audio; it works together with the JNI and C++ files to carry out the actual playback.

AudioTrack.java mainly provides the following methods (a usage sketch follows the list):

getMinBufferSize(): calculates the minimum buffer size required, based on the sample rate, channel count, and sample bit depth (sampling precision), which correspond to its three parameters;

AudioTrack(): constructor; builds a playback instance;

play(): starts audio playback;

write(): writes audio data to the AudioTrack;

stop(): stops playback;

pause(): pauses playback;

release(): releases the underlying resources.
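Putting these methods together, a minimal stream-mode (MODE_STREAM, described below) playback sketch might look as follows. The format values and the pcmChunks data source are assumptions made for illustration, not something prescribed by this article:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PlaybackSketch {
    // Plays raw 16-bit PCM chunks in MODE_STREAM; the format values are assumptions.
    public static void playPcm(Iterable<byte[]> pcmChunks) {
        int sampleRate = 44100;                           // assumed sample rate
        int channel = AudioFormat.CHANNEL_OUT_STEREO;     // assumed channel config
        int encoding = AudioFormat.ENCODING_PCM_16BIT;    // assumed sample format

        // getMinBufferSize(): smallest legal bufferSizeInBytes for this format.
        int minBuf = AudioTrack.getMinBufferSize(sampleRate, channel, encoding);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
                sampleRate, channel, encoding, minBuf, AudioTrack.MODE_STREAM);

        track.play();                                     // start playback
        for (byte[] chunk : pcmChunks) {
            track.write(chunk, 0, chunk.length);          // feed PCM data continuously
        }
        track.stop();                                     // stop playback
        track.release();                                  // release the underlying resources
    }
}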

AudioTrack can play audio data in two ways: static mode (MODE_STATIC) and stream mode (MODE_STREAM).

• Static mode

In static mode, all of the data is handed to the receiver in one shot. The advantage is that it is simple and efficient: a single operation completes the data transfer. The obvious disadvantage is that it cannot handle large amounts of audio, so it is normally used only for ringtones, system notification sounds, and other clips small enough to fit in memory.
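A minimal static-mode sketch, assuming the whole clip is already available as a small PCM byte array (the format values are illustrative assumptions); note that in MODE_STATIC the data is written before play() is called:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class StaticPlaybackSketch {
    // MODE_STATIC: the entire buffer is written once, before play() is called.
    public static void playClip(byte[] pcm) {
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
                44100,                                    // assumed sample rate
                AudioFormat.CHANNEL_OUT_MONO,             // assumed channel config
                AudioFormat.ENCODING_PCM_16BIT,           // assumed sample format
                pcm.length,                               // buffer sized to hold the whole clip
                AudioTrack.MODE_STATIC);

        track.write(pcm, 0, pcm.length);                  // one-shot data transfer
        track.play();                                     // then start playback
        // call track.release() once playback has finished.
    }
}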

• Stream mode

Stream mode is similar to playing video over the network: data is passed to the receiver continuously according to certain rules. In theory it can be used for any audio playback scenario, but it is typically used when:

• the audio file is too large to fit in memory;

• the audio properties are demanding, for example a high sample rate or large bit depth;

• the audio data is generated in real time, which can only be played in stream mode.

The AudioTrack constructor takes a parameter, bufferSizeInBytes, that specifies the buffer size.

The native layer validates this value: it must be at least the value returned by getMinBufferSize(), and it must be an integer multiple of the frame size (frame size in bytes = number of channels * sample bits / 8).

For example, in MODE_STREAM, if the Java layer constructs the AudioTrack with bufferSizeInBytes set to 9600 and each write() in the native layer copies 320 bytes toward the hardware, it takes 30 copies before the sound card actually produces output; in other words, the data must fill the buffer before playback starts (320 * 30 = 9600).
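A small sketch of the arithmetic above; the stereo 16-bit format is an assumption used only to give the frame-size formula concrete numbers, while 9600 and 320 are the values from the example:

public class BufferMath {
    public static void main(String[] args) {
        int channels = 2;                                 // stereo (assumption)
        int sampleBits = 16;                              // 16-bit PCM (assumption)
        int frameSize = channels * sampleBits / 8;        // 2 * 16 / 8 = 4 bytes per frame

        int bufferSizeInBytes = 9600;                     // value from the example above
        int bytesPerWrite = 320;                          // value from the example above

        System.out.println("frame size = " + frameSize + " bytes");
        System.out.println("buffer is a multiple of the frame size: "
                + (bufferSizeInBytes % frameSize == 0));  // 9600 % 4 == 0 -> true
        System.out.println("writes needed to fill the buffer = "
                + (bufferSizeInBytes / bytesPerWrite));   // 9600 / 320 = 30
    }
}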

See also:

Android audio in depth, Part 1: AudioTrack analysis

http://blog.csdn.net/onetwothreef/article/details/46311985

A deep dive into the AudioTrack of Android audio

http://www.2cto.com/kf/201410/342659.html

Android audio system, Part 1: how AudioTrack exchanges audio data with AudioFlinger

http://blog.csdn.net/droidphone/article/details/5941344

2.2 AudioFlinger (AF)

AudioFlinger accesses the AudioHardwareInterface downward, implements PCM data mixing/input/output and volume adjustment, and is the bridge between the bottom layers and the Android framework.

File location: frameworks/av/services/audioflinger/AudioFlinger.cpp

See also:

Android audio system, Part 2: AudioFlinger

http://blog.csdn.net/onetwothreef/article/details/46311803

Android audio system, Part 1: how AudioTrack exchanges audio data with AudioFlinger

http://blog.csdn.net/onetwothreef/article/details/46311645

Android audio in depth, Part 2: AudioFlinger analysis

http://blog.csdn.net/onetwothreef/article/details/46311851

Useful commentary: see http://blog.csdn.net/droidphone/article/details/5951999

According to that commentary, when the phone runs two players at the same time while playing music, the two (different) pieces of music are two different AudioTracks attached to the same MixerThread, and their outputs are mixed together.

2.3 AudioPolicyService (APS)

AudioPolicyService mainly handles the following tasks:

    • The Java application layer accesses the services provided by AudioPolicyService through the IAudioPolicyService interface via JNI
    • The connection status of input devices
    • Switching of the system's audio strategy
    • Setting the volume and audio parameters

AudioPolicyManager

A large part of AudioPolicyService's management work is done in AudioPolicyManager, including volume management, audio strategy (policy) management, and input device management.

See also:

Android audio system, Part 3: AudioPolicyService and AudioPolicyManager

http://blog.csdn.net/onetwothreef/article/details/46311725

The AudioPolicyService of the Android audio system

http://blog.csdn.net/onetwothreef/article/details/46311457

Android source analysis: AudioPolicy

http://www.360doc.com/content/13/0815/14/11338643_307326622.shtml
