This article focuses on how to capture a single frame of audio data on the Android platform. Before reading it, I recommend my previous article, "Android Audio Development (1): Fundamentals", because audio development constantly draws on those basics; once you have mastered the key concepts, the many parameters and procedures below will be much easier to understand.
The Android SDK provides two audio capture APIs: MediaRecorder and AudioRecord. The former is a higher-level API that can take the audio input from the phone microphone, encode and compress it (to AMR, MP3, and so on), and save it directly to a file. The latter is closer to the bottom layer, allows freer and more flexible control, and delivers the raw PCM audio data frame by frame.
If you simply want to build a recorder that writes an audio file, MediaRecorder is recommended. If you need to run further algorithms on the audio, compress it with a third-party encoder library, or transmit it over the network, use AudioRecord instead. In fact, MediaRecorder itself calls into AudioRecord to interact with AudioFlinger in the Android framework layer.
Audio development is used far beyond local recording, so we need to focus on how to capture audio data with the more fundamental AudioRecord API (note that the captured data is in raw PCM format; to compress it to a format such as MP3 or AAC, you must additionally invoke an encoder).
1. AudioRecord Workflow
First, let's look at the AudioRecord workflow:
(1) Configure the parameters and initialize the internal audio buffer
(2) Start capturing
(3) Run a thread that continuously "reads" the audio data out of AudioRecord's buffer. This must be done promptly, otherwise an "overrun" error occurs. This error is quite common in audio development; it means the application layer did not take the audio data away in time, causing the internal audio buffer to overflow.
(4) Stop capturing and release resources
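The four steps above can be sketched as follows. This is my own minimal outline, not the author's class from section 4; error handling is omitted, and the parameter values are just the common defaults discussed below:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class CaptureSketch {
    public void capture() {
        // (1) Configure parameters and initialize the internal audio buffer
        int minBufSize = AudioRecord.getMinBufferSize(44100,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                44100, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBufSize);

        // (2) Start capturing
        record.startRecording();

        // (3) Read frames out of the buffer promptly (in a real app,
        //     this loop belongs in a dedicated thread with an exit flag)
        byte[] buffer = new byte[minBufSize];
        for (int i = 0; i < 10; i++) {  // read a few frames for illustration
            int read = record.read(buffer, 0, buffer.length);
            // ... hand buffer[0..read) to processing / encoding code ...
        }

        // (4) Stop capturing and release resources
        record.stop();
        record.release();
    }
}
```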
2. AudioRecord Parameter Configuration
AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)
Above is the constructor of AudioRecord. As you can see, the capture parameters are configured mainly through this constructor. Let us explain the meaning of these parameters one by one (best read alongside my previous article):
(1) audioSource
This parameter specifies the input source of the audio capture. Its possible values are defined as constants in the MediaRecorder.AudioSource class. Commonly used values include: DEFAULT, VOICE_RECOGNITION (for speech recognition, equivalent to DEFAULT), MIC (input from the phone microphone), and VOICE_COMMUNICATION (for VoIP applications).
(2) sampleRateInHz
The sampling rate. Note that 44100 Hz is currently the only sampling rate guaranteed to be compatible with all Android phones.
(3) channelConfig
The channel configuration. Its possible values are defined as constants in the AudioFormat class; the commonly used ones are CHANNEL_IN_MONO (single channel) and CHANNEL_IN_STEREO (two channels).
(4) audioFormat
This parameter configures the "sample bit width". Its possible values are also defined as constants in the AudioFormat class; the commonly used ones are ENCODING_PCM_16BIT (16 bit) and ENCODING_PCM_8BIT (8 bit). Note that the former is guaranteed to be compatible with all Android phones.
(5) bufferSizeInBytes
This is the hardest parameter to understand and the most important one. It configures the size of AudioRecord's internal audio buffer, which must not be smaller than the size of one "audio frame". As introduced in the previous article, the size in bytes of one audio frame is calculated as follows:
int size = sample rate × sample time × (bit width / 8) × channel count
The sample time is usually between 2.5 ms and 120 ms, decided by the manufacturer or the specific application. We can infer that the shorter the sample time of each frame, the smaller the resulting latency, although the data will of course be more fragmented.
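As a worked example (the numbers are mine, not from the article): at 44100 Hz, 16-bit, mono, with a 20 ms sample time, one frame is 44100 × 0.02 × (16 / 8) × 1 = 1764 bytes:

```java
public class FrameSize {

    // Bytes in one audio frame:
    // sampleRate * sampleTimeSeconds * (bitWidth / 8) * channelCount
    static int frameSize(int sampleRate, double sampleTimeSec, int bitWidth, int channels) {
        return (int) Math.round(sampleRate * sampleTimeSec * (bitWidth / 8) * channels);
    }

    public static void main(String[] args) {
        System.out.println(frameSize(44100, 0.020, 16, 1));  // 1764
        System.out.println(frameSize(44100, 0.020, 16, 2));  // 3528 for stereo
    }
}
```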
In Android development, the AudioRecord class provides a function that helps you determine this bufferSizeInBytes. Its prototype is as follows:
int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat);
Different manufacturers' underlying implementations differ, but they all essentially derive a frame size from the formula above; the audio buffer size must then be 2 to N times the size of one frame. Interested readers can dig further into the source code.
In practical development, it is strongly recommended to use this function to compute the bufferSizeInBytes to pass in, rather than calculating it manually yourself.
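A typical call looks like this (a sketch of my own; the parameter values are the defaults discussed above, and the error check uses AudioRecord's documented error codes):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;

// Ask the platform for the minimum internal buffer size
// for 44100 Hz, mono, 16-bit PCM capture.
int minBufSize = AudioRecord.getMinBufferSize(
        44100,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

// getMinBufferSize reports unsupported parameter combinations via error codes.
if (minBufSize == AudioRecord.ERROR_BAD_VALUE || minBufSize == AudioRecord.ERROR) {
    throw new IllegalStateException("Capture parameters not supported on this device");
}
```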
3. Audio Capture Thread
Once the AudioRecord object has been created, the capture of audio data can begin. The start/stop of the capture is controlled by the following two functions:
audioRecord.startRecording();
audioRecord.stop();
Once capture has started, the audio data must be fetched promptly in a thread loop, or the system will report an overrun. The interface for reading the data is:
audioRecord.read(byte[] audioData, int offsetInBytes, int sizeInBytes);
4. Sample Code
I have written a simple wrapper around the AudioRecord interface and provide an AudioCapturer class, which can be downloaded from my GitHub: https://github.com/Jhuster/Android/blob/master/Audio/AudioCapturer.java
Here is a copy:
/*
 *  COPYRIGHT NOTICE
 *  Copyright (C) 2016, Jhuster <[email protected]>
 *  https://github.com/Jhuster/Android
 *
 *  @license under the Apache License, Version 2.0
 *
 *  @file    AudioCapturer.java
 *
 *  @version 1.0
 *  @author  Jhuster
 *  @date    2016/03/10
 */
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.util.Log;

public class AudioCapturer {

    private static final String TAG = "AudioCapturer";

    private static final int DEFAULT_SOURCE = MediaRecorder.AudioSource.MIC;
    private static final int DEFAULT_SAMPLE_RATE = 44100;
    private static final int DEFAULT_CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
    private static final int DEFAULT_AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;

    private AudioRecord mAudioRecord;
    private int mMinBufferSize = 0;

    private Thread mCaptureThread;
    private boolean mIsCaptureStarted = false;
    private volatile boolean mIsLoopExit = false;

    private OnAudioFrameCapturedListener mAudioFrameCapturedListener;

    public interface OnAudioFrameCapturedListener {
        public void onAudioFrameCaptured(byte[] audioData);
    }

    public boolean isCaptureStarted() {
        return mIsCaptureStarted;
    }

    public void setOnAudioFrameCapturedListener(OnAudioFrameCapturedListener listener) {
        mAudioFrameCapturedListener = listener;
    }

    public boolean startCapture() {
        return startCapture(DEFAULT_SOURCE, DEFAULT_SAMPLE_RATE,
                DEFAULT_CHANNEL_CONFIG, DEFAULT_AUDIO_FORMAT);
    }

    public boolean startCapture(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat) {

        if (mIsCaptureStarted) {
            Log.e(TAG, "Capture already started !");
            return false;
        }

        mMinBufferSize = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
        if (mMinBufferSize == AudioRecord.ERROR_BAD_VALUE) {
            Log.e(TAG, "Invalid parameter !");
            return false;
        }
        Log.d(TAG, "getMinBufferSize = " + mMinBufferSize + " bytes !");

        mAudioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, mMinBufferSize);
        if (mAudioRecord.getState() == AudioRecord.STATE_UNINITIALIZED) {
            Log.e(TAG, "AudioRecord initialize fail !");
            return false;
        }

        mAudioRecord.startRecording();

        mIsLoopExit = false;
        mCaptureThread = new Thread(new AudioCaptureRunnable());
        mCaptureThread.start();

        mIsCaptureStarted = true;

        Log.d(TAG, "Start audio capture success !");

        return true;
    }

    public void stopCapture() {

        if (!mIsCaptureStarted) {
            return;
        }

        mIsLoopExit = true;
        try {
            mCaptureThread.interrupt();
            // Wait for the capture thread to exit (timeout value reconstructed;
            // it was garbled in the copied listing)
            mCaptureThread.join(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        if (mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            mAudioRecord.stop();
        }

        mAudioRecord.release();

        mIsCaptureStarted = false;
        mAudioFrameCapturedListener = null;

        Log.d(TAG, "Stop audio capture success !");
    }

    private class AudioCaptureRunnable implements Runnable {
        @Override
        public void run() {
            while (!mIsLoopExit) {
                byte[] buffer = new byte[mMinBufferSize];
                int ret = mAudioRecord.read(buffer, 0, mMinBufferSize);
                if (ret == AudioRecord.ERROR_INVALID_OPERATION) {
                    Log.e(TAG, "Error ERROR_INVALID_OPERATION");
                } else if (ret == AudioRecord.ERROR_BAD_VALUE) {
                    Log.e(TAG, "Error ERROR_BAD_VALUE");
                } else {
                    if (mAudioFrameCapturedListener != null) {
                        mAudioFrameCapturedListener.onAudioFrameCaptured(buffer);
                    }
                    Log.d(TAG, "OK, captured " + ret + " bytes !");
                }
            }
        }
    }
}
Before using it, take care to add the following permission:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
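One caveat of mine, not from the original 2016 article: on Android 6.0 (API 23) and later, RECORD_AUDIO is a "dangerous" permission, so declaring it in the manifest is not enough; it must also be granted at runtime. A minimal sketch inside an Activity, using the AndroidX compat helpers, might look like this (REQUEST_RECORD_AUDIO is an arbitrary request code of my choosing):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Inside an Activity: make sure RECORD_AUDIO is granted before starting capture.
private static final int REQUEST_RECORD_AUDIO = 1;  // arbitrary request code

private void ensureAudioPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{ Manifest.permission.RECORD_AUDIO },
                REQUEST_RECORD_AUDIO);
    }
}
```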
5. Summary
There are actually many knowledge points in audio development, and one article cannot describe them all in detail; where this one is not comprehensive, please consult professional materials for a deeper understanding. If anything in the article is unclear, feel free to leave a comment or write to [email protected], or follow my Sina Weibo @Lu_Jun or my public account @Jhuster for the latest articles and news.
This article is from the "Jhuster column" blog; please be sure to keep this source: http://ticktick.blog.51cto.com/823160/1749719
Android Audio Development (2): How to capture a frame of audio