Calling Android's built-in audio recording program
Android has its own audio recording application. You can start it with an Intent whose action is MediaStore.Audio.Media.RECORD_SOUND_ACTION, and then, in the onActivityResult() method, retrieve the Intent data, which is the URI of the recorded audio.
Java code:
package eoe.demo;
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.Toast;
/**
 * This example shows how to call the built-in Android application to record audio.
 * It is very simple: we specify an Intent with the MediaStore.Audio.Media.RECORD_SOUND_ACTION
 * action to start it, and the returned data is the URI of the recorded audio.
 *
 * That approach is not very flexible. For more control, we can use the MediaRecorder class
 * to implement our own audio recording program. MediaRecorder can record audio or video.
 * After creating a MediaRecorder instance, call setAudioSource() and setAudioEncoder() to
 * initialize it. In general, before preparing to record, we also call setOutputFormat() to
 * choose the audio format, and setOutputFile() to specify the file in which to store the
 * recorded content.
 *
 * The call order of these methods is: setAudioSource, setOutputFormat, setAudioEncoder,
 * setOutputFile (a minimal sketch of such a recorder follows the class below).
 *
 * @author Administrator
 */
public class AudioRecordDemo extends Activity {
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.audio_record);
    }
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        // super.onActivityResult(requestCode, resultCode, data);
        // Here we can get the Uri of the recorded audio and play it back; the returned Uri is displayed.
        if (resultCode == RESULT_OK) {
            Uri audioPath = data.getData();
            Toast.makeText(this, audioPath.toString(), Toast.LENGTH_LONG).show();
        }
    }
    public void onClick(View v) {
        int id = v.getId();
        switch (id) {
        case R.id.btn1: // call the built-in Android audio recording application
            Intent intent = new Intent(MediaStore.Audio.Media.RECORD_SOUND_ACTION);
            startActivityForResult(intent, 0);
            break;
        case R.id.btn2:
            // use the MediaRecorder class to implement our own audio recording program
            Intent intent2 = new Intent();
            intent2.setClass(this, MyAudioRecord.class);
            startActivityForResult(intent2, 1);
            break;
        case R.id.btn3:
            // use the AudioRecord class to implement our own audio recording program
            Intent intent3 = new Intent();
            intent3.setClass(this, MyAudioRecord2.class);
            startActivityForResult(intent3, 2);
            break;
        }
    }
}
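The MyAudioRecord and MyAudioRecord2 activities launched by btn2 and btn3 are not included in this article. As a rough illustration of the MediaRecorder approach, here is a minimal sketch (the class name and output path are illustrative, not the article's actual MyAudioRecord, and the RECORD_AUDIO permission must be declared in the manifest) following exactly the call order described in the comment above:

Java code:
import java.io.IOException;

import android.media.MediaRecorder;

public class SimpleAudioRecorder {
    private MediaRecorder recorder;

    // starts recording from the microphone into the given file, using the
    // call order: setAudioSource, setOutputFormat, setAudioEncoder,
    // setOutputFile, prepare, start
    public void startRecording(String outputPath) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputPath); // e.g. a path under the app's files directory
        recorder.prepare();
        recorder.start();
    }

    // stops the recording and releases the underlying resources
    public void stopRecording() {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}

The URI returned in onActivityResult() above could then be played back in the same spirit, for example with MediaPlayer.create(this, audioPath).start().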
Introduction to Android audio
Android has recently been ported to new hardware. Once Android can run on a device, the first thing that comes to mind is getting the audio working: "without sound, even the best show falls flat." This article briefly introduces the Android audio adaptation layer.
Audio devices vary enormously, and Android cannot ship support for every one of them. Instead, Android defines a framework for adapting to the underlying audio device. The adaptation layer is defined in:
hardware/libhardware_legacy/include/hardware_legacy/AudioHardwareInterface.h
The underlying audio device layer must inherit from the AudioStreamOut, AudioStreamIn, AudioHardwareInterface, and other classes defined in that file, and implement the createAudioHardware() function.
Next, let's look at the Android code that creates the audio device. It is located in:
frameworks/base/libs/audioflinger/AudioHardwareInterface.cpp
That file contains the following code:
C++ code:
AudioHardwareInterface* AudioHardwareInterface::create()
{
    /*
     * FIXME: This code needs to instantiate the correct audio device
     * interface. For now - we use compile-time switches.
     */
    AudioHardwareInterface* hw = 0;
    char value[PROPERTY_VALUE_MAX];
#ifdef GENERIC_AUDIO
    hw = new AudioHardwareGeneric();
#else
    // if running in an emulator, use the generic audio driver
    if (property_get("ro.kernel.qemu", value, 0)) {
        LOGD("Running in emulation - using generic audio driver");
        hw = new AudioHardwareGeneric();
    }
    else {
        LOGV("Creating Vendor Specific AudioHardware");
        hw = createAudioHardware();
    }
#endif
    if (hw->initCheck() != NO_ERROR) {
        LOGW("Using stubbed audio hardware. No sound will be produced.");
        delete hw;
        hw = new AudioHardwareStub();
    }
#ifdef WITH_A2DP
    hw = new A2dpAudioInterface(hw);
#endif
#ifdef ENABLE_AUDIO_DUMP
    // record the PCM output to a file
    LOGV("opening PCM dump interface");
    hw = new AudioDumpInterface(hw); // replace interface
#endif
    return hw;
}
As the code shows, if the GENERIC_AUDIO macro is defined, AudioHardwareGeneric is created; on the emulator, AudioHardwareGeneric fails to initialize, so AudioHardwareStub is created instead. Both classes are default audio adaptation layers provided by Android: the emulator ends up with AudioHardwareStub, which produces no sound, while real devices use AudioHardwareGeneric, because GENERIC_AUDIO is set by default.
Generally, we only care about the implementation of AudioHardwareGeneric; who tunes the sound for the emulator is not our concern. Keep in mind that this adaptation layer ships with Android: it can get your audio device running, but it cannot extract the device's best performance, as the description below makes clear. AudioHardwareGeneric is defined in:
frameworks/base/libs/audioflinger/AudioHardwareGeneric.cpp
The above is eoe's introduction to audio usage. If you have any questions, reading the Android source code will help you understand audio better.
Now let's look at an audio visualization example:
Java code:
import ca.uol.aig.fftpack.RealDoubleFFT;

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.AsyncTask;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ImageView;

public class FFTActivity extends Activity implements OnClickListener {
    private Button button;
    private ImageView imageView;
    private int frequency = 8000;
    private int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    private int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
    private RealDoubleFFT transformer;
    private int blockSize = 256;
    private boolean started = false;
    private Canvas canvas;
    private Paint paint;
    private Bitmap bitmap;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.fft);
        button = (Button) findViewById(R.id.fft_button);
        button.setOnClickListener(this);
        imageView = (ImageView) findViewById(R.id.fft_imageView);
        transformer = new RealDoubleFFT(blockSize);
        bitmap = Bitmap.createBitmap(256, 100, Bitmap.Config.ARGB_8888);
        canvas = new Canvas(bitmap);
        paint = new Paint();
        paint.setColor(Color.GREEN);
        imageView.setImageBitmap(bitmap);
    }
    private class RecordAudio extends AsyncTask<Void, double[], Void> {
        @Override
        protected Void doInBackground(Void... params) {
            int bufferSize = AudioRecord.getMinBufferSize(frequency,
                    channelConfiguration, audioEncoding);
            AudioRecord audioRecord = new AudioRecord(
                    MediaRecorder.AudioSource.MIC, frequency,
                    channelConfiguration, audioEncoding, bufferSize);
            short[] buffer = new short[blockSize];
            double[] toTransform = new double[blockSize];
            audioRecord.startRecording();
            while (started) {
                // read the recorded data into the buffer (though "write" might
                // describe this more accurately)
                int bufferResult = audioRecord.read(buffer, 0, blockSize);
                for (int i = 0; i < bufferResult; i++) {
                    toTransform[i] = (double) buffer[i] / Short.MAX_VALUE;
                }
                transformer.ft(toTransform);
                publishProgress(toTransform);
            }
            audioRecord.stop();
            return null;
        }
        @Override
        protected void onProgressUpdate(double[]... values) {
            super.onProgressUpdate(values);
            canvas.drawColor(Color.BLACK);
            for (int i = 0; i < values[0].length; i++) {
                int x = i;
                int downy = (int) (100 - (values[0][i]) * 10);
                int upy = 100;
                canvas.drawLine(x, downy, x, upy, paint);
            }
            imageView.invalidate();
        }
    }
    @Override
    public void onClick(View v) {
        started = true;
        new RecordAudio().execute();
    }
}
The principle behind Android audio visualization is the discrete Fourier transform, but don't worry if your math is rusty: there is open-source Java discrete Fourier transform code available. Download www.netlib.org/fftpack/jfftpack.tgz and drag the source directory inside it into your project's src directory.
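As a quick sanity check of the jfftpack class used above (this is my own standalone sketch, not part of the original example; it assumes the ca.uol.aig.fftpack package built from the archive just mentioned, whose ft() method transforms the array in place), the following program feeds one block of a pure sine wave through RealDoubleFFT and locates the dominant coefficient:

Java code:
import ca.uol.aig.fftpack.RealDoubleFFT;

public class FFTSanityCheck {
    public static void main(String[] args) {
        int blockSize = 256;
        int cycles = 8; // 8 full sine cycles per block
        RealDoubleFFT transformer = new RealDoubleFFT(blockSize);
        double[] signal = new double[blockSize];
        for (int i = 0; i < blockSize; i++) {
            signal[i] = Math.sin(2 * Math.PI * cycles * i / blockSize);
        }
        transformer.ft(signal); // in-place forward transform, as in FFTActivity
        int peak = 0;
        for (int i = 1; i < blockSize; i++) {
            if (Math.abs(signal[i]) > Math.abs(signal[peak])) {
                peak = i;
            }
        }
        // fftpack packs bin k's real and imaginary parts near index 2k,
        // so with 8 cycles the peak should land around index 16
        System.out.println("dominant coefficient at index " + peak);
    }
}

If the peak shows up where expected, the library is wired up correctly, and the FFTActivity above should draw a single tall bar for a pure tone.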