Android provides the AudioRecord and MediaRecorder classes for recording. AudioRecord exposes the raw audio stream read from the microphone, so the stream data can be analyzed while recording; MediaRecorder records the microphone data directly to a file and encodes it (for example as AMR or AAC).
First, add the recording permission to your application's manifest (required whether you use AudioRecord or MediaRecorder):
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
Next, let's look at how to use each of them.
-- AudioRecord --
1. Create a recording sampling class that implements the listener interface:
public class MicSensor implements AudioRecord.OnRecordPositionUpdateListener
2. AudioRecord initialization:
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)
audioSource: the recording source (for example, MediaRecorder.AudioSource.MIC)
sampleRateInHz: the sampling rate in Hz. (Common values are 44100, 22050, 16000, 11025, and 8000 Hz. Some say 44100 Hz is the one rate guaranteed to work on all Android devices; however, it did not work for me on a Samsung i9000, while 8000 Hz proved more reliable in my testing.)
channelConfig: the audio channel configuration. (Here I used AudioFormat.CHANNEL_CONFIGURATION_MONO; on newer API levels this constant is deprecated in favor of AudioFormat.CHANNEL_IN_MONO.)
audioFormat: the encoding of the audio samples. This is independent of the channel configuration: AudioFormat.ENCODING_PCM_16BIT records 16-bit PCM samples and AudioFormat.ENCODING_PCM_8BIT records 8-bit PCM samples; either can be combined with mono or stereo channels.
bufferSizeInBytes: the total size (in bytes) of the buffer that audio data is written to during recording. Each read returns at most this much new data. The static method AudioRecord.getMinBufferSize(int, int, int) returns the minimum buffer size for a given configuration; if bufferSizeInBytes is smaller than getMinBufferSize(), initialization fails.
3. After successful initialization, start recording with audioRecord.startRecording().
4. Write a thread that reads the recorded data into a buffer for analysis:
short[] buffer = new short[bufferSize]; // short matches the 16-bit format; use byte[] for the 8-bit format
int read = audioRecord.read(buffer, 0, bufferSize); // returns the number of samples actually read
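Once read() has filled the buffer, a simple time-domain analysis can already be done on it. The following is a minimal sketch (pure Java, no Android dependencies; the class and method names are my own for illustration) that computes the RMS level of a 16-bit PCM buffer and expresses it in decibels relative to full scale:

```java
// Sketch of per-buffer analysis, assuming 16-bit PCM samples in a short[]
// as filled by AudioRecord.read(). Class/method names are illustrative.
public class PcmLevel {
    /** Root-mean-square amplitude of the buffer's first `count` samples. */
    public static double rms(short[] buffer, int count) {
        double sumSquares = 0;
        for (int i = 0; i < count; i++) {
            sumSquares += (double) buffer[i] * buffer[i];
        }
        return Math.sqrt(sumSquares / count);
    }

    /** RMS level in dB relative to full scale (full scale taken as 32768). */
    public static double rmsDbfs(short[] buffer, int count) {
        return 20.0 * Math.log10(rms(buffer, count) / 32768.0);
    }
}
```

A full-scale square wave, for instance, measures just under 0 dBFS, while quieter buffers give increasingly negative values.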
5. A note on the data in the buffer:
The data read this way is time-domain data; computing on it directly tells you little about frequency content. To analyze frequencies, the time-domain data must first be transformed into the frequency domain.
The frequency domain describes a function or signal in terms of frequency rather than time.
A function or signal can be converted between the time domain and the frequency domain by a pair of mathematical operations. For example, the Fourier transform converts a time-domain signal into the amplitude and phase at each frequency; this spectrum is the frequency-domain representation of the signal, and the inverse Fourier transform converts the spectrum back into the time-domain signal.
A time-domain signal shows how the signal changes over time, while a frequency-domain signal (commonly called the spectrum) shows which frequencies the signal contains and in what proportions. Besides the magnitude at each frequency, the frequency-domain representation also carries the phase at each frequency. With both magnitude and phase, a sinusoid at each frequency can be given the appropriate amplitude and phase; summing them reconstructs the original signal.
The Fourier transform yields an array of complex numbers, i.e., pairs of real and imaginary parts. Taking the sum of the squares of the real and imaginary parts, then the base-10 logarithm, then multiplying by 10 (that is, 10 * log10(re^2 + im^2)) gives a value roughly proportional to the volume in decibels.
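The transform plus the dB formula above can be sketched as follows. This uses a naive O(n^2) DFT purely for illustration (a real app would use an FFT library); the class name is my own:

```java
// Sketch of the frequency-domain step: a naive DFT followed by the
// 10 * log10(re^2 + im^2) power-in-dB formula described above.
public class Spectrum {
    /** Power spectrum in dB for each frequency bin of a real signal. */
    public static double[] powerDb(double[] signal) {
        int n = signal.length;
        double[] db = new double[n];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = -2.0 * Math.PI * k * t / n;
                re += signal[t] * Math.cos(angle);
                im += signal[t] * Math.sin(angle);
            }
            db[k] = 10.0 * Math.log10(re * re + im * im);
        }
        return db;
    }
}
```

For a constant (DC) input, all the energy lands in bin 0, and the remaining bins are near-zero apart from floating-point noise.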
-- MediaRecorder --
Compared with AudioRecord, MediaRecorder provides a much simpler API.
MediaRecorder mediaRecorder = new MediaRecorder();
mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mediaRecorder.setOutputFile("/dev/null"); // discard the encoded output; we only want amplitude readings
mediaRecorder.prepare(); // throws IOException; must be called before start()
mediaRecorder.start();
After configuring MediaRecorder as above, poll mediaRecorder.getMaxAmplitude() from a thread; it returns the maximum absolute amplitude sampled since the previous call. Since this is an amplitude (not a power), it is conventionally converted to decibels as 20 * log10(amplitude / reference).
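The conversion from getMaxAmplitude() to decibels can be sketched as below. The reference value is an assumption on my part: getMaxAmplitude() returns 16-bit sample magnitudes, so 32767 is taken as full scale (0 dB); the class name is illustrative:

```java
// Sketch of converting getMaxAmplitude()'s return value to decibels.
// FULL_SCALE = 32767 is an assumed reference (16-bit sample maximum).
public class AmplitudeDb {
    private static final double FULL_SCALE = 32767.0; // assumed reference

    /** Amplitude in dB relative to full scale; 0 at full scale, negative below. */
    public static double toDb(int maxAmplitude) {
        if (maxAmplitude <= 0) {
            return Double.NEGATIVE_INFINITY; // silence, or no data yet
        }
        return 20.0 * Math.log10(maxAmplitude / FULL_SCALE);
    }
}
```

Guarding the zero case matters in practice: getMaxAmplitude() returns 0 until sampling has started, and log10(0) would otherwise produce meaningless values.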
Finally, note that the microphone hardware in Android phones varies by manufacturer, so the values obtained from the mic can only serve as relative indicators and cannot be taken as accurate absolute measurements. Smart as they are, they are still just phones, and the robot is not a person! Haha...
Incidentally, every phone microphone's sound-to-electrical conversion is protected by circuitry (capacitors) so that it is not easily damaged by loud environmental noise. As a side effect, phones pick up ultrasound and infrasound poorly, which also keeps the microphone from being overwhelmed.