Android real-time voice data capture and playback


The project I worked on most recently captures voice in real time and sends it to the other side, which receives and plays it in real time. The core code of the implementation is recorded below.
Most Android developers know that Android provides the MediaRecorder and MediaPlayer classes for recording and playing audio. Their drawback is that they cannot capture and send audio in real time, so AudioRecord and AudioTrack have to be used instead.
Remember to declare the permissions in the manifest:

<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
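
On Android 6.0 (API 23) and above, RECORD_AUDIO is also a dangerous permission, so the manifest entry alone is not enough and it must be requested at runtime. A minimal sketch using the androidx.core helpers (the request code and method name here are illustrative, not from the original article):

// Request RECORD_AUDIO at runtime (required on API 23+).
// REQUEST_RECORD_AUDIO is an arbitrary request code chosen for this sketch.
private static final int REQUEST_RECORD_AUDIO = 1001;

private void ensureAudioPermission(Activity activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.RECORD_AUDIO},
                REQUEST_RECORD_AUDIO);
    }
}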


First, the core AudioRecord code is described below:

1. First, declare the relevant recording configuration parameters.

private AudioRecord audioRecord;    // recording object
private int frequence = 8000;       // sample rate: 8000 Hz
private int channelInConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;  // sampling channel (mono)
private int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;            // audio encoding (16-bit PCM)
private byte[] buffer = null;       // recording buffer


2. Before recording starts, the AudioRecord object needs to be initialized.

// Get the appropriate buffer size from the configuration defined above
int bufferSize = AudioRecord.getMinBufferSize(frequence,
        channelInConfig, audioEncoding);
// Instantiate AudioRecord
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        frequence, channelInConfig, audioEncoding, bufferSize);
// Allocate the buffer array
buffer = new byte[bufferSize];
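
getMinBufferSize can return an error value if the configuration is not supported, and construction can fail quietly, so it is worth checking both before recording. A small defensive sketch (not part of the original article):

// Sanity-check the configuration and the constructed recorder before use
if (bufferSize == AudioRecord.ERROR || bufferSize == AudioRecord.ERROR_BAD_VALUE) {
    throw new IllegalStateException("Unsupported AudioRecord configuration");
}
if (audioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    throw new IllegalStateException("AudioRecord failed to initialize");
}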

3. Start recording and read data continuously in a loop.

audioRecord.startRecording();   // start recording
isRecording = true;             // set the recording flag to true

// Keep reading while recording
while (isRecording) {
    // The recorded audio is written into buffer; result is the number of bytes read
    int result = audioRecord.read(buffer, 0, buffer.length);
    /* result is the length of the data recorded into buffer (here it is usually 640).
       What remains is to process the buffer: send it out or play it directly, as you like. */
}

// Stop recording when finished
if (audioRecord != null) {
    audioRecord.stop();
}
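
Note that audioRecord.read() blocks, so the loop above should not run on the main thread. The article does not show the threading, but one way to wrap it, assuming the fields declared in step 1, is a plain worker thread shut down by the isRecording flag:

// Run the blocking read loop on a worker thread (assumes the fields from step 1).
private Thread recordThread;

private void startRecording() {
    isRecording = true;
    recordThread = new Thread(new Runnable() {
        @Override
        public void run() {
            audioRecord.startRecording();
            while (isRecording) {
                int result = audioRecord.read(buffer, 0, buffer.length);
                if (result > 0) {
                    // send or play the first `result` bytes of buffer here
                }
            }
            audioRecord.stop();
            audioRecord.release();
        }
    });
    recordThread.start();
}

private void stopRecording() {
    isRecording = false;   // the loop exits and the thread stops and releases the recorder
}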


Second, the AudioTrack implementation is described as follows:

1. Declare the playback-related configuration.

private AudioTrack track = null;    // playback object
private int frequence = 8000;       // sample rate: 8000 Hz
private int channelInConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;  // sampling channel (mono)
private int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;            // audio encoding (16-bit PCM)
private int bufferSize = -1;        // playback buffer size


2. Initialize the AudioTrack object (it only needs to be initialized once and can then be reused).

// Get the buffer size
bufferSize = AudioTrack.getMinBufferSize(frequence, channelInConfig,
        audioEncoding);
// Instantiate AudioTrack
track = new AudioTrack(AudioManager.STREAM_MUSIC, frequence,
        channelInConfig, audioEncoding, bufferSize,
        AudioTrack.MODE_STREAM);


3. Use AudioTrack to play the voice data.

// Write the speech data; offset and len describe the slice of dataArray to play
track.write(dataArray, offset, len);
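
In MODE_STREAM the track must be started with play() before data is written, and write() blocks until the bytes are queued, so playback is usually driven from its own thread fed by the received packets. A minimal sketch, assuming the packets arrive on a BlockingQueue (the queue, the flag, and the method names are assumptions, not from the original article):

// Playback thread: pull received packets from a queue and write them to the track.
private final BlockingQueue<byte[]> packetQueue = new LinkedBlockingQueue<>();
private volatile boolean isPlaying = false;

private void startPlayback() {
    isPlaying = true;
    new Thread(new Runnable() {
        @Override
        public void run() {
            track.play();   // required once in MODE_STREAM before writing
            while (isPlaying) {
                try {
                    byte[] packet = packetQueue.take();      // blocks until a packet arrives
                    track.write(packet, 0, packet.length);   // blocks until the data is queued
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
            track.stop();
            track.release();
        }
    }).start();
}

The network receiver then only has to call packetQueue.offer(packet) for each packet it receives.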


Problem one:

Since this project captures and sends in real time, the packet size has to be taken into account. After testing, sending 160 bytes per packet (that is, splitting one buffer into four packets) gives good playback quality. The processing code is as follows:

// Hand the data out through a listener callback interface
if (audioRecordingCallback != null) {
    // Split one buffer into small packets; MAX_DATA_LENGTH is the maximum bytes per packet
    int offset = result % MAX_DATA_LENGTH > 0 ? 1 : 0;
    for (int i = 0; i < result / MAX_DATA_LENGTH + offset; i++) {
        int length = MAX_DATA_LENGTH;
        if ((i + 1) * MAX_DATA_LENGTH > result) {
            length = result - i * MAX_DATA_LENGTH;
        }
        // Hand this packet to the callback
        audioRecordingCallback.onRecording(buffer, i * MAX_DATA_LENGTH, length);
    }
}
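
The snippet above relies on a listener interface that is never shown in the article. One plausible shape, inferred purely from the call audioRecordingCallback.onRecording(buffer, offset, length):

// Listener that receives each small packet of recorded audio
// (inferred from the call site above, not defined in the original article).
public interface AudioRecordingCallback {
    /**
     * @param data   the full recording buffer
     * @param offset start of this packet within data
     * @param length number of valid bytes in this packet
     */
    void onRecording(byte[] data, int offset, int length);
}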



Problem two:

Sometimes the audio received over the network stutters during playback. To solve this for now, a double-buffer playback mechanism was adopted, and the improvement is obvious. The approach is sketched below.
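
The double-buffer idea is to keep filling one buffer with incoming packets while the other is being written to the AudioTrack, swapping the two when the filling buffer becomes full. A rough sketch of that idea (a reconstruction, not the author's code; bufferSize and track are the fields from the playback section):

// Double-buffer playback sketch: fillBuffer accumulates incoming packets,
// playBuffer is the one currently handed to the AudioTrack; swap when full.
private byte[] fillBuffer = new byte[bufferSize];
private byte[] playBuffer = new byte[bufferSize];
private int filled = 0;

private synchronized void onPacketReceived(byte[] packet, int offset, int length) {
    while (length > 0) {
        int toCopy = Math.min(length, fillBuffer.length - filled);
        System.arraycopy(packet, offset, fillBuffer, filled, toCopy);
        filled += toCopy;
        offset += toCopy;
        length -= toCopy;
        if (filled == fillBuffer.length) {
            // Swap: play the buffer that just filled while the other keeps filling
            byte[] tmp = playBuffer;
            playBuffer = fillBuffer;
            fillBuffer = tmp;
            filled = 0;
            track.write(playBuffer, 0, playBuffer.length);
        }
    }
}

In a real implementation the write would typically be handed off to the playback thread rather than done inside the synchronized method, so the receiver is never blocked by the audio hardware.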

That is the entire content of this article. I hope it helps your study, and I also hope you will support the Yunqi Community.
